Lab: Tekton on OpenShift

OpenShift Deployment using Tekton Pipelines

Tekton is an open source project for configuring and running CI/CD pipelines within an OpenShift/Kubernetes cluster.

Introduction

In this tutorial you'll learn:
    the basic concepts used by Tekton pipelines
    how to create a pipeline to build and deploy a container
    how to run the pipeline, check its status, and troubleshoot problems
Also, check out this very good tutorial by Red Hat.

Prerequisites

Before you start the tutorial you must set up an OpenShift environment with Tekton installed.
Create a route for the OpenShift registry if you have not done so already.
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
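To confirm the route was created, you can print its host name (this command follows the standard OpenShift documentation; the host name in your cluster will differ):

oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'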

Estimated time

1 hour

Steps

1. Tekton pipeline concepts

Tekton provides a set of extensions to Kubernetes, in the form of Custom Resources, for defining pipelines. The following diagram shows the resources used in this tutorial. The arrows depict references from one resource to another resource.
[Diagram: a PipelineRun references a Pipeline, which references Tasks]
The resources are used as follows.
    A PipelineRun defines an execution of a pipeline. It references the Pipeline to run.
    A Pipeline defines the set of Tasks that compose a pipeline.
    A Task defines a set of build steps such as compiling code, running tests, and building and deploying images.
We will go into more detail about each resource during the walkthrough of the example.
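To make these relationships concrete before we build the real thing, here is a minimal, hypothetical sketch. The names say-hello and hello-pipeline are made up for illustration and are not used elsewhere in this tutorial:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello            # hypothetical Task
spec:
  steps:
    - name: hello
      image: alpine
      script: |
        echo "hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline       # hypothetical Pipeline
spec:
  tasks:
    - name: run-say-hello
      taskRef:
        name: say-hello      # references the Task above
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: hello-pipeline-run-
spec:
  pipelineRef:
    name: hello-pipeline     # references the Pipeline above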
Let's create a simple pipeline that
    builds a Docker image from source files and pushes it to your private container registry
    deploys the image to your Kubernetes cluster

2. Clone the repository

You should clone this project to your workstation since you will need to edit some of the yaml files before applying them to your cluster. Check out the beta-update branch after cloning.
git clone https://github.com/odrodrig/tekton-tutorial
cd tekton-tutorial
git checkout beta-update
We will work from the bottom-up, i.e. first we will define the Task resources needed to build and deploy the image, then we'll define the Pipeline resource that references the tasks, and finally we'll create the PipelineRun resource needed to run the pipeline.

3. Create a task to clone the Git repository

The first thing that the pipeline needs is a task to clone the Git repository that the pipeline is building. This is such a common function that you don't need to write this task yourself. Tekton provides a library of reusable tasks called the Tekton catalog. It provides a git-clone task which is described here.
The task is reproduced below so that we can talk about it.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone
spec:
  workspaces:
    - name: output
      description: The git repo will be cloned onto the volume backing this workspace
  params:
    - name: url
      description: git url to clone
      type: string
    - name: revision
      description: git revision to checkout (branch, tag, sha, ref…)
      type: string
      default: master
    - name: submodules
      description: defines if the resource should initialize and fetch the submodules
      type: string
      default: "true"
    - name: depth
      description: performs a shallow clone where only the most recent commit(s) will be fetched
      type: string
      default: "1"
    - name: sslVerify
      description: defines if http.sslVerify should be set to true or false in the global git config
      type: string
      default: "true"
    - name: subdirectory
      description: subdirectory inside the "output" workspace to clone the git repo into
      type: string
      default: "src"
    - name: deleteExisting
      description: clean out the contents of the repo's destination directory (if it already exists) before trying to clone the repo there
      type: string
      default: "false"
  results:
    - name: commit
      description: The precise commit SHA that was fetched by this Task
  steps:
    - name: clone
      image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:latest
      script: |
        CHECKOUT_DIR="$(workspaces.output.path)/$(params.subdirectory)"

        cleandir() {
          # Delete any existing contents of the repo directory if it exists.
          #
          # We don't just "rm -rf $CHECKOUT_DIR" because $CHECKOUT_DIR might be "/"
          # or the root of a mounted volume.
          if [[ -d "$CHECKOUT_DIR" ]] ; then
            # Delete non-hidden files and directories
            rm -rf "$CHECKOUT_DIR"/*
            # Delete files and directories starting with . but excluding ..
            rm -rf "$CHECKOUT_DIR"/.[!.]*
            # Delete files and directories starting with .. plus any other character
            rm -rf "$CHECKOUT_DIR"/..?*
          fi
        }

        if [[ "$(params.deleteExisting)" == "true" ]] ; then
          cleandir
        fi

        /ko-app/git-init \
          -url "$(params.url)" \
          -revision "$(params.revision)" \
          -path "$CHECKOUT_DIR" \
          -sslVerify="$(params.sslVerify)" \
          -submodules="$(params.submodules)" \
          -depth="$(params.depth)"
        cd "$CHECKOUT_DIR"
        RESULT_SHA="$(git rev-parse HEAD | tr -d '\n')"
        EXIT_CODE="$?"
        if [ "$EXIT_CODE" != 0 ]
        then
          exit $EXIT_CODE
        fi
        # Make sure we don't add a trailing newline to the result!
        echo -n "$RESULT_SHA" > $(results.commit.path)
A task can have one or more steps. Each step defines an image to run to perform the function of the step. This task has one step that uses a Tekton-provided container to clone a Git repo.
A task can have parameters. Parameters help to make a task reusable. This task accepts many parameters such as:
    the URL of the Git repository to clone
    the revision to check out
Parameters can have default values provided by the task or the values can be provided by the Pipeline and PipelineRun resources that we'll see later. Steps can reference parameter values by using the syntax $(params.name) where name is the name of the parameter. For example the step uses $(params.url) to reference the url parameter value.
The task requires a workspace where the clone is stored. From the point of view of the task, a workspace provides a file system path where it can read or write data. Steps can reference the path using the syntax $(workspaces.name.path) where name is the name of the workspace. We'll see later how the workspace becomes associated with a storage volume.
Apply the file to your cluster to create the task.
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/v1beta1/git/git-clone.yaml
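With the task created, you could try it on its own before wiring it into a pipeline. The TaskRun below is only a sketch; the claim name my-workspace-pvc is an assumption, and any bound persistent volume claim would do:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: git-clone-run-
spec:
  taskRef:
    name: git-clone
  params:
    - name: url
      value: https://github.com/odrodrig/tekton-tutorial
    - name: revision
      value: beta-update
  workspaces:
    - name: output
      persistentVolumeClaim:
        claimName: my-workspace-pvc   # assumed PVC; use any bound claim

Create it with kubectl create -f (not apply) since, like a PipelineRun, a TaskRun executes only once.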

4. Create a task to build an image and push it to a container registry

The next function that the pipeline needs is a task that builds a docker image and pushes it to a container registry. The catalog provides a kaniko task which does this using Google's kaniko tool. The task is described here.
The task is reproduced below.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko
spec:
  params:
    - name: IMAGE
      description: Name (reference) of the image to build.
    - name: DOCKERFILE
      description: Path to the Dockerfile to build.
      default: ./Dockerfile
    - name: CONTEXT
      description: The build context used by Kaniko.
      default: ./
    - name: EXTRA_ARGS
      default: ""
    - name: BUILDER_IMAGE
      description: The image on which builds will run
      default: gcr.io/kaniko-project/executor:latest
  workspaces:
    - name: source
  results:
    - name: IMAGE-DIGEST
      description: Digest of the image just built.

  steps:
    - name: build-and-push
      workingDir: $(workspaces.source.path)
      image: $(params.BUILDER_IMAGE)
      # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
      # https://github.com/tektoncd/pipeline/pull/706
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
      command:
        - /kaniko/executor
        - $(params.EXTRA_ARGS)
        - --dockerfile=$(params.DOCKERFILE)
        - --context=$(workspaces.source.path)/$(params.CONTEXT) # The user does not need to care the workspace and the source.
        - --destination=$(params.IMAGE)
        - --oci-layout-path=$(workspaces.source.path)/image-digest
      securityContext:
        runAsUser: 0
    - name: write-digest
      workingDir: $(workspaces.source.path)
      image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.11.1
      # output of imagedigestexport [{"key":"digest","value":"sha256:eed29..660","resourceRef":{"name":"myrepo/myimage"}}]
      command: ["/ko-app/imagedigestexporter"]
      args:
        - -images=[{"name":"$(params.IMAGE)","type":"image","url":"$(params.IMAGE)","digest":"","OutputImageDir":"$(workspaces.source.path)/image-digest"}]
        - -terminationMessagePath=image-digested
    - name: digest-to-results
      workingDir: $(workspaces.source.path)
      image: stedolan/jq
      script: |
        cat image-digested | jq -j '.[0].value' | tee /tekton/results/IMAGE-DIGEST
You can see that this task needs a workspace as well. This workspace has the source to build. The pipeline will provide the same workspace that it used for the git-clone task.
The kaniko task also uses a feature called results. A result is a value produced by a task which can then be used as a parameter value to other tasks. This task declares a result named IMAGE-DIGEST which it sets to the digest of the built image. A task sets a result by writing it to a file named /tekton/results/name where name is the name of the result, in this case IMAGE-DIGEST. We will see later how the pipeline uses this result.
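As a stripped-down illustration (not one of this tutorial's files), the pattern looks like this: the task declares a result, a step writes the value to the result's file path, and a consuming pipeline task references it with $(tasks.<pipeline-task-name>.results.<result-name>):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: produce-greeting      # hypothetical task name
spec:
  results:
    - name: greeting
      description: A value passed on to later tasks
  steps:
    - name: write-result
      image: alpine
      script: |
        # $(results.greeting.path) resolves to /tekton/results/greeting;
        # printf avoids adding a trailing newline to the result
        printf "hello" > $(results.greeting.path)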
You may be wondering about how the task authenticates to the image repository for permission to push the image. This will be covered later on in the tutorial.
Apply the file to your cluster to create the task.
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/v1beta1/kaniko/kaniko.yaml

5. Create a task to deploy an image to a Kubernetes cluster

The final function that the pipeline needs is a task that deploys a docker image to a Kubernetes cluster. Below is a Tekton task that does this. You can find this yaml file at tekton/tasks/deploy-using-kubectl.yaml.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-using-kubectl
spec:
  workspaces:
    - name: git-source
      description: The git repo
  params:
    - name: pathToYamlFile
      description: The path to the yaml file to deploy within the git source
    - name: imageUrl
      description: Image name including repository
    - name: imageTag
      description: Image tag
      default: "latest"
    - name: imageDigest
      description: Digest of the image to be used.
  steps:
    - name: update-yaml
      image: alpine
      command: ["sed"]
      args:
        - "-i"
        - "-e"
        - "s;__IMAGE__;$(params.imageUrl):$(params.imageTag);g"
        - "-e"
        - "s;__DIGEST__;$(params.imageDigest);g"
        - "$(workspaces.git-source.path)/$(params.pathToYamlFile)"
    - name: run-kubectl
      image: lachlanevenson/k8s-kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "$(workspaces.git-source.path)/$(params.pathToYamlFile)"
This task has two steps.
    1. The first step runs sed in an Alpine Linux container to update the yaml file used for deployment with the image that was built by the kaniko task. The step requires the yaml file to contain two character strings, __IMAGE__ and __DIGEST__, which are substituted with parameter values (see the sketch after this list).
    2. The second step runs kubectl using Lachlan Evenson's popular k8s-kubectl container image to apply the yaml file to the same cluster where the pipeline is running.
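For the substitution in the first step to work, the deployment yaml in the Git repository must contain the __IMAGE__ and __DIGEST__ placeholders. Below is a simplified sketch of such a file; the actual kubernetes/picalc.yaml in the repository may differ in its details:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: picalc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: picalc
  template:
    metadata:
      labels:
        app: picalc
    spec:
      containers:
        - name: picalc
          # sed replaces __IMAGE__ with imageUrl:imageTag and __DIGEST__ with the image digest
          image: __IMAGE__@__DIGEST__
          ports:
            - containerPort: 8080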
As was the case in the git-clone and kaniko tasks, this task makes use of parameters in order to make the task as reusable as possible. It also needs the workspace to get the deployment yaml file.
You may be wondering about how the task authenticates to the cluster for permission to apply the resource(s) in the yaml file. This will be covered later on in the tutorial.
Apply the file to your cluster to create the task.
kubectl apply -f tekton/tasks/deploy-using-kubectl.yaml

6. Create a pipeline

Below is a Tekton pipeline that runs the tasks we defined above. You can find this yaml file at tekton/pipeline/build-and-deploy-pipeline.yaml.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy-pipeline
spec:
  workspaces:
    - name: git-source
      description: The git repo
  params:
    - name: gitUrl
      description: Git repository url
    - name: pathToContext
      description: The path to the build context, used by Kaniko - within the workspace
      default: src
    - name: pathToYamlFile
      description: The path to the yaml file to deploy within the git source
    - name: imageUrl
      description: Image name including repository
    - name: imageTag
      description: Image tag
      default: "latest"
  tasks:
    - name: clone-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: git-source
      params:
        - name: url
          value: "$(params.gitUrl)"
        - name: subdirectory
          value: "."
        - name: deleteExisting
          value: "true"
    - name: source-to-image
      taskRef:
        name: kaniko
      runAfter:
        - clone-repo
      workspaces:
        - name: source
          workspace: git-source
      params:
        - name: CONTEXT
          value: $(params.pathToContext)
        - name: IMAGE
          value: $(params.imageUrl):$(params.imageTag)
    - name: deploy-to-cluster
      taskRef:
        name: deploy-using-kubectl
      workspaces:
        - name: git-source
          workspace: git-source
      params:
        - name: pathToYamlFile
          value: $(params.pathToYamlFile)
        - name: imageUrl
          value: $(params.imageUrl)
        - name: imageTag
          value: $(params.imageTag)
        - name: imageDigest
          value: $(tasks.source-to-image.results.IMAGE-DIGEST)
A Pipeline resource contains a list of tasks to run. Each pipeline task is assigned a name within the pipeline; here they are clone-repo, source-to-image, and deploy-to-cluster.
The pipeline configures each task via the task's parameters. You can choose whether to expose a task parameter as a pipeline parameter, set the value directly, or let the value default inside the task (if it's an optional parameter). For example this pipeline exposes the CONTEXT parameter from the kaniko task (under a different name, pathToContext) but does not expose the DOCKERFILE parameter and allows it to default inside the task.
This pipeline also shows how to take the result of one task and pass it to another task. We saw earlier that the kaniko task produces a result named IMAGE-DIGEST that holds the digest of the built image. The pipeline passes that value to the deploy-using-kubectl task by using the syntax $(tasks.source-to-image.results.IMAGE-DIGEST) where source-to-image is the name used in the pipeline to run the kaniko task.
By default Tekton assumes that pipeline tasks can be executed concurrently. In this pipeline each pipeline task depends on the previous one so they must be executed sequentially. One way that dependencies between pipeline tasks can be expressed is by using the runAfter key. It specifies that the task must run after the given list of tasks has completed. In this example, the pipeline specifies that the source-to-image pipeline task must run after the clone-repo pipeline task.
The deploy-using-kubectl pipeline task must run after the source-to-image pipeline task but it doesn't need to specify the runAfter key. This is because it references a task result from the source-to-image pipeline task and Tekton is smart enough to figure out that this means it must run after that task.
Apply the file to your cluster to create the pipeline.
kubectl apply -f tekton/pipeline/build-and-deploy-pipeline.yaml
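If you have the Tekton CLI (tkn) installed, which we will also use later for logs, you can optionally confirm that the pipeline and tasks were created:

tkn pipeline describe build-and-deploy-pipeline
tkn task list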

7. Define a service account

Before running the pipeline, we need to set up a service account so that it can access protected resources. The service account ties together a couple of secrets containing credentials for authentication along with RBAC-related resources for permission to create and modify certain Kubernetes resources.
First, run the get routes command below to view the registry endpoint, and copy it into the export command that follows, replacing <Registry route>. Also replace <your IBM ID> with the email address you used for your IBM Cloud account. The last two commands then create secrets that contain credentials for accessing the internal OpenShift image registry.
kubectl get routes -n openshift-image-registry

export REGISTRY=<Registry route>
export EMAIL=<your IBM ID>

kubectl create secret generic ibm-registry-secret --type="kubernetes.io/basic-auth" --from-literal=username=$(oc whoami) --from-literal=password=$(oc whoami -t)

kubectl create secret docker-registry pull-secret --docker-username=$(oc whoami) --docker-password=$(oc whoami -t) --docker-email=$EMAIL --docker-server=$REGISTRY
Annotate the ibm-registry-secret with your registry route so that Tekton uses these credentials when pushing to that registry:

kubectl annotate secret ibm-registry-secret tekton.dev/docker-0=$REGISTRY
This secret will be used to both push and pull images from your registry.
Now you can create the service account using the following yaml. You can find this yaml file at tekton/pipeline-account.yaml.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-account
secrets:
  - name: ibm-registry-secret

---

apiVersion: v1
kind: Secret
metadata:
  name: kube-api-secret
  annotations:
    kubernetes.io/service-account.name: pipeline-account
type: kubernetes.io/service-account-token

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pipeline-role
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "create", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "create", "update", "patch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pipeline-role
subjects:
  - kind: ServiceAccount
    name: pipeline-account
This yaml creates the following Kubernetes resources:
    A ServiceAccount named pipeline-account. The service account references the ibm-registry-secret secret so that the pipeline can authenticate to your private container registry when it pushes and pulls a container image.
    A Secret named kube-api-secret which contains an API credential (generated by Kubernetes) for accessing the Kubernetes API. This allows the pipeline to use kubectl to talk to your cluster.
    A Role named pipeline-role and a RoleBinding named pipeline-role-binding which provide the role-based access control (RBAC) permissions needed for this pipeline to create and modify Kubernetes resources.
Apply the file to your cluster to create the service account and related resources.
kubectl apply -f tekton/pipeline-account.yaml
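You can optionally verify that the service account was created and that it references the registry secret:

kubectl get serviceaccount pipeline-account
kubectl describe serviceaccount pipeline-account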

8. Create a PipelineRun

We've defined reusable Pipeline and Task resources for building and deploying an image. It is now time to look at how one runs the pipeline.
Below is a Tekton PipelineRun resource that runs the pipeline we defined above. You can find this yaml file at tekton/run/picalc-pipeline-run.yaml.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: picalc-pr-
spec:
  pipelineRef:
    name: build-and-deploy-pipeline
  params:
    - name: gitUrl
      value: https://github.com/odrodrig/tekton-tutorial
    - name: pathToYamlFile
      value: kubernetes/picalc.yaml
    - name: imageUrl
      value: <REGISTRY>/default/picalc
    - name: imageTag
      value: "1.0"
  serviceAccountName: pipeline-account
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: picalc-source-pvc
Although this file is small there is a lot going on here. Let's break it down from top to bottom:
    The PipelineRun does not have a fixed name. It uses generateName to generate a name each time it is created. This is because a particular PipelineRun resource executes the pipeline only once. If you want to run the pipeline again, you cannot modify an existing PipelineRun resource to request it to re-run -- you must create a new PipelineRun resource. While you could use name to assign a unique name to your PipelineRun each time you create one, it is much easier to use generateName.
    The Pipeline resource is identified under the pipelineRef key.
    Parameters exposed by the pipeline are set to specific values such as the Git repository to clone, the image to build, and the yaml file to deploy. This example builds a go program that calculates an approximation of pi. The source includes a Dockerfile which runs tests, compiles the code, and builds an image for execution.
    You must edit the picalc-pipeline-run.yaml file to substitute the value of <REGISTRY> with the route of your private container registry.
      To find the value for <REGISTRY>, enter the command oc get routes -n openshift-image-registry.
    The service account named pipeline-account which we created earlier is specified to provide the credentials needed for the pipeline to run successfully.
    The workspace used by the pipeline to clone the Git repository is mapped to a persistent volume claim which is a request for a storage volume.
Before you run the pipeline for the first time, you must create the persistent volume claim for the workspace.
kubectl create -f tekton/picalc-pipeline-pvc.yaml
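The contents of that file are not reproduced here; a claim along these lines would satisfy the pipeline (the requested size is an assumption, and the file in the repository may differ):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: picalc-source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi   # assumed size; the repo's file may request more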
The persistent volume claim asks Kubernetes to provision a storage volume. Since each PipelineRun references the same claim and thus the same volume, the pipeline can only be run serially to avoid conflicting use of the volume. There is functionality coming in Tekton to allow each PipelineRun to create its own persistent volume claim and thus use its own volume.
Check that the persistent volume claim is bound before continuing.
$ kubectl get pvc picalc-source-pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
picalc-source-pvc   Bound    pvc-662946bc-57f2-4ba5-982c-b0fa9db1d065   20Gi       RWO            ibmc-file-bronze   2m

9. Run the pipeline

All the pieces are in place to run the pipeline.
$ kubectl create -f tekton/run/picalc-pipeline-run.yaml
pipelinerun.tekton.dev/picalc-pr-c7hsb created
Note that we're using kubectl create here instead of kubectl apply. As mentioned previously a given PipelineRun resource can run a pipeline only once so you need to create a new one each time you want to run the pipeline. kubectl will respond with the generated name of the PipelineRun resource.
Let's view the status of our pipeline run
kubectl get pipelinerun

NAME              SUCCEEDED   REASON    STARTTIME   COMPLETIONTIME
picalc-pr-dqwqb   Unknown     Running   5s
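While the run is in progress, you can also stream its logs with the tkn CLI; --last selects the most recent PipelineRun and -f follows the logs as they are produced:

tkn pipelinerun logs --last -f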
The pipeline will be successful when SUCCEEDED is True.
NAME              SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
picalc-pr-dqwqb   True        Succeeded   7m26s       4m49s
Check the status of the Kubernetes deployment. It should be ready.
$ kubectl get deploy picalc
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
picalc   1/1     1            1           9m
You can curl the application using its NodePort service. First display the nodes and choose one of the node's external IP addresses. Then display the service to get its NodePort.
$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE     VERSION       INTERNAL-IP    EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
10.221.22.11   Ready    <none>   7d23h   v1.16.8+IKS   10.221.22.11   150.238.236.26   Ubuntu 18.04.4 LTS   4.15.0-96-generic   containerd://1.3.3
10.221.22.49   Ready    <none>   7d23h   v1.16.8+IKS   10.221.22.49   150.238.236.21   Ubuntu 18.04.4 LTS   4.15.0-96-generic   containerd://1.3.3

$ kubectl get svc picalc
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
picalc   NodePort   172.21.199.71   <none>        8080:30925/TCP   9m

$ curl 150.238.236.26:30925?iterations=20000000
3.1415926036

Debugging a failed PipelineRun

Let's take a look at what a PipelineRun failure would look like. Edit the PipelineRun yaml and change the gitUrl parameter to a non-existent Git repository to force a failure. Then create a new PipelineRun and describe it after letting it run for a minute or two.
$ kubectl create -f tekton/run/picalc-pipeline-run.yaml
pipelinerun.tekton.dev/picalc-pr-sk7md created

$ tkn pipelinerun describe picalc-pr-sk7md
Name:              picalc-pr-sk7md
Namespace:         default
Pipeline Ref:      build-and-deploy-pipeline
Service Account:   pipeline-account

Status

STARTED         DURATION     STATUS
2 minutes ago   41 seconds   Failed

Message

TaskRun picalc-pr-sk7md-clone-repo-8gs25 has failed ("step-clone" exited with code 1 (image: "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init@sha256:bee98bfe6807e8f4e0a31b4e786fd1f7f459e653ed1a22b1a25999f33fa9134a"); for logs run: kubectl -n default logs picalc-pr-sk7md-clone-repo-8gs25-pod-v7fg8 -c step-clone)

Resources

No resources

Params

NAME             VALUE
gitUrl           https://github.com/IBM/tekton-tutorial-not-there
pathToYamlFile   kubernetes/picalc.yaml
imageUrl         us.icr.io/gregd/picalc
imageTag         1.0

Taskruns

NAME                               TASK NAME    STARTED         DURATION     STATUS
picalc-pr-sk7md-clone-repo-8gs25   clone-repo   2 minutes ago   41 seconds   Failed
The output tells us that the clone-repo pipeline task failed. The Message also tells us how to get the logs from the pod which was used to run the task:
for logs run: kubectl -n default logs picalc-pr-sk7md-clone-repo-8gs25-pod-v7fg8 -c step-clone
If you run that kubectl logs command you will see that there is a failure trying to fetch the non-existing Git repository.
An even easier way to get the logs from a PipelineRun is to use the tkn CLI.
tkn pipelinerun logs picalc-pr-sk7md -t clone-repo
If you omit the -t flag then the command will get the logs for all pipeline tasks that executed.
You can also get the logs for the last PipelineRun for a particular Pipeline using this command:
tkn pipeline logs build-and-deploy-pipeline -L
You should delete a PipelineRun when you no longer have a need to reference its logs. Deleting the PipelineRun deletes the pods that were used to run the pipeline tasks.
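For example, either of the following removes the failed run from above along with its pods:

kubectl delete pipelinerun picalc-pr-sk7md
tkn pipelinerun delete picalc-pr-sk7md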

10. Check it out in OpenShift Console

OpenShift provides a nice UI for the pipelines and the applications deployed. From the OpenShift console, click Developer in the upper-left drop-down to switch to the Developer view. Then click Topology to view your running app.
OpenShift Topology
Click Pipelines to explore the pipeline you created and its PipelineRuns.
OpenShift Pipelines

Summary

Tekton provides simple, easy-to-learn features for constructing CI/CD pipelines that run on Kubernetes. This tutorial covered the basics to get you started building your own pipelines. There are more features available and many more planned for upcoming releases.