2022-11-30

Replacing Tanzu Community Edition (TCE)

Group of developers programming

Tanzu Community Edition (TCE) is dead. Long live TCE! It was a bundle of free and open source applications (which will always remain free and open source!). This was a community project led by VMware (full disclaimer: I currently work there) built from what is collectively known as Tanzu Kubernetes Grid multicloud (TKGm) and Tanzu Application Platform (TAP). The most common use case for TCE was evaluation or local development.

Users can now get TKGm for free for personal use. A Kubernetes deployment of up to 100 cores is allowed under this trial license. One of the reasons behind this is to make the migration seamless, since TCE was a radically different codebase. For example, you probably do not want to use Red Hat’s AWX (open source) over Ansible Tower (the product) due to the differences in code quality and support. In many cases, you are looking at a painful migration.

For local development on your workstation, the official recommendation is to use Minikube or KinD. I used to use TCE for lab environments and demos (both internal and external). It’s great for getting a quick and dirty Kubernetes cluster up. So what is a person to do in a world without TCE? Let’s dive into what TCE provides from a Kubernetes platform standpoint and then see if we can re-create it.

  • Install TCE.
$ wget https://github.com/vmware-tanzu/community-edition/releases/download/v0.12.1/tce-linux-amd64-v0.12.1.tar.gz
$ tar -xvf tce-linux-amd64-v0.12.1.tar.gz
$ cd tce-linux-amd64-v0.12.1
$ bash install.sh
  • View the default package plugins.
$ tanzu plugin list
  NAME                DESCRIPTION                                                        SCOPE       DISCOVERY      VERSION  STATUS     
  apps                Applications on Kubernetes                                         Standalone  default-local  v0.6.0   installed  
  builder             Build Tanzu components                                             Standalone  default-local  v0.11.4  installed  
  cluster             Kubernetes cluster operations                                      Standalone  default-local  v0.11.4  installed  
  codegen             Tanzu code generation tool                                         Standalone  default-local  v0.11.4  installed  
  conformance         Run Sonobuoy conformance tests against clusters                    Standalone  default-local  v0.12.1  installed  
  diagnostics         Cluster diagnostics                                                Standalone  default-local  v0.12.1  installed  
  kubernetes-release  Kubernetes release operations                                      Standalone  default-local  v0.11.4  installed  
  login               Login to the platform                                              Standalone  default-local  v0.11.4  installed  
  management-cluster  Kubernetes management cluster operations                           Standalone  default-local  v0.11.4  installed  
  package             Tanzu package management                                           Standalone  default-local  v0.11.4  installed  
  pinniped-auth       Pinniped authentication operations (usually not directly invoked)  Standalone  default-local  v0.11.4  installed  
  secret              Tanzu secret management                                            Standalone  default-local  v0.11.4  installed  
  unmanaged-cluster   Deploy and manage single-node, static, Tanzu clusters.             Standalone  default-local  v0.12.1  installed
  • Create a Kubernetes workload cluster.
$ tanzu unmanaged-cluster create lukeshortcloud
  • You will automatically be placed into the new context. Notice how this is a kind cluster. It is using kind for the actual deployment of Kubernetes, with the nodes running as Docker containers (the same approach taken by the Cluster API Provider for Docker, CAPD).
$ kubectl config get-contexts
CURRENT   NAME                  CLUSTER               AUTHINFO              NAMESPACE
*         kind-lukeshortcloud   kind-lukeshortcloud   kind-lukeshortcloud
$ sudo docker ps -a
CONTAINER ID   IMAGE                                           COMMAND                  CREATED         STATUS         PORTS                       NAMES
4d9cdfdd917e   projects.registry.vmware.com/tce/kind:v1.22.7   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes   127.0.0.1:38359->6443/tcp   lukeshortcloud-control-plane
  • What is pre-installed? The only third-party components installed are (1) Calico for CNI and (2) kapp-controller for installing Carvel packages (a more advanced take on package management compared to Helm charts).
$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                                   READY   STATUS     RESTARTS   AGE
kube-system          calico-kube-controllers-75b5f998f9-5tmmf               0/1     Pending    0          61s
kube-system          calico-node-vq5rl                                      0/1     Init:0/3   0          61s
kube-system          coredns-78fcd69978-gsf6r                               0/1     Pending    0          2m19s
kube-system          coredns-78fcd69978-qhrc8                               0/1     Pending    0          2m19s
kube-system          etcd-lukeshortcloud-control-plane                      1/1     Running    0          2m35s
kube-system          kube-apiserver-lukeshortcloud-control-plane            1/1     Running    0          2m33s
kube-system          kube-controller-manager-lukeshortcloud-control-plane   1/1     Running    0          2m34s
kube-system          kube-proxy-ng46j                                       1/1     Running    0          2m20s
kube-system          kube-scheduler-lukeshortcloud-control-plane            1/1     Running    0          2m33s
local-path-storage   local-path-provisioner-74567d47b4-7lcwf                0/1     Pending    0          2m19s
tkg-system           kapp-controller-779d9777dc-9vx47                       1/1     Running    0          2m19s
  • Let’s look at the packages we have available to install (see the example after the list). These are a combination of open source equivalents of what is provided in both the Tanzu Standard repository and the Tanzu Application Platform (TAP) repository.
$ tanzu package available list
  NAME                                                    DISPLAY-NAME                 SHORT-DESCRIPTION                                                                                                                                          LATEST-VERSION  
  app-toolkit.community.tanzu.vmware.com                  App-Toolkit package for TCE  Kubernetes-native toolkit to support application lifecycle                                                                                                 0.2.0           
  cartographer-catalog.community.tanzu.vmware.com         Cartographer Catalog         Reusable Cartographer blueprints                                                                                                                           0.3.0           
  cartographer.community.tanzu.vmware.com                 Cartographer                 Kubernetes native Supply Chain Choreographer.                                                                                                              0.3.0           
  cert-injection-webhook.community.tanzu.vmware.com       cert-injection-webhook       The Cert Injection Webhook injects CA certificates and proxy environment variables into pods                                                               0.1.1           
  cert-manager.community.tanzu.vmware.com                 cert-manager                 Certificate management                                                                                                                                     1.8.0           
  contour.community.tanzu.vmware.com                      contour                      An ingress controller                                                                                                                                      1.20.1          
  external-dns.community.tanzu.vmware.com                 external-dns                 This package provides DNS synchronization functionality.                                                                                                   0.10.0          
  fluent-bit.community.tanzu.vmware.com                   fluent-bit                   Fluent Bit is a fast Log Processor and Forwarder                                                                                                           1.7.5           
  fluxcd-source-controller.community.tanzu.vmware.com     Flux Source Controller       The source-controller is a Kubernetes operator, specialised in artifacts acquisition from external sources such as Git, Helm repositories and S3 buckets.  0.21.5          
  gatekeeper.community.tanzu.vmware.com                   gatekeeper                   policy management                                                                                                                                          3.7.1           
  grafana.community.tanzu.vmware.com                      grafana                      Visualization and analytics software                                                                                                                       7.5.11          
  harbor.community.tanzu.vmware.com                       harbor                       OCI Registry                                                                                                                                               2.4.2           
  helm-controller.fluxcd.community.tanzu.vmware.com       Flux Helm Controller         The Helm Controller is a Kubernetes operator, allowing one to declaratively manage Helm chart releases with Kubernetes manifests.                          0.17.2          
  knative-serving.community.tanzu.vmware.com              knative-serving              Knative Serving builds on Kubernetes to support deploying and serving of applications and functions as serverless containers                               1.0.0           
  kpack-dependencies.community.tanzu.vmware.com           kpack dependencies           Dependencies in the form of Buildpacks and Stacks for the kpack package                                                                                    0.0.27          
  kpack.community.tanzu.vmware.com                        kpack                        kpack builds application source code into OCI compliant images using Cloud Native Buildpacks                                                               0.5.3           
  kustomize-controller.fluxcd.community.tanzu.vmware.com  Flux Kustomize Controller    Kustomize controller is one of the components in GitOps toolkit.                                                                                           0.21.1          
  local-path-storage.community.tanzu.vmware.com           local-path-storage           This package provides local path node storage and primarily supports RWO AccessMode.                                                                       0.0.22          
  multus-cni.community.tanzu.vmware.com                   multus-cni                   This package provides the ability for enabling attaching multiple network interfaces to pods in Kubernetes                                                 3.8.0           
  prometheus.community.tanzu.vmware.com                   prometheus                   A time series database for your metrics                                                                                                                    2.27.0-1        
  velero.community.tanzu.vmware.com                       velero                       Disaster recovery capabilities                                                                                                                             1.8.0           
  whereabouts.community.tanzu.vmware.com                  whereabouts                  A CNI IPAM plugin that assigns IP addresses cluster-wide                                                                                                   0.5.1 
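
As a quick taste of how these are consumed, here is a minimal sketch of installing one of the packages from the list above with the package plugin. The package name and version come straight from the list; the installed-package name is arbitrary, and without a --namespace flag the install lands in the current namespace.

$ tanzu package install cert-manager \
    --package-name cert-manager.community.tanzu.vmware.com \
    --version 1.8.0
$ tanzu package installed list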

Great! Now we know what makes Tanzu tick. Let’s clean up what we have and try to re-create this with KinD.

  • Let’s clean up and replace TCE.
$ tanzu unmanaged-cluster delete lukeshortcloud
  • Install kind:
$ go install sigs.k8s.io/kind@v0.17.0
$ sudo cp ~/go/bin/kind /usr/local/bin/
  • Deploy a local kind cluster:
$ kind create cluster --image kindest/node:v1.22.7 --name lukeshortcloud
  • The context is automatically changed:
$ kubectl config get-contexts
CURRENT   NAME                  CLUSTER               AUTHINFO              NAMESPACE
*         kind-lukeshortcloud   kind-lukeshortcloud   kind-lukeshortcloud
  • How do the Pods compare here? For one thing, the CNI plugin is kindnet instead of Calico. We are also missing kapp-controller.
$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                                   READY   STATUS    RESTARTS   AGE
kube-system          coredns-78fcd69978-h5vg8                               1/1     Running   0          36s
kube-system          coredns-78fcd69978-qqgh6                               1/1     Running   0          36s
kube-system          etcd-lukeshortcloud-control-plane                      1/1     Running   0          51s
kube-system          kindnet-4x2bv                                          1/1     Running   0          37s
kube-system          kube-apiserver-lukeshortcloud-control-plane            1/1     Running   0          51s
kube-system          kube-controller-manager-lukeshortcloud-control-plane   1/1     Running   0          52s
kube-system          kube-proxy-q9wlk                                       1/1     Running   0          37s
kube-system          kube-scheduler-lukeshortcloud-control-plane            1/1     Running   0          49s
local-path-storage   local-path-provisioner-74567d47b4-d7fwg                1/1     Running   0          36s
  • Let’s redeploy with Calico on kind instead.
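The first kind cluster used the same name, so remove it before re-creating it (kind delete cluster is the standard cleanup command):
$ kind delete cluster --name lukeshortcloud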
$ cat <<EOF | kind create cluster --name lukeshortcloud --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16
EOF
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  • The last thing we need for the infrastructure is to install kapp-controller. This is part of the suite of Carvel tools that provide more advanced packaging compared to Helm.
$ kubectl apply -f https://github.com/vmware-tanzu/carvel-kapp-controller/releases/latest/download/release.yml
  • What pods do we have now? Ah, that’s better. It’s just like TCE!
$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                                   READY   STATUS    RESTARTS   AGE
kapp-controller      kapp-controller-57f6bf9f96-w7rnn                       2/2     Running   0          74s
kube-system          calico-kube-controllers-798cc86c47-6kf2d               1/1     Running   0          2m25s
kube-system          calico-node-ghrhk                                      1/1     Running   0          2m25s
kube-system          coredns-565d847f94-mrv9s                               1/1     Running   0          5m46s
kube-system          coredns-565d847f94-wfsdj                               1/1     Running   0          5m46s
kube-system          etcd-lukeshortcloud-control-plane                      1/1     Running   0          6m
kube-system          kube-apiserver-lukeshortcloud-control-plane            1/1     Running   0          6m
kube-system          kube-controller-manager-lukeshortcloud-control-plane   1/1     Running   0          6m
kube-system          kube-proxy-nv2sj                                       1/1     Running   0          5m46s
kube-system          kube-scheduler-lukeshortcloud-control-plane            1/1     Running   0          6m
local-path-storage   local-path-provisioner-684f458cdd-5k27h                1/1     Running   0          5m46s
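
If you would also like the TCE package catalogue back on this plain kind cluster, kapp-controller can pull it in through a PackageRepository object. A minimal sketch, assuming the v0.12.0 TCE repository bundle tag (match it to the TCE version you were using; it can take a minute to reconcile):

$ cat <<EOF | kubectl apply -f -
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: tce-repo
  namespace: default
spec:
  fetch:
    imgpkgBundle:
      # NOTE: this image tag is an assumption; use the tag matching your TCE release.
      image: projects.registry.vmware.com/tce/main:0.12.0
EOF
$ tanzu package available list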

That’s it! We now have a Kubernetes lab environment similar to what TCE provides. What about those fancy packages? To stay aligned with TCE, you can install TAP, which deserves its own separate blog post. Otherwise, Bitnami Helm charts provide a large catalogue of useful applications with no strings attached.
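
For example, pulling something in from the Bitnami catalogue is only a couple of commands (the standard Bitnami repository URL is shown; the release and chart names here are just an illustration):

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-nginx bitnami/nginx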

Does this help you with your Kubernetes adoption journey? Looking for more Kubernetes advice? Feel free to call me, beep me, if you want to reach me!

2021-10-06

CKAD CKA CKS Allowed URLs and Bookmarks

Colorful bookmarks

A few common questions many people (myself included) have around the Kubernetes exams are: which websites are you allowed to access during the exam, and how many web browsers or tabs can you have open? As I prepare for my CKS exam this week, I wanted to make sure I knew what resources we have available to us. You can, and will, be kicked out of your exam if you do not follow their strict guidelines.

You can only have two tabs open: (1) the actual exam and (2) the documentation. No other windows are allowed (web browser or otherwise), which means no split-screen windows. Sorry to all the 4K and/or ultra-wide monitor users out there. Below is the list of documentation websites you can access and bookmark:

CKA and CKAD:

CKS:

To set yourself up for success, organize all of your bookmarks into at least one folder. Sort them by importance. The concepts or references you need the most should be at the top for quick and easy access.

Hopefully these small tips help prepare you for success! Good luck with your exams!

Source: “Important Instructions: CKA and CKAD.” September 29, 2021. Accessed October 6, 2021. https://docs.linuxfoundation.org/tc-docs/certification/faq-cka-ckad-cks

2021-08-03

CKS Exam Expectations

These are my expectations for the kind of questions to expect on the Certified Kubernetes Security Specialist (CKS) exam.

DISCLAIMER: These are predictions largely based on the official CKS Curriculum and courses found on Udemy. I’ve NOT taken any CKS exams (practice, real, or otherwise). I’m writing my thoughts down now before I start taking practice exams and enter NDA territory. These questions will NOT reflect what is exactly on the exam and are only educated guesses for what we should be prepared for.

Practice Exam

Without further ado, here is a practice exam I’ve created that you can use as study material. Do not consider this a comprehensive/complete list of questions. Use it as a starting point.

  • Build a container using a Dockerfile with a multi-stage build.
    • The last stage should use the lightweight “alpine” image.
  • Create a PodSecurityPolicy to force Pods to be read-only.
  • Deploy a WordPress website with a HTTP front-end (nginx) and a database back-end (mysql).
    • Pod configuration:
      • Use the gVisor/runsc container runtime class.
      • Apply an existing AppArmor profile for the NGINX container.
      • Run as a non-root user.
      • Add the “CAP_SYS_ADMIN” capability.
        • (Production tip: avoid using this capability as it grants near-root levels of access).
      • Disable the ServiceAccount mount.
    • Use NetworkPolicies to only allow traffic to/from the relevant ports (80 and 3306) used by the Pods (a minimal sketch appears after this list).
    • Create a self-signed certificate and store it as a Secret object.
    • Create an Ingress object with the TLS certificate.
      • (Lab tip: use “cert-manager” to automate certificate creation).
  • Encrypt all existing Secret objects and ensure new Secrets will also be encrypted (see the EncryptionConfiguration sketch after this list).
  • Use crictl to manually access a container running on a worker node.
  • Upgrade Kubernetes from one minor version to the next: 1.21.0 to 1.22.0.
  • Create a new user account using the ClusterRole, ClusterRoleBinding, and CertificateSigningRequest APIs.
  • Create a new ServiceAccount.
  • Create a new ConstraintTemplate object with a provided OPA policy.
  • Scan the “nginx:1.18.0” image with Trivy.
  • Run CIS benchmarks with ‘kube-bench’.
  • Run a system scan with Falco.
  • Find all non-Kubernetes systemd services and stop them.
  • Enable audit logging in the kube-apiserver.
    • Identify which users are interacting with the API.
  • Enable ImagePolicyWebhook in the kube-apiserver.
    • Allow the container image “nginx:1.18.0” to be used on the Kubernetes cluster.
  • Verify the checksum of the binaries installed with those mentioned in the official Kubernetes change log.
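
For reference, here is a minimal sketch of the kind of NetworkPolicy described above for the HTTP front-end. The policy name and label are placeholders (not exam values); a similar policy would cover the database on port 3306.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-wordpress-http
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 80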
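
And a minimal sketch of an EncryptionConfiguration for the Secrets task. The key is a placeholder: generate one with head -c 32 /dev/urandom | base64, point the kube-apiserver at the file with --encryption-provider-config, and then re-create existing Secrets (for example, kubectl get secrets -A -o json | kubectl replace -f -) so they are re-written encrypted.

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}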

If you can do everything from above, you’re most likely in a good spot to get a passing score.

Parting Words

The CKS builds on top of concepts from the Certified Kubernetes Administrator (CKA) exam. That also makes it the most challenging exam the Cloud Native Computing Foundation (CNCF) has made to date. Only take the CKS if you’ve already passed the CKA.

Be sure to also check out my related guide on Tips to Help You Pass Any Kubernetes Exam.

2021-05-18

Tips to Help You Pass Any Kubernetes Exam

Person nervously biting pencil while studying

Trying to get your CKA, CKAD, and/or CKS certification for fun, profit, and/or glory? Here are the top tips for success based on my own experiences!

Exams Order/Priority

All of the exams are very similar and build off one another. Assuming your goal is to obtain all of the certifications, tackle them in the order below. This provides a clear path of adding on additional APIs and tools one exam at a time. That means that, for example, taking the CKA before the CKAD would probably be a harder experience.

  1. Certified Kubernetes Application Developer (CKAD)
  2. Certified Kubernetes Administrator (CKA)
  3. Certified Kubernetes Security Specialist (CKS)

Schedule a Date

“If you talk about it, it’s a dream. If you envision it, it’s possible. But if you schedule it, it’s real.” - Anthony Robbins. This is hands-down one of my favorite quotes, which I discovered from a mentee/mentor of mine.

I found that I was ready for the CKA exam after a few months of studying. My biggest downfall was never scheduling the exam in the first place. I spent extra time over-preparing and always trying to dig deeper. After all, the end goal wasn’t to get a seemingly worthless piece of paper. It’s to build up your skills to help you with your job! Just make sure you have a clear goal to work towards, or you won’t have anything to show for it!

Get the certification first and then your time will be freed up to go that extra mile of learning more of the related and advanced Kubernetes topics. If nothing else, this will unlock new job opportunities sooner. Recruiters will be knocking on your door!

Study Time

If you want to become an expert in anything, you need to devote time every day. No exceptions. I aim for 1 hour a day. Even if I can’t commit a full hour, I try to spend a minimum of 30 minutes. On “cheat days” I’ll just watch tutorial videos online and not do hands-on. However, the hands-on experience is the most valuable.

Study Resources

Great, you want to dedicate time every day to learn! Now what?

For learning the primary exam materials, there are no better courses than the ones offered by KodeKloud. Use the free Katacoda Kubernetes Playground to test the things you have learned. A couple of weeks before your exam, take a Killer Shell (killer.sh) practice exam. It’s designed to be harder than the actual exam to prepare you better. From that, identify areas of improvement, continue to review those sections in KodeKloud, and run through the related end-of-chapter practice tests.

Bookmarks

Yes, you’re allowed to use bookmarks during the exam! More specifically, you’re allowed two tabs: one for the exam and the second for searching official Kubernetes resources. From the official Kubernetes websites, I can assure you that the Kubernetes Documentation is all you need. There are lots of great real-world example manifests and hints to be found. Identify your weak areas and bookmark related documentation pages. Consider keeping all of your bookmarks in a single bookmark folder that is clearly visible and easily accessible.

The kubectl Cheat Sheet and kubectl Command Reference are great examples of hidden gem bookmarks.

P.S. - You are NOT allowed to have one of your tabs open in a separate window. This means that if you have a large 4K or ultrawide monitor, you won’t be able to take full advantage of the extra screen real estate.

Manifests

Most Kubernetes professionals will tell you to never use kubectl run. I’m here to tell you that you should use it and use it all the time. The catch is, however, that you should never create objects with it. Instead, use it to create YAML manifests. For the exam, you’ll have a copy of it that you can examine later. For work/play, you’ll have a manifest you can git commit for the invaluable version control/history.

Say hello to your new friend: --dry-run=client -o yaml. This’ll output an example YAML manifest. It lays down the foundation so you can then tweak it based on the object you need.

  • kubectl run nginx --image nginx --dry-run=client -o yaml >> pod-nginx.yaml
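
For reference, the generated manifest looks roughly like this (the exact output varies slightly by kubectl version):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}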

Pods

Creating Pods on the CLI uses a special command: kubectl run. This is intentionally meant to be similar to docker run. You don’t even need to memorize the options. Most times, you can get away with grabbing the relevant help information:

  • kubectl run --help | grep run

Everything Else

Most of the common APIs can be created via the CLI. View the ones you can create (look for “Available Commands”):

  • kubectl create --help

Again, do a simple grep to help find pre-made examples of arguments that can be used:

  • kubectl create <API> --help | grep create

kubectl

Ah, look what we have here. kubectl. Get used to it, my friends. You’ll be using this a lot. Here’s a brain dump of useful, and not very well-known, commands for the exam and also in the real-world:

  • Shortcuts

    • alias k=kubectl # Save time by not having to type 6 extra letters! This, however, negates bash completion. Pick your kryptonite.
    • export d="--dry-run=client -o yaml" # Set a shell variable to save time when creating YAML manifests. Add $d to the end of the kubectl [run|create] command.
  • API

    • kubectl api-resources # View all of the APIs available.
    • kubectl api-resources --namespaced # View all of the APIs that support being namespaced.
    • kubectl api-versions # View all of the API versions available.
    • kubectl explain <API> --recursive # View all of the available options for a specific API.
    • kubectl explain <API>.spec # View the “spec[ification]” field of a specific API.
  • Create, Read, Update, and Delete (CRUD)

    • kubectl create <API> --help | grep create # View examples of how to create objects.
    • kubectl get <API> --show-labels # View the labels for each object.
    • kubectl get <API> -w # Watch for updates to a particular resource
    • kubectl top pod --containers # View the resource consumption of all containers.
    • EDITOR=nano kubectl edit --record <OBJECT> # Update the manifest of a running object using the specified $EDITOR variable. Record the previous object manifest as a single-line annotation.
    • kubectl delete <API> <OBJECT> --wait=0 # Do not wait for the object deletion to finish. Return back to the shell prompt immediately.
  • Cluster

    • kubectl get events -A --sort-by=.metadata.creationTimestamp # Get all (most, actually, as -A only applies to a handful of APIs) events from a cluster ordered by the newest first.
    • kubectl describe node <NODE> | grep -i cidr # Find the Pod network CIDR allocated to a specific Node.
    • kubectl cluster-info dump | grep -- --service # Find the cluster-wide Service CIDR
  • Role-Based Access Control (RBAC)

    • kubectl create sa ...; kubectl create role ...; kubectl create rolebinding ... # Create a ServiceAccount, a Role which defines which permissions are granted, and activate the ServiceAccount by assigning the Role via a RoleBinding (or ClusterRoleBinding). A full worked example appears after this list.
    • kubectl auth can-i --list # View all of the permissions the current user has.
    • kubectl auth can-i <ACTION> <API> --as system:serviceaccount:<NAMESPACE>:<SERVICEACCOUNT_NAME> # Verify that a ServiceAccount can perform the specified action.
  • kubeadm

    • kubeadm certs check-expiration # View the TLS certificates expirations for Kubernetes services.
    • kubeadm certs renew <KUBERNETES_SERVICE> # Renew a TLS certificate.
    • kubeadm token create --print-join-command # Print the command to join a Worker Node. Copy that command and run it on the new Worker Node to add it to the Kubernetes cluster.
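
To make the RBAC flow above concrete, here is one possible end-to-end run (the names and the default namespace are made up for illustration):

  • kubectl create serviceaccount app-reader
  • kubectl create role pod-reader --verb=get,list,watch --resource=pods
  • kubectl create rolebinding read-pods --role=pod-reader --serviceaccount=default:app-reader
  • kubectl auth can-i list pods --as system:serviceaccount:default:app-reader # Should print "yes".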

Retake

Worst case scenario, you fail your exam. That’s okay because you get a free retake! Better yet, you now know what to expect. Think back to the questions you didn’t understand and/or took too much time on. Build up your skills from there. Get better at solving those kinds of scenarios and solving them quickly.

What Next?

You have your certification! Congratulations! Now what?

Find your niche. Heck, you can even use your shiny new certification as an inspiration for your starting point. Here are a few high-level examples (there are many ways to tackle each):

  • Administration = Familiarize yourself with a few different deployment tools. Customize the Kubernetes services to expose (or even disable) different features.
  • Application Developer = Find frameworks to help build and deploy cloud-native applications automatically.
  • Security Specialist = Brush up on more advanced RBAC topics and how to lock-down clusters in such a way that any government agency would be proud.

Get a promotion, a new job, or even create a start-up! Sky’s the limit for what you want to do with your new skills!

P.S. - VMware, where I currently work, is always hiring for our Kubernetes teams. If you’re looking for a job, let me know! We’re especially interested in hiring women and people of different races and ethnicities. We’ve got amazingly ambitious diversity goals! Don’t believe me? Read more about our goals here.

Closing Thoughts

Even if only one piece of advice helps you on your journey then I’m happy to have written this article. If you truly want a Kubernetes certification, you’ll get it.

Reach out to me on Twitter @LukeShortCloud (previously @ekultails) or on LinkedIn if you need any help with your Kubernetes journey. I’d be glad to provide guidance!

  • Luke Short

2021-04-18

Free Ways to Use and Learn Kubernetes

Person reading on top of a pile of coins

You don’t even have to spend a dime!

While doing additional research for this blog post, I came across a very cool project. Let me introduce you to Free Kubernetes. This git project contains a variety of ways you can run your own Kubernetes cluster for free. Most of these use public clouds so you don’t even need to worry about requiring any hardware.

For my own learning and growth, I’ve found these to be great tools and hope you do as well!

  • Cloud:
    • Katacoda Playground = This is all you need. You get a single Control Plane Node and a single Worker Node for 1 hour.
      • Once you get good at Kubernetes, you can even use this to create your own training. Companies such as KodeKloud built Kubernetes courses using it.
    • Civo = Civo uses k3s in the back-end. You can get a highly-available cluster in literally a few minutes. No exaggeration! Their Kubernetes service used to offer $70/month for free for its beta program. That’s now ending and they’re instead offering a one-time $250 credit.
      • Using “Small” Nodes, you could get 8 months with three Nodes or 25 months with a single Node!
  • Local workstation/server:
    • Minikube/Minishift = Honestly, I haven’t used these much because they’re so limited out-of-the-box. That being said, this is easy and it works. They provide a golden virtual machine image of a working all-in-one (Control Plane + Worker) Kubernetes Node.
    • k3s = The ultimate home lab tool for your Raspberry Pi cluster. A single binary, one minute install, and one minute upgrades. What’s not to love? (See the one-liner after this list.)
    • kubeadm = The official tool for installing Kubernetes. This is important to know! Even many third-party Kubernetes installers, such as VMware’s Tanzu, are built on top of this. It’s also featured in the Certified Kubernetes Administrator (CKA) exam. Spin up a virtual machine with Vagrant and hack around with the tool.
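
As a taste of how small the k3s install mentioned above really is, this is the whole thing (piping a remote script to a shell is fine for a throwaway lab; review the script first if that makes you uneasy):

  • curl -sfL https://get.k3s.io | sh -
  • sudo k3s kubectl get nodes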

Bonus:

  • AWS Lambda = I’m not 100% sure on what’s used in the back-end. I would guess Kubernetes, and honestly it doesn’t matter. This concept of “function-as-a-service” or “serverless” is a huge topic in Kubernetes and I’d argue it’s the next big thing. You can learn the concepts of it with the AWS Free Tier and later apply it to Kubernetes via the use of Knative or OpenFaaS. You get 1 million requests or 3.2 million compute seconds, whichever comes first.

Have any questions about setting up a lab? Reach out to me at @ekultails and I’ll see if I can help! - Luke Short

2021-04-08

How You Can Become a Kubernetes Expert

Ocean shipyard with containers

Here’s how.

  • YAML = What do Kubernetes, Ansible, and GitHub Actions have in common? YAML! It’s important. My colleagues and I constantly joke about how we are YAML Engineers(™). Don’t be intimidated! It’s so popular because it was made to be easy and human-readable.
  • Containers = Do you know and understand what containers are and how they work?
    • No? Start here before you go any further. Kubernetes is an automation and orchestration platform of sorts. You need to understand the underlying technology first before you start adding on extra layers of features and complexity. Check out the KodeKloud Docker for Beginners course.
    • Yes? Great! Moving on.
  • Vanilla Kubernetes = A quick note! This is a mistake that I’ve made and many others make. Do NOT start by studying a vendor-specific implementation such as OpenShift. They are extremely biased and have features that are not portable across other Kubernetes clusters. OpenShift, in particular, is overly complex. It tries to give you everything including the kitchen sink. If you learn Kubernetes then, by extension, you’ll learn OpenShift. The same cannot be said for the other way around.
  • Kubernetes basics = Learn the basics of Kubernetes. VMware has free training on KubeAcademy that does a great job of going through those fundamentals.
  • APIs = Don’t sweat how to install Kubernetes or how it works. Focus on using the APIs, which is as simple as writing some YAML manifests (a small example follows this list). I’ll be writing future tutorials on my blog to help out here.
  • Community group = Okay, now you know enough to be dangerous. Nice! Find fellow friends, coworkers, even strangers! Just anyone who wants to learn Kubernetes and gets excited by it! Having others around to help motivate you is more powerful than anything else.
  • Find use cases = Figure out how you could use Kubernetes at home or for work. Try to move existing apps you use into containers (if not already) and then into Kubernetes. Here are some example use-cases for real applications I run on my home Kubernetes cluster:
    • Application development = I’m a cloud native developer at heart so when I’m testing my apps, I test them as containers on my Chromebook and then push them to Kubernetes.
    • CI/CD
    • DNS
    • Gitea
    • Blog (staging area)
    • CIFS server
    • NFS server
    • Game servers (ex., Minecraft, Halo Custom Edition) = These aren’t very cloud-native but they’re fun to get working!
  • Certifications = Studying for the Kubernetes certifications is a great goal to set for yourself. You’ll learn a lot and have the credentials to help you get that shiny new Kubernetes job! There are currently 3 different certs: the Administrator, Application Developer, and (recently released) Security. Pick your own adventure!
  • Going beyond = The final step is to go, as Buzz Lightyear from Toy Story would say, “to infinity and beyond!” There are plenty of great resources to help you learn about the extra features you can add on top of Kubernetes. Use this as an opportunity to find your niche(s).
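
To give a flavour of the YAML manifests mentioned above, here is about the smallest useful example: a Deployment that runs two copies of nginx (the names and image tag are arbitrary). Save it to a file and run kubectl apply -f <file>:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21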

My final thoughts are this: Kubernetes is a lot easier than you think. You can do anything if you put your mind to it! I wish you the best with your Kubernetes adventure! - Luke Short

2021-01-31

The State of Kubernetes According to One of Its Creators

Containers and Clouds

Recently, there was a webinar Q&A session with Joe Beda. He’s one of the creators of Kubernetes and one of the authors of “Kubernetes Up & Running”. Here are his thoughts on the past, present, and future of Kubernetes.

Note: this information has been paraphrased.

Joe Beda’s thoughts:

  • The #1 goal for Kubernetes is to become boring and “just work”. It’s almost there!
  • How can someone learn Kubernetes?
    • Start small and focus on vanilla Kubernetes. Don’t start off trying to learn a very vendor-specific product like OpenShift. If you learn pure Kubernetes, you can seamlessly migrate between different clouds.
  • Was YAML supposed to be the primary way to interact with and use Kubernetes?
    • No. The long-term goal was for YAML to be replaced, but it never was. There are a handful of tools out there that make applications on Kubernetes easier to manage. The first evolution of the interaction with the API was Helm. ytt is now the second evolution.
  • What are your thoughts on Platform-as-a-Service (PaaS) offerings such as “low-code” and “no-code”?
    • They are too restrictive and won’t provide developers all the features they need. Eventually, they will need to migrate to Kubernetes where they will have more power, control, and features.
  • What is the future of PaaS?
    • Serverless and other frameworks built on top of Kubernetes will replace the traditional PaaS services we know today.
  • What is the future of Infrastructure-as-a-Service (IaaS)?
    • Managing Kubernetes on baremetal is difficult. The best way to manage it is with programmable infrastructure (via IaaS APIs). Platforms such as Amazon EC2 and OpenStack Nova will still be needed for that.
  • What are the top problems for developers using Kubernetes today?
    • Networking. The Service and Ingress APIs have created a lot of confusion for developers on how to expose an application to the Internet. A new unified networking API is being worked on in the upstream Kubernetes community to help with this.
  • How can developers closely replicate their production (prod) environment as a development (dev) environment?
    • The drift between prod and dev leads to inconsistencies, which leads to bugs. Things that need to be aligned as much as possible: node count, networking, storage, and external (non-Kubernetes) services integration. To at least address the node count, the Kind and Cluster API projects provide a seamless way to spin up Kubernetes clusters of any size instantly using Docker containers. There is no real solution for everything else.
  • What is Istio?
    • Features: secure communication between Pods, dynamic routing, observability, and service mesh configurability.
  • Why is Istio not part of Kubernetes?
    • Google wants full control over the project. There are also many other Kubernetes plugins in the open source community that solve similar problems, such as Open Service Mesh and Linkerd.
  • What book are you reading right now?
    • Range. It explains how generalists can succeed at ill-defined problems. It’s about identifying larger patterns/issues and how different things can come together to solve those common issues.
  • If people aren’t using a specific Kubernetes API or a functionality of it, maybe we did it wrong. We need to rethink it and get community feedback on how to make all APIs useful.
  • What is the future of Kubernetes?
    • GPUs (A.I./M.L.) are gaining a lot of traction right now. However, Kubernetes does not solve every problem. What it did great was embrace the idea of declarative infrastructure state. In 10-20 years, there may be a similar declarative tool and it may not be Kubernetes.

My thoughts and biggest takeaways:

  • It’s fascinating that YAML was not meant to be the long-term solution to using Kubernetes. Helm was an amazing leap forward for deploying applications and has been a joy for me to work with over the years. I’m now starting to get my hands dirty with ytt on a few projects I’m working on so we’ll see how that compares.
  • I love the attitude of if the end-users aren’t using the APIs then it was probably designed or implemented wrong. It’s important to fail fast, gather feedback, and then iterate again.
  • For development environments, I love Kind. It’s so easy to get started with. It’s a much better experience than trying to lab OpenShift (outside of the limited Minishift environment).
  • People always wrongly assume projects like OpenStack are dead. They aren’t. They’re just boring and work. That’s where Kubernetes is heading. Arguably, Kubernetes reached that boring state about two years ago. IaaS is important and it’s not a problem Kubernetes tries to solve. Kubernetes needs IaaS to prosper.
  • The future of technology, in my eyes, has always been serverless and machine learning. Joe seems to echo these thoughts which makes me feel even more confident in my choice of career. At work, I’m focusing on making those my niche so that I may help customers adopt these concepts and get the most out of Kubernetes.