<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Kubernetes on Bill Glover</title>
    <link>https://billglover.me/tags/kubernetes/</link>
    <description>Recent content in Kubernetes on Bill Glover</description>
    <generator>Hugo</generator>
    <language>en-gb</language>
    <managingEditor>hello@bill.dev (Bill)</managingEditor>
    <webMaster>hello@bill.dev (Bill)</webMaster>
    <lastBuildDate>Sun, 28 Sep 2025 22:27:35 +0100</lastBuildDate>
    <atom:link href="https://billglover.me/tags/kubernetes/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Tanzu Application Platform, Pinniped and Auth0</title>
      <link>https://billglover.me/2022/11/04/tanzu-application-platform-pinniped-and-auth0/</link>
      <pubDate>Fri, 04 Nov 2022 09:18:10 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/2022/11/04/tanzu-application-platform-pinniped-and-auth0/</guid>
      <description>&lt;p&gt;This post documents adding authentication to the &lt;a href=&#34;https://tanzu.vmware.com/application-platform&#34;&gt;Tanzu Application Platform&lt;/a&gt; (TAP) using Auth0 and Pinniped.&lt;/p&gt;&#xA;&lt;p&gt;I often take shortcuts with authentication when demonstrating technology. In part this is because setting up authentication and authorization can be difficult. That said, there are benefits to the flexibility of an administrative user. Like many who work with short-lived Kubernetes clusters, my default is cluster-admin.&lt;/p&gt;&#xA;&lt;p&gt;One downside to using cluster-admin is that I&amp;rsquo;m unable to explore RBAC capabilities. The capabilities and limitations of RBAC influence the experience of a product. The Tanzu Application Platform (TAP) is no exception. If real-world use sees users mapped to one of the &lt;a href=&#34;https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.3/tap/GUID-authn-authz-overview.html&#34;&gt;four default user roles&lt;/a&gt; that come with TAP, why do I use cluster-admin? I needed an instance of TAP configured with an identity provider that allowed me to map users to roles.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Backup and Restore AKS Clusters with Tanzu (Azure File storage)</title>
      <link>https://billglover.me/videos/20221005_1125_backup-restore-azure-file-volumes/</link>
      <pubDate>Wed, 05 Oct 2022 11:25:00 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/videos/20221005_1125_backup-restore-azure-file-volumes/</guid>
      <description>&lt;div style=&#34;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;&#34;&gt;&#xA;      &lt;iframe allow=&#34;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen&#34; loading=&#34;eager&#34; referrerpolicy=&#34;strict-origin-when-cross-origin&#34; src=&#34;https://www.youtube.com/embed/s7nr1S7jEh8?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0&#34; style=&#34;position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;&#34; title=&#34;YouTube video&#34;&gt;&lt;/iframe&gt;&#xA;    &lt;/div&gt;&#xA;&#xA;&lt;h2 id=&#34;context&#34;&gt;Context&lt;/h2&gt;&#xA;&lt;p&gt;A customer asked how I might backup and restore workloads running on AKS using Azure File Premium storage. Azure Files mount an SMB share backed by an Azure storage account to pods running on AKS. For more details on storage options on AKS see the &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/aks/concepts-storage&#34; title=&#34;Storage options for applications in Azure Kubernetes Service (AKS)&#34;&gt;Azure documentation&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Migrate between Kubernetes clusters with Tanzu (Rancher to Azure)</title>
      <link>https://billglover.me/videos/20221005_1120_migrate-workloads-between-clusters-rancher-to-azure/</link>
      <pubDate>Wed, 05 Oct 2022 11:20:00 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/videos/20221005_1120_migrate-workloads-between-clusters-rancher-to-azure/</guid>
      <description>&lt;div style=&#34;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;&#34;&gt;&#xA;      &lt;iframe allow=&#34;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen&#34; loading=&#34;eager&#34; referrerpolicy=&#34;strict-origin-when-cross-origin&#34; src=&#34;https://www.youtube.com/embed/NdIjPf5SMO8?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0&#34; style=&#34;position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;&#34; title=&#34;YouTube video&#34;&gt;&lt;/iframe&gt;&#xA;    &lt;/div&gt;&#xA;&#xA;&lt;h2 id=&#34;context&#34;&gt;Context&lt;/h2&gt;&#xA;&lt;p&gt;A customer asked how I might approach the challenge of migrating applications running on one Kubernetes cluster and restore them onto a new cluster running a different Kubernetes distribution. Assuming you don&amp;rsquo;t need to migrate the applications under load, backup and restore is a reasonable option.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Migrate between Kubernetes clusters with Tanzu (Rancher to Tanzu)</title>
      <link>https://billglover.me/videos/20221005_1115_migrate-workloads-between-clusters-rancher-to-tanzu/</link>
      <pubDate>Wed, 05 Oct 2022 11:15:00 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/videos/20221005_1115_migrate-workloads-between-clusters-rancher-to-tanzu/</guid>
      <description>&lt;div style=&#34;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;&#34;&gt;&#xA;      &lt;iframe allow=&#34;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen&#34; loading=&#34;eager&#34; referrerpolicy=&#34;strict-origin-when-cross-origin&#34; src=&#34;https://www.youtube.com/embed/ctsvte_yXeA?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0&#34; style=&#34;position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;&#34; title=&#34;YouTube video&#34;&gt;&lt;/iframe&gt;&#xA;    &lt;/div&gt;&#xA;&#xA;&lt;h2 id=&#34;context&#34;&gt;Context&lt;/h2&gt;&#xA;&lt;p&gt;A customer asked how I might approach the challenge of migrating applications running on one Kubernetes cluster and restore them onto a new cluster running a different Kubernetes distribution. Assuming you don&amp;rsquo;t need to migrate the applications under load, backup and restore is a reasonable option.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Use the Kubernetes Downward API to set GOMEMLIMIT</title>
      <link>https://billglover.me/2022/09/14/use-the-kubernetes-downwards-api-to-set-gomemlimit/</link>
      <pubDate>Wed, 14 Sep 2022 21:10:06 +0100</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/2022/09/14/use-the-kubernetes-downwards-api-to-set-gomemlimit/</guid>
      <description>&lt;p&gt;Go 1.19 introduced the ability to tune the way the garbage collector works by setting a soft memory limit. One recommendation is to use this new memory limit when deploying in a container environment. This post looks at using the Kubernetes Downward API to set this soft limit.&lt;/p&gt;&#xA;&lt;h2 id=&#34;do-i-need-a-soft-memory-limit&#34;&gt;Do I Need a Soft Memory Limit?&lt;/h2&gt;&#xA;&lt;blockquote&gt;&#xA;&lt;p&gt;Do take advantage of the memory limit when the execution environment of your Go program is entirely within your control, and the Go program is the only program with access to some set of resources (i.e. some kind of memory reservation, like a container memory limit).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Copy Files to and from a Container</title>
      <link>https://billglover.me/notes/kubectl-cp/</link>
      <pubDate>Tue, 24 May 2022 09:56:32 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/notes/kubectl-cp/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; I needed to copy some database files into a container running on Kubernetes without modifying the image or restarting the parent Pod.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; The Kubernetes CLI includes a sub-command for copying files into and out of a running container: &lt;code&gt;kubectl cp /tmp/foo &amp;lt;some-pod&amp;gt;:/tmp/bar&lt;/code&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;&#xA;&lt;p&gt;I&amp;rsquo;d never found the need to copy files into a running container without issuing an updated image. This is somewhat of an anti-pattern as modifications to containerised filesystems that aren&amp;rsquo;t mounted externally are lost when the container process terminates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Update Trivy Database in Harbor</title>
      <link>https://billglover.me/notes/harbor-trivy-db-update/</link>
      <pubDate>Thu, 19 May 2022 14:56:00 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/notes/harbor-trivy-db-update/</guid>
      <description>&lt;p&gt;I recently deployed Harbor and Trivy with automatic updating disabled. I hadn&amp;rsquo;t realised that this would prevent images from being scanned at all and so needed to trigger a manual update. This note describes how to manually trigger an update to the Trivy database in Harbor deployed on top of VMware Tanzu Kubernetes Grid.&lt;/p&gt;&#xA;&lt;h3 id=&#34;demo&#34;&gt;Demo&lt;/h3&gt;&#xA;&lt;div id=&#34;trivy-db-update&#34;&gt;&lt;/div&gt;&#xA;&lt;script&gt;&#xA;    AsciinemaPlayer.create(&#xA;        &#39;trivy-db-update.cast&#39;,&#xA;        document.getElementById(&#39;trivy-db-update&#39;),&#xA;        {&#xA;            cols:100,&#xA;            rows:24,&#xA;            autoPlay:false,&#xA;            preload:false,&#xA;            loop:false,&#xA;            speed:2,&#xA;            idleTimeLimit:2,&#xA;            theme:&#34;asciinema&#34;&#xA;        }&#xA;        );&#xA;&lt;/script&gt;&#xA;&lt;h3 id=&#34;instructions&#34;&gt;Instructions&lt;/h3&gt;&#xA;&lt;p&gt;Switch context to the cluster where you have deployed Harbor.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How I Manage Kubernetes Config</title>
      <link>https://billglover.me/2020/06/12/how-i-manage-kubernetes-config/</link>
      <pubDate>Fri, 12 Jun 2020 06:00:00 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/2020/06/12/how-i-manage-kubernetes-config/</guid>
      <description>&lt;figure&gt;&lt;img &#xA;        sizes=&#34;(min-width: 35em) 1200px, 100vw&#34;&#xA;        srcset=&#39;&#xA;        &#xA;            /2020/06/12/how-i-manage-kubernetes-config/kubeconfig_hu_b61e87835fe8f852.png 500w&#xA;        &#xA;        &#xA;        &#xA;        &#39;&#xA;        &#xA;            src=&#34;https://billglover.me/2020/06/12/how-i-manage-kubernetes-config/kubeconfig.png&#34; &#xA;        &#xA;         alt=&#34;Image showing a roll of toilet paper covered in kubeconfig YAML.&#34;/&gt;&#xA;&lt;/figure&gt;&#xA;&lt;p&gt;If you work with Kubernetes, you&amp;rsquo;ll be aware of the config file that defines contexts. This config is what &lt;code&gt;kubectl&lt;/code&gt; uses to gain access to a cluster. I work with a large number of ephemeral clusters and have found that this config is difficult to manage. This post shows how I&amp;rsquo;ve switched to using individual config files for each cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The importance of Health Probes</title>
      <link>https://billglover.me/2020/04/30/the-importance-of-health-probes/</link>
      <pubDate>Thu, 30 Apr 2020 20:35:35 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/2020/04/30/the-importance-of-health-probes/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;&#xA;&lt;p&gt;Organisations of all sizes are now deploying applications to platforms such as Kubernetes. Many of these applications do not adopt cloud native practices. Often the excuse is application age or product limitation.&lt;/p&gt;&#xA;&lt;p&gt;Many teams invest in platform infrastructure but fail to capitalise on these benefits. So what gives? Are platforms overhyped? Are the complexities of the enterprise too much for modern platforms?&lt;/p&gt;&#xA;&lt;p&gt;Engineers deploy applications to Kubernetes to take advantage of common platform features. Many of these features claim to help maintain service availability, including:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Dive Through the Layers</title>
      <link>https://billglover.me/2020/02/28/dive-through-the-layers/</link>
      <pubDate>Fri, 28 Feb 2020 06:16:35 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/2020/02/28/dive-through-the-layers/</guid>
      <description>&lt;h1 id=&#34;dive-through-the-layers&#34;&gt;Dive Through the Layers&lt;/h1&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve been working with a container image for a Django application and was surprised to find that an image for a simple application was 1.2 GB. This was particularly jarring as, coming from the world of Go, I&amp;rsquo;m used to images that come in at under 20 MB.&lt;/p&gt;&#xA;&lt;p&gt;It&amp;rsquo;s not the size of the container image that&amp;rsquo;s the problem. Layer caching and re-use means that you are rarely transferring the full image around and that storage on disk is usually less than the sum of all your images. The worry I have with an image that is 1.2 GB is that everything that makes up that image needs to be maintained, patched, watched for security vulnerabilities, etc. 1.2 GB is a lot of software.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Sidecar Pattern</title>
      <link>https://billglover.me/2020/01/12/the-sidecar-pattern/</link>
      <pubDate>Sun, 12 Jan 2020 08:16:35 +0000</pubDate><author>hello@bill.dev (Bill)</author>
      <guid>https://billglover.me/2020/01/12/the-sidecar-pattern/</guid>
      <description>&lt;p&gt;The sidecar is a multi-container pattern used to provide additional functionality to a containerised-application without requiring changes to the application itself. The sidecar is the foundation of popular tools like the Istio service mesh. But how does it work?&lt;/p&gt;&#xA;&lt;p&gt;In this post, I will demonstrate how to use the Sidecar pattern to add TLS termination to an existing application using a custom-built proxy server. In reality, there should be no reason to build everything from scratch, I&amp;rsquo;ve done so here to validate my understanding of how things work. This post has been written so that you can read along without implementing the examples, but if you want to get your hands dirty and code along, I&amp;rsquo;ve made a few assumptions:&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
