
Setting up and running a Kubernetes cluster locally with Podman Desktop

Fabrice Flore-Thebault · 9 min read

In this blog post you will learn to use Podman Desktop to run the Kubernetes documentation example: Deploying PHP Guestbook application with Redis.

On the agenda:

  1. Installing Podman Desktop.
  2. Installing and initializing your container engine: Podman.
  3. Installing and starting your local Kubernetes provider: Kind.
  4. Starting the Redis leader.
  5. Starting and scaling the Redis followers.
  6. Starting and exposing the Guestbook frontend.

Installing Podman Desktop

You need Podman Desktop.

  1. Go to Podman Desktop installation documentation.
  2. Click on your platform name: Windows, macOS, or Linux.
  3. Follow the instructions. Stick to the default installation method.
  4. Start Podman Desktop.

At this point, you have a graphical user interface to:

  • Install Podman and Kind.
  • Control and work with your container engines and Kubernetes clusters.
  • Run your application on your container engine and migrate it to Kubernetes.

Installing and initializing your container engine: Podman

Podman Desktop can control various container engines, such as:

  • Docker
  • Lima
  • Podman

Consider installing the Podman container engine for:

  • Added security
  • No daemon
  • Open source

Containers are a Linux technology.

  • On Linux, you can install Podman natively. See: Installing Podman on Linux.
  • On macOS and Windows, Podman needs to run in a Linux virtual machine: the Podman machine. Use Podman Desktop to install Podman and initialize your Podman machine, as described in the procedure below.
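If you prefer a terminal, the following is a minimal sketch of the same initialization with the Podman CLI; the resource values are illustrative and the defaults also work:

    # Create the default Podman machine (a Linux VM); the sizes shown are examples
    $ podman machine init --cpus 2 --memory 4096 --disk-size 60
    # Start the machine
    $ podman machine start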

Procedure

  1. Open the Podman Desktop Dashboard.
  2. The Dashboard displays: Podman Desktop was not able to find an installation of Podman.
  3. Click on Install.
  4. Podman Desktop checks the prerequisites to install Podman Engine. When necessary, follow the instructions to install them.
  5. Podman Desktop displays the dialog: Podman is not installed on this system, would you like to install Podman? Click on Yes to install Podman.
  6. Click on Initialize and start.

Verification

  • The Dashboard displays Podman is running.


At this point, you can start working with containers.
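To confirm from a terminal that the container engine responds, you can run a small test container. This is a quick sketch; quay.io/podman/hello is a public hello-world image maintained by the Podman project:

    # On macOS and Windows, list the Podman machines and confirm one is running
    $ podman machine list
    # Run a throwaway test container
    $ podman run --rm quay.io/podman/hello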

Installing and starting your local Kubernetes provider: Kind

You want to deploy your application to a local Kubernetes cluster.

Podman Desktop can help you run Kind-powered local Kubernetes clusters on a container engine, such as Podman.

Podman Desktop helps you install the kind CLI:

  1. In the status bar, click on Kind, and follow the prompts.

  2. When the kind CLI is available, the status bar no longer displays Kind.

  3. On Windows, configure Podman in rootful mode:

    $ podman system connection default podman-machine-default-root
  4. Go to Settings > Resources.

  5. In the Podman tile, click on the icon to restart the Podman container engine.

  6. In the Kind tile, click on the Create new button.

    1. Name: enter kind-cluster.
    2. Provider Type: select podman.
    3. HTTP Port: select 9090.
    4. HTTPS Port: select 9443.
    5. Setup an ingress controller: Enabled.
    6. Click the Create button.
  7. After successful creation, click on the Go back to resources button.

Verification

  1. In Settings > Resources, your Kind cluster is running.


  2. In the Podman Desktop tray, open the Kubernetes menu and set the context to your Kind cluster: kind-kind-cluster.


    At this point, you can start working with containers, and your local Kubernetes cluster.
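If you have the kubectl CLI installed, you can run the same checks from a terminal. This is a minimal sketch, assuming the kind-kind-cluster context name shown above:

    # List the available contexts; Kind prefixes the cluster name with "kind-"
    $ kubectl config get-contexts
    # Confirm the single Kind node is Ready
    $ kubectl get nodes --context kind-kind-cluster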


Starting the Redis leader

The Guestbook application uses Redis to store its data.

With Podman Desktop, you can prepare the Redis leader image and container on your local container engine, and deploy the results to a Kubernetes pod and service. This is functionally equivalent to the redis-leader deployment that the Kubernetes example proposes.

Procedure

  1. Open Images > Pull an image.

    1. Image to Pull: enter docker.io/redis:6.0.5
    2. Click Pull image to pull the image to your container engine local image registry.
    3. Click Done to get back to the images list.
  2. Search images: enter redis:6.0.5 to find the image.

  3. Click to open the Create a container from image dialog.

    1. Container name: enter leader.
    2. Local port for 6379/tcp: 6379.
    3. Click Start Container to start the container in your container engine.
  4. Search containers: enter leader to find the running container.

  5. Click to stop the container and leave port 6379 available for the Redis follower container.

  6. Click Deploy to Kubernetes to open the Deploy generated pod to Kubernetes screen.

    1. Pod Name: enter redis-leader.
    2. Use Kubernetes Services: select Replace hostPort exposure on containers by Services. This is the recommended way to expose ports, as a cluster policy might prevent the use of hostPort.
    3. Expose service locally using Kubernetes Ingress: deselect Create a Kubernetes ingress to get access to the ports that this pod exposes, at the default ingress controller location. Example: on a default Kind cluster created with Podman Desktop: http://localhost:9090. Requirements: your cluster has an ingress controller.
    4. Kubernetes namespaces: select default.
    5. Click Deploy.
    6. Wait for the pod to reach the state: Phase: Running.
    7. Click Done.

Verification

  • The Pods screen lists the running redis-leader pod.

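As an optional check from a terminal, kubectl can show the deployed pod and the generated Service. This is a sketch assuming the default namespace and the kind-kind-cluster context; the exact name of the generated Service may differ:

    # The redis-leader pod created by Podman Desktop
    $ kubectl get pod redis-leader --context kind-kind-cluster
    # The Service that replaced the hostPort exposure (its name may vary)
    $ kubectl get services --context kind-kind-cluster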

Starting the Redis followers

Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.

With Podman Desktop, you can prepare the Redis follower image and container on your local container engine, and deploy the results to Kubernetes pods and services. This is functionally equivalent to the redis-follower deployment that the Kubernetes example proposes.

Procedure

  1. Open Images > Pull an image.
    1. Image to Pull: enter gcr.io/google_samples/gb-redis-follower:v2
    2. Click Pull image to pull the image to your container engine local image registry.
    3. Click Done to get back to the images list.
  2. Search images: enter gb-redis-follower:v2 to find the image.
  3. Click to open the Create a container from image dialog.
    1. Container name: enter follower.
    2. Local port for 6379/tcp: 6379.
    3. Click Start Container to start the container in your container engine.
  4. Search containers: enter follower to find the running container.
  5. Click to stop the container: you do not need it running in the container engine.
  6. Click Deploy to Kubernetes to open the Deploy generated pod to Kubernetes screen.
    1. Pod Name: enter redis-follower.
    2. Use Kubernetes Services: select Replace hostPort exposure on containers by Services. This is the recommended way to expose ports, as a cluster policy might prevent the use of hostPort.
    3. Expose service locally using Kubernetes Ingress: deselect Create a Kubernetes ingress to get access to the ports that this pod exposes, at the default ingress controller location. Example: on a default Kind cluster created with Podman Desktop: http://localhost:9090. Requirements: your cluster has an ingress controller.
    4. Kubernetes namespaces: select default.
    5. Click Deploy.
    6. Wait for the pod to reach the state: Phase: Running.
    7. Click Done.
  7. To add replicas, repeat the last step with another Pod Name value.

Verification

  • The Pods screen lists the running redis-follower pods.

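As with the leader, you can optionally list the follower pods from a terminal. This is a sketch assuming the default namespace; the pod names follow the Pod Name values you entered:

    # Expect redis-leader plus one pod per follower replica
    $ kubectl get pods --context kind-kind-cluster
    # Optionally inspect a follower's logs; it syncs with the leader once it can resolve the leader Service
    $ kubectl logs redis-follower --context kind-kind-cluster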

Starting and exposing the Guestbook frontend

Now that you have the Redis storage of your Guestbook up and running, start the Guestbook web servers. Like the Redis followers, deploy the frontend using Kubernetes pods and services.

The Guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX.

With Podman Desktop, you can prepare the Guestbook frontend image and container on your local container engine, and deploy the results to Kubernetes pods and services. This is functionally equivalent to the frontend deployment that the Kubernetes example proposes.

Procedure

  1. Open Images > Pull an image.
    1. Image to Pull: enter gcr.io/google_samples/gb-frontend:v5
    2. Click Pull image to pull the image to your container engine local image registry.
    3. Wait for the pull to complete.
    4. Click Done to get back to the images list.
  2. Search images: enter gb-frontend:v5 to find the image.
  3. Click to open the Create a container from image dialog.
    1. Container name: enter frontend.
    2. Local port for 80/tcp: 9000.
    3. Click Start Container to start the container in your container engine.
  4. Search containers: enter frontend to find the running container.
  5. Click to stop the container: you do not need it running in the container engine.
  6. Click Deploy to Kubernetes to open the Deploy generated pod to Kubernetes screen.
    1. Pod Name: enter frontend.
    2. Use Kubernetes Services: select Replace hostPort exposure on containers by Services. This is the recommended way to expose ports, as a cluster policy might prevent the use of hostPort.
    3. Expose service locally using Kubernetes Ingress: select Create a Kubernetes ingress to get access to the ports that this pod exposes, at the default ingress controller location. Example: on a default Kind cluster created with Podman Desktop: http://localhost:9090. Requirements: your cluster has an ingress controller.
    4. Kubernetes namespaces: select default.
    5. Click Deploy.
    6. Wait for the pod to reach the state: Phase: Running.
    7. Click Done.

Verification

  1. The Pods screen lists the running frontend pod.


  2. Go to http://localhost:9090: the Guestbook application is running.
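You can run the same check from a terminal. The first command fetches the frontend through the Kind ingress; the guestbook.php endpoint in the next two commands is an assumption based on the upstream Guestbook example and may differ in this image version:

    # Fetch the Guestbook frontend through the ingress controller
    $ curl -s http://localhost:9090/ | head -n 5
    # Assumed JSON interface: write an entry, then read the entries back
    $ curl -s "http://localhost:9090/guestbook.php?cmd=set&key=messages&value=hello"
    $ curl -s "http://localhost:9090/guestbook.php?cmd=get&key=messages"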