Build k8s Apps Running Both In and Out of the Cluster

When building k8s apps, e.g. a reverse proxy that routes APIs in a given business domain, a Helm chart is a convenient way to build and ship them. When the app is still a prototype, you or your teammates may want to run it locally against a local cluster for quick verification or easier debugging. This post describes tips for building an app that runs both in and out of the cluster.

Problem Description

In short, the app is expected to run locally for debugging and quick demos, but it shall also be delivered as a Helm chart and deployed in a realistic environment so the prototype can be taken a step further with downstream services.

ClientSet Configuration

In my practice, the kube client is usually a ClientSet with the CRD scheme registered, and for convenience we can keep the core k8s clientset together with the CRD clientset as below. The kube client can simply be declared in an app sub-package and injected from main.go.

type KubeClientSet struct {
	// Your CRD-schemed clientset
	Client *cltv1.V1Client
	// K8s clientset
	K8sClient *kubernetes.Clientset
	// Host is extracted from K8sClient for service URL building
	Host string
	// InCluster indicates whether the app is running inside the cluster
	InCluster bool
}

Then in NewKubeClient(), when the clientset is initialized, the host name is filled in to cover both the in-cluster and out-of-cluster cases. The flag InCluster can be resolved as os.Getenv("KUBERNETES_SERVICE_HOST") != "": if an app is running in a Kubernetes cluster container and is not deliberately prevented from reaching the cluster API, KUBERNETES_SERVICE_HOST should not be empty.

Another reminder: the k8s runtime package also defines a kubeconfig CLI flag, so readers should use flag.Lookup() to check for it first.

func GetKubeConfig() string {
	var kubeconfig string

	if !InCluster() {
		// Running out of cluster
		homeDir, _ := os.UserHomeDir()
		defaultKc := homeDir + "/.kube/config"
		// k8s-sig runtime also defines a kubeconfig flag. It might be removed in a later version.
		kcFlag := flag.Lookup("kubeconfig")
		if kcFlag == nil {
			flag.StringVar(&kubeconfig, "kubeconfig", defaultKc, "path to Kubernetes config file")
			kcFlag = flag.Lookup("kubeconfig")
		}
		flag.Parse()
		kubeconfig = kcFlag.Value.String()
		if kubeconfig == "" {
			kubeconfig = defaultKc
		}
		log.Printf("Loading kubeconfig from %s", kubeconfig)
	} else {
		log.Printf("Running in cluster..")
	}

	return kubeconfig
}

At last, NewKubeClient() shall check whether it's running out of cluster and, if so, fill the Host field with the proxy URL, e.g. localhost:8888 when the local test proxy is started with kubectl proxy --port=8888. Otherwise, set Host to its kubeconfig.Host.
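The Host selection inside NewKubeClient() could look like the minimal sketch below; ResolveHost, cfgHost, and proxyURL are illustrative names, not part of the original code.

```go
package main

import "fmt"

// ResolveHost picks the Host field for KubeClientSet. In cluster, the
// API host from the loaded kubeconfig/rest.Config is used directly;
// out of cluster, the local kubectl proxy address is used instead.
func ResolveHost(inCluster bool, cfgHost, proxyURL string) string {
	if inCluster {
		return cfgHost // e.g. https://10.96.0.1:443 from the rest.Config
	}
	return proxyURL // e.g. http://localhost:8888 for `kubectl proxy --port=8888`
}

func main() {
	fmt.Println(ResolveHost(false, "https://10.96.0.1:443", "http://localhost:8888"))
	fmt.Println(ResolveHost(true, "https://10.96.0.1:443", "http://localhost:8888"))
}
```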

Service URL Building

When a targeted service is identified by namespace and service name, the URL can be built for both the in-cluster and out-of-cluster cases.

If it's running in cluster, the URL to a service is <svc_name> + "." + <svc_ns> + ".svc." + CORE_DNS_HOST. The CoreDNS cluster domain is usually cluster.local, but it depends on the cluster configuration.

Otherwise, if it's running out of cluster and reaching the cluster API via kubectl proxy, the URL is Host + "/api/v1/namespaces/" + <svc_ns> + "/services/" + <svc_name> + "<:port_name>" + "/proxy". Host is the value resolved in NewKubeClient().

The tricky point is <:port_name>: if the targeted port of the service is a named port, the port name is required in the URL.
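Putting both cases together, the URL building might be sketched as below; ServiceURL and its parameters are illustrative names, not the original code.

```go
package main

import "fmt"

// ServiceURL builds a base URL for a service. In cluster it uses the
// CoreDNS name <svc>.<ns>.svc.<clusterDomain>; out of cluster it goes
// through the API server's service proxy path, appending ":<portName>"
// to the service name when the target port is a named port.
func ServiceURL(inCluster bool, host, ns, svc, portName, clusterDomain string) string {
	if inCluster {
		return "http://" + svc + "." + ns + ".svc." + clusterDomain
	}
	name := svc
	if portName != "" {
		name = svc + ":" + portName
	}
	return host + "/api/v1/namespaces/" + ns + "/services/" + name + "/proxy"
}

func main() {
	// In-cluster: plain CoreDNS name.
	fmt.Println(ServiceURL(true, "", "demo", "echo", "", "cluster.local"))
	// Out-of-cluster via kubectl proxy, targeting named port "web".
	fmt.Println(ServiceURL(false, "http://localhost:8888", "demo", "echo", "web", ""))
}
```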

Deployment and Running

Assuming RBAC is configured properly, the k8s app can run out of cluster or be packaged into a Helm chart and deployed in a cluster. To run it out of cluster for a quick demo, users need to launch kubectl proxy first to expose the cluster API locally.
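As a concrete example, the out-of-cluster demo flow might look like the commands below; the port number and the app entry point are illustrative assumptions.

```shell
# Expose the cluster API on localhost:8888 (runs in the background)
kubectl proxy --port=8888 &

# Run the app out of cluster; GetKubeConfig() falls back to ~/.kube/config
go run . --kubeconfig="$HOME/.kube/config"
```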

Change Log

Oct 2021: Initial post draft.