
CrowdStrike Falcon LogScale on Kubernetes with Google Cloud

November 10, 2022


In this article, we’ll cover how to install CrowdStrike Falcon® LogScale, previously known as Humio, on a GKE Kubernetes cluster. We will use the GKE AutoPilot mode to get a hands-free and efficient experience. Here are the basic steps we will take:

  1. Set up a Google Cloud account
  2. Create a GKE cluster
  3. Install dependencies and packages
  4. Install the Humio operator
  5. Create the LogScale cluster
  6. Test the LogScale cluster

Before we dive in, note the content below is for demonstration purposes only and should not be used in production. 

Set up a Google Cloud Account

If you don’t have a Google account, create a new account and sign in. You’ll need to provide a credit card, but you’ll receive cloud credits to make this demo implementation cost-free, so don’t worry.

Install and configure gcloud

The gcloud CLI is a great tool for managing your Google Cloud resources from the command line, and we will use it extensively in this walkthrough. Follow these four steps to get it up and running; a rough sketch of the corresponding commands follows the list:

  1. Install
  2. Initialize
  3. Authorize it via the browser
  4. Configure
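
Once the CLI is installed, steps 2 through 4 typically look something like this (a sketch only; the project ID and region below are placeholders, not values from this walkthrough):

$ gcloud init
$ gcloud auth login
$ gcloud config set project <YOUR_PROJECT_ID>
$ gcloud config set compute/region us-central1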

Install Kubectl

Before we create our Kubernetes cluster, let’s install kubectl so we can talk to our cluster. 

kubectl is the official CLI of Kubernetes. You can interact with any Kubernetes cluster from the command line using kubectl (as long as you have the proper credentials). If you don’t have it installed already (for example, via Rancher Desktop), then follow the instructions here.
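
For example, on a Linux amd64 machine the official instructions boil down to roughly the following (a sketch; macOS users can instead use a package manager such as Homebrew):

$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl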

Let’s verify kubectl is installed correctly:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-03-06T21:39:59Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/arm64"}

If you see just a Client version, that’s not a problem. It means your kubectl is not yet configured to talk to any Kubernetes cluster. The output will look like this:

$ kubectl version
I0615 18:48:57.753042   14099 versioner.go:58] invalid configuration: no configuration has been provided
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Create a GKE Cluster

GKE offers two types of managed clusters: standard clusters and AutoPilot clusters. GKE will manage the control plane of your Kubernetes cluster in both cases. However, for AutoPilot clusters GKE will also manage nodes and node pools for you.

There are some limitations to AutoPilot clusters, but those limits will only come into play if you are setting up a massive enterprise-level Kubernetes-based system. We will use an AutoPilot cluster for our basic demo purposes, so we won’t need to worry too much about resource requirements. GKE will ensure our workloads have all the resources they need and we only pay for what we use.

Creating an AutoPilot GKE cluster

Let’s create an AutoPilot cluster using gcloud.

$ PROJECT=playground-161404
$ REGION=us-central1
$ gcloud container clusters create-auto auto-k8s-1 \
  --region ${REGION} \
  --project=${PROJECT}

This creates and configures the cluster, which can take several minutes.

Note: The Pod address range limits the maximum size of the cluster. Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr to learn how to optimize IP address allocation.
Creating cluster auto-k8s-1 in us-central1... Cluster is being configured...
NAME        LOCATION     MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION    NUM_NODES  STATUS
auto-k8s-1  us-central1  1.22.8-gke.202  34.134.58.147  e2-medium     1.22.8-gke.202  3          RUNNING

Next, we need to get the kubeconfig of our new cluster. 

$ export KUBECONFIG=~/.kube/auto-k8s-1-config
$ gcloud container clusters get-credentials auto-k8s-1 \
    --region us-central1 \
    --project=playground-161404
   
Fetching cluster endpoint and auth data.
kubeconfig entry generated for auto-k8s-1.    

Note that we are using a dedicated kubeconfig file to avoid cluttering the default ~/.kube/config file. This is optional.
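
If you are juggling several kubeconfig files, it is worth double-checking which cluster kubectl will actually talk to:

$ kubectl config current-context
$ kubectl config get-contexts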

Let’s verify that our cluster is healthy.

$ kubectl cluster-info
Kubernetes control plane is running at https://34.134.58.147
GLBCDefaultBackend is running at https://34.134.58.147/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://34.134.58.147/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
KubeDNSUpstream is running at https://34.134.58.147/api/v1/namespaces/kube-system/services/kube-dns-upstream:dns/proxy
Metrics-server is running at https://34.134.58.147/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

Let’s see what’s running on our cluster:

$ kubectl get pods -A -o wide

NAMESPACE     NAME                                                       READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kube-system   anetd-f96hx                                                1/1     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   anetd-hbwtq                                                1/1     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   antrea-controller-horizontal-autoscaler-6dccd45548-kfhrx   1/1     Running   0          25m   10.68.0.4     gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   egress-nat-controller-7667b66b8c-vdlwj                     1/1     Running   0          25m   10.68.0.3     gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   event-exporter-gke-5479fd58c8-57rxc                        2/2     Running   0          26m   10.68.0.12    gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   filestore-node-2n5x6                                       3/3     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   filestore-node-q2hwb                                       3/3     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   fluentbit-gke-small-sxzzb                                  2/2     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   fluentbit-gke-small-t8m94                                  2/2     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   gke-metadata-server-tx8lx                                  1/1     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   gke-metadata-server-zqjmn                                  1/1     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   gke-metrics-agent-mrm62                                    1/1     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   gke-metrics-agent-rqglm                                    1/1     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   ip-masq-agent-hlrl4                                        1/1     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   ip-masq-agent-pp46g                                        1/1     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   konnectivity-agent-85c5b56855-dgttf                        1/1     Running   0          23m   10.68.0.68    gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   konnectivity-agent-85c5b56855-zqmph                        1/1     Running   0          25m   10.68.0.9     gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   konnectivity-agent-autoscaler-555f599d94-dkzkm             1/1     Running   0          25m   10.68.0.13    gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   kube-dns-56494768b7-d4787                                  4/4     Running   0          23m   10.68.0.67    gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   kube-dns-56494768b7-n6dtl                                  4/4     Running   0          26m   10.68.0.6     gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   kube-dns-autoscaler-f4d55555-bbwf2                         1/1     Running   0          26m   10.68.0.8     gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   l7-default-backend-69fb9fd9f9-wxwj7                        1/1     Running   0          25m   10.68.0.11    gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   metrics-server-v0.4.5-bbb794dcc-rnkm2                      2/2     Running   0          23m   10.68.0.14    gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   netd-6q8sb                                                 1/1     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   netd-wfcqb                                                 1/1     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   node-local-dns-4px5g                                       1/1     Running   0          25m   10.68.0.66    gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   node-local-dns-ljm8f                                       1/1     Running   0          25m   10.68.0.2     gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>
kube-system   pdcsi-node-f8wzz                                           2/2     Running   0          25m   10.128.0.11   gk3-auto-k8s-1-default-pool-112d9250-p43m   <none>           <none>
kube-system   pdcsi-node-xdjc7                                           2/2     Running   0          25m   10.128.0.10   gk3-auto-k8s-1-default-pool-cdcad9b5-ls51   <none>           <none>

The kube-system namespace has several Kubernetes and GKE components.

Install dependencies and packages

LogScale has several dependencies that we will want to get in place before continuing. Let’s cover them one by one.

Install Helm

Helm is the mainstream Kubernetes package manager. Follow the official instructions to install it. Because Helm 2 is obsolete, let's confirm we have Helm 3 running:

$ helm version
version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.17.5"}
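
If Helm isn't installed yet, the official install script is one quick option (a sketch; see the Helm documentation for platform-specific instructions):

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh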

Install Kafka using the Strimzi operator

LogScale requires Apache Kafka. One of the easiest ways to install Kafka on Kubernetes is via the Strimzi operator. First, let's create a kafka namespace:

$ kubectl create ns kafka
namespace/kafka created

Then, we apply the Strimzi YAML manifest:

$ kubectl create \
  -f 'https://strimzi.io/install/latest?namespace=kafka' \
  -n kafka

customresourcedefinition.apiextensions.k8s.io/strimzipodsets.core.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnectors.kafka.strimzi.io created
deployment.apps/strimzi-cluster-operator created
customresourcedefinition.apiextensions.k8s.io/kafkarebalances.kafka.strimzi.io created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created
customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormaker2s.kafka.strimzi.io created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-client created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-client-delegation created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created
configmap/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created
serviceaccount/strimzi-cluster-operator created

Because Strimzi creates a number of objects, let’s wait for the operator to be ready.

$ kubectl wait deployment strimzi-cluster-operator \
  -n kafka \
  --for=condition=Available
deployment.apps/strimzi-cluster-operator condition met

Now, we can create the Kafka cluster. We create a resource definition file called kafka.yaml with the following contents:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: logscale-cluster
  namespace: kafka
spec:
  kafka:
    version: 3.2.0
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.2"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 10Gi
        deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 1Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

We apply it here:

$ kubectl apply -f kafka.yaml

kafka.kafka.strimzi.io/logscale-cluster created

We wait for Kafka to be ready.

$ kubectl wait kafka/logscale-cluster \
  --for=condition=Ready \
  --timeout=300s \
  -n kafka
kafka.kafka.strimzi.io/logscale-cluster condition met

Let’s verify Kafka is operational by sending a message (123) from a simple producer and receiving it with a simple consumer.

$ kubectl -n kafka run kafka-producer -it \
  --image=quay.io/strimzi/kafka:0.32.0-kafka-3.2.0 \
  --rm=true \
  --restart=Never \
  -- bin/kafka-console-producer.sh \
  --bootstrap-server logscale-cluster-kafka-bootstrap:9092 \
  --topic cool-topic

If you don't see a command prompt, try pressing enter.
>123
[2022-11-02 01:45:22,361] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 4 : {cool-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Don’t worry about the warnings. They are harmless. Let’s receive the 123 message:

$ kubectl -n kafka run kafka-consumer -ti \
  --image=quay.io/strimzi/kafka:0.32.0-kafka-3.2.0 \
  --rm=true --restart=Never \
  -- bin/kafka-console-consumer.sh \
  --bootstrap-server logscale-cluster-kafka-bootstrap:9092 \
  --topic cool-topic --from-beginning

If you don't see a command prompt, try pressing enter.
123

Kafka is up and running! Also, note that GKE AutoPilot automatically sets resource requests for us.
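
As an aside, instead of relying on automatically created topics, you can declare a topic explicitly with Strimzi's KafkaTopic resource. A minimal sketch (the topic name here is just illustrative):

$ kubectl apply -n kafka -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: cool-topic
  labels:
    strimzi.io/cluster: logscale-cluster
spec:
  partitions: 1
  replicas: 1
EOF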

Install Cert-manager

Cert-manager is an X.509 certificate management solution for Kubernetes, and LogScale relies on it. Let's install it:

$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

The manifest will create a cert-manager namespace and install all the components there.
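
As with Strimzi, it helps to wait for the cert-manager deployments to become available before moving on:

$ kubectl wait deployment cert-manager cert-manager-cainjector cert-manager-webhook \
  -n cert-manager \
  --for=condition=Available \
  --timeout=120s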

$ kubectl get all -n cert-manager

I0707 10:43:15.129802   47546 versioner.go:58] no Auth Provider found for name "gcp"
NAME                                           READY   STATUS    RESTARTS   AGE
pod/cert-manager-84d777997b-qsb9m              1/1     Running   0          73s
pod/cert-manager-cainjector-85769f7d45-p2pzk   1/1     Running   0          74s
pod/cert-manager-webhook-67f9bb6b5d-zqfsv      1/1     Running   0          73s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/cert-manager           ClusterIP   10.68.128.212   <none>        9402/TCP   74s
service/cert-manager-webhook   ClusterIP   10.68.130.200   <none>        443/TCP    74s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager              1/1     1            1           73s
deployment.apps/cert-manager-cainjector   1/1     1            1           74s
deployment.apps/cert-manager-webhook      1/1     1            1           73s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-84d777997b              1         1         1       73s
replicaset.apps/cert-manager-cainjector-85769f7d45   1         1         1       74s
replicaset.apps/cert-manager-webhook-67f9bb6b5d      1         1         1       73s

Install the Humio Operator

Now that we have a functional Kafka cluster, let’s install the Humio operator, which is the recommended way to install LogScale on Kubernetes. Here are some of the features of the operator:

  • Automates the installation of a LogScale Cluster on Kubernetes
  • Automates the management of LogScale Repositories, Parsers, and Ingest Tokens
  • Automates the management of LogScale, such as partition balancing
  • Automates version upgrades of LogScale
  • Automates configuration changes of LogScale
  • Allows the use of various storage media, including hostPath or storage class PVCs
  • Automates cluster authentication and security, such as pod-to-pod TLS, SAML, and OAuth

First we need to install the operator’s CRDs.

$ export HUMIO_OPERATOR_VERSION=0.15.0
$ export HUMIO_BASE_URL=https://raw.githubusercontent.com/humio/humio-operator/humio-operator-

$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioclusters.yaml
$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioexternalclusters.yaml
$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioingesttokens.yaml
$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioparsers.yaml
$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiorepositories.yaml
$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioviews.yaml
$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioalerts.yaml
$ kubectl apply --server-side -f ${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioactions.yaml

customresourcedefinition.apiextensions.k8s.io/humioclusters.core.humio.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/humioexternalclusters.core.humio.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/humioingesttokens.core.humio.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/humioparsers.core.humio.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/humiorepositories.core.humio.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/humioviews.core.humio.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/humioalerts.core.humio.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/humioactions.core.humio.com serverside-applied
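
If you prefer less repetition, the same eight CRD installs can be expressed as a short shell loop over the same URLs:

for crd in humioclusters humioexternalclusters humioingesttokens humioparsers \
           humiorepositories humioviews humioalerts humioactions; do
  kubectl apply --server-side -f "${HUMIO_BASE_URL}${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_${crd}.yaml"
done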

Next, we need to add the Humio helm repository:

$ helm repo add humio-operator https://humio.github.io/humio-operator
"humio-operator" has been added to your repositories

Now we’re ready to install the Humio operator:

$ helm install humio-operator humio-operator/humio-operator \
  --namespace logging \
  --create-namespace \
  --version="${HUMIO_OPERATOR_VERSION}"

W0707 10:49:01.126592   47966 warnings.go:70] Autopilot increased resource requests for Deployment logging/humio-operator to meet requirements. See http://g.co/gke/autopilot-resources.

NAME: humio-operator
LAST DEPLOYED: Wed Nov  2 10:48:49 2022
NAMESPACE: logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
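
A quick check confirms the release is deployed and the operator pod is running in the logging namespace:

$ helm list -n logging
$ kubectl get pods -n logging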

Let’s check our cluster and see how many nodes we’re using at the moment.

$ kubectl get nodes

NAME                                      STATUS ROLES  AGE VERSION
gk3-auto-k8s-1-nap-1775fchb-2cb29547-l784 Ready  <none> 31m v1.22.8-gke.202
gk3-auto-k8s-1-nap-1775fchb-3480ac0f-78lq Ready  <none> 25m v1.22.8-gke.202
gk3-auto-k8s-1-nap-xou6zl77-efa8cc66-2xhm Ready  <none> 21m v1.22.8-gke.202

Create the LogScale Cluster

To create the LogScale cluster, we apply a HumioCluster custom resource with a minimal set of resources. First, let's create a namespace called logscale:

$ kubectl create ns logscale
namespace/logscale created

Running LogScale on GKE requires a license. You can get a trial license here. Licenses are typically issued within two business days.

Once we have the license, we need to create a Kubernetes secret that contains our license data. In the following command, replace the <REDACTED> string with your license data.

$ kubectl create secret generic logscale-trial-license \
  -n logscale --from-literal=data=<REDACTED>
secret/logscale-trial-license created

Next, we need to know the service names for Zookeeper and Kafka. We'll use these values in our LogScale cluster setup.

$ kubectl get svc -n kafka
NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
logscale-cluster-kafka-bootstrap    ClusterIP   10.96.127.36    <none>        9091/TCP,9092/TCP,9093/TCP            5h50m
logscale-cluster-kafka-brokers      ClusterIP   None            <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   5h50m
logscale-cluster-zookeeper-client   ClusterIP   10.96.170.109   <none>        2181/TCP                              5h52m
logscale-cluster-zookeeper-nodes    ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP    

We’ll use the logscale-cluster-kafka-brokers and logscale-cluster-zookeeper-client service names, along with their ports, as the values for KAFKA_SERVERS and ZOOKEEPER_URL when we create our LogScale cluster. We create a file called logscale-cluster.yaml with the following contents:

apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: logscale-cluster
  namespace: logscale
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: humio_node_type
            operator: In
            values:
            - core
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - arm64
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
  license:
    secretKeyRef:
      name: logscale-trial-license
      key: data
  image: "humio/humio-core:1.56.3"
  nodeCount: 1
  tls:
    enabled: false
  targetReplicationFactor: 1
  storagePartitionsCount: 24
  digestPartitionsCount: 24
  resources:
    limits:
      cpu: "2"
      memory: 4Gi
    requests:
      cpu: "1"
      memory: 2Gi
  dataVolumePersistentVolumeClaimSpecTemplate:
    storageClassName: standard
    accessModes: [ReadWriteOnce]
    resources:
      requests:
        storage: 10Gi
  environmentVariables:
    - name: "HUMIO_MEMORY_OPTS"
      value: "-Xss2m -Xms1g -Xmx2g -XX:MaxDirectMemorySize=1g"
    - name: ZOOKEEPER_URL
      value: logscale-cluster-zookeeper-client.kafka.svc.cluster.local:2181
    - name: KAFKA_SERVERS
      value: logscale-cluster-kafka-brokers.kafka.svc.cluster.local:9092
    - name: AUTHENTICATION_METHOD
      value: "single-user"
    - name: SINGLE_USER_PASSWORD
      value: "password"

Then, we apply the manifest. We already specified the logscale namespace in it, so we don't need to specify one in our command.

$ kubectl apply -f logscale-cluster.yaml
humiocluster.core.humio.com/logscale-cluster created   

Let’s check the status of our LogScale cluster:

$ kubectl get humiocluster logscale-cluster -n logscale
NAME               STATE     NODES   VERSION
logscale-cluster   Running   1       1.36.1--build-124825--sha-6402d163827020e288913da5ea6441e07946e57e

LogScale is up and running!
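
It's also worth a quick look at the pod and persistent volume claim the operator created for us:

$ kubectl get pods,pvc -n logscale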

Test the LogScale Cluster

We can check out the LogScale web UI by doing a port-forward:

$ kubectl port-forward -n logscale svc/logscale-cluster 8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Now, we can browse to http://localhost:8080, log in with the single-user password we configured, and play with the LogScale dashboard.
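
If you'd rather check from the command line first, a quick curl against the forwarded port (in a second terminal, while the port-forward is running) should return an HTTP response from LogScale:

$ curl -si http://localhost:8080/ | head -n 1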

Congratulations! You now have a working LogScale cluster running on a GKE AutoPilot cluster. 

Next Steps

At this point, you can follow the interactive LogScale tutorial at http://localhost:8080/tutorial, which uses the built-in Sandbox repository. 

With LogScale up and running, you can start instrumenting your systems to send log data to LogScale. LogScale integrates with several log shippers to make it easy to aggregate logs from all of your different sources. You can also check out some of our how-to guides on getting started with these sources. For example:

  • Importing Logs from Fluentd into LogScale
  • Importing Logs from Logstash into LogScale
  • Importing Logs from Docker into LogScale
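
If you just want to smoke-test ingestion before wiring up a full log shipper, LogScale also exposes an HTTP ingest API. The sketch below assumes you have already created a repository and an ingest token in the UI (the token value is a placeholder) and that the port-forward from earlier is still running:

$ INGEST_TOKEN=<your-ingest-token>
$ curl -s http://localhost:8080/api/v1/ingest/humio-unstructured \
    -H "Authorization: Bearer ${INGEST_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '[{"messages": ["hello from GKE"]}]'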

Conclusion

In this how-to guide, we walked through a complete installation of LogScale on a Google Cloud GKE cluster using the Humio operator for Kubernetes. We created a GKE AutoPilot cluster, then installed cert-manager, Strimzi, Kafka, and the Humio operator. Finally, we created a LogScale cluster and accessed it through its web UI. You can now learn and experiment with LogScale, deployed to your own GKE AutoPilot cluster!

 

Content provided by Grant Schofield
