
Setting Up Multiple Replicas With Leader Election and Passing e2e Tests

Leader election is enabled by default in the scaffolded project and works as expected once deployed (as of the mid-2025 release). However, the e2e tests will fail, because the scaffolded test suite hardcodes an expectation of exactly 1 replica.
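For reference, leader election is wired through the manager's command-line flags. In a typical kubebuilder scaffold the Deployment already passes the flag, so no change is needed here; a sketch of the relevant part of the scaffolded config/manager/manager.yaml (your generated args may differ slightly):

```yaml
# Scaffolded container args in config/manager/manager.yaml.
# --leader-elect turns on leader election; the election ID itself
# is set in cmd/main.go via the manager Options.
containers:
  - name: manager
    args:
      - --leader-elect
      - --health-probe-bind-address=:8081
```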

Modifying the Manager Deployment

In the file config/manager/manager.yaml, set replicas to your desired count (3 is most common; 2 buys little extra availability over 1 in most cases) and add podAntiAffinity and topologySpreadConstraints so the replicas land on different nodes:

spec:
  replicas: 3
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: soar-operator
                topologyKey: kubernetes.io/hostname
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: soar-operator

Make the e2e Test Not Fail

In test/e2e/e2e_test.go you will find that the expected replica count is hardcoded to 1. Define a constant so the value is easy to change:

// expectedReplicas is the number of replicas defined in the manager.yaml for the controller manager
const expectedReplicas = 3

and then point the assertion at that constant:

g.Expect(podNames).To(HaveLen(expectedReplicas), fmt.Sprintf("expected exactly %d controller pods running", expectedReplicas))

Making the e2e Test KIND Cluster Have More Nodes

Start by changing the KIND cluster creation command in the project's Makefile (the snippet below is Makefile syntax, not Go) to reference a config file kept in version control:

@case "$$($(KIND) get clusters)" in \
    *"$(KIND_CLUSTER)"*) \
        echo "Kind cluster '$(KIND_CLUSTER)' already exists. Skipping creation." ;; \
    *) \
        echo "Creating Kind cluster '$(KIND_CLUSTER)'..."; \
        $(KIND) create cluster --name $(KIND_CLUSTER) --config kind-config-e2e.yaml ;; \
esac

Then create kind-config-e2e.yaml in the root of the project:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
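The node list can optionally pin a specific node image so e2e runs do not drift as the locally installed kind default changes. The tag below is an example; match it to the Kubernetes version your operator targets:

```yaml
nodes:
  - role: control-plane
    image: kindest/node:v1.30.0
  - role: worker
    image: kindest/node:v1.30.0
```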