feat: add support for instance generation, accelerated devices, instance class, and network out#44

Merged
AshleyDumaine merged 12 commits into main from extra-requirements-support
Jan 26, 2026
Conversation

@AshleyDumaine
Contributor

@AshleyDumaine AshleyDumaine commented Jan 22, 2026

With this it should be possible to specify the Linode class (nanode, standard, highmem, dedicated, gpu), the number of accelerated devices, and the instance generation (g6, g7, g8).

Testing

Follow https://github.com/linode/karpenter-provider-linode?tab=readme-ov-file#using-karpenter, but add an extra requirement to the NodePool, such as the instance generation (this example requests only the shiny new g8 instance types):

spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.linode/instance-generation
          operator: Gt
          values:
          - "7"
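The `Gt` operator on `instance-generation` compares against the generation encoded in the Linode type ID prefix (`g6-`, `g7-`, `g8-`). A minimal sketch of how that comparison works — this is an illustration of the naming convention, not the provider's actual code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// generationOf extracts the numeric generation from a Linode type ID
// such as "g8-dedicated-16-8" (generation 8) or "g6-standard-2" (generation 6).
func generationOf(typeID string) (int, bool) {
	prefix, _, ok := strings.Cut(typeID, "-")
	if !ok || !strings.HasPrefix(prefix, "g") {
		return 0, false
	}
	gen, err := strconv.Atoi(prefix[1:])
	if err != nil {
		return 0, false
	}
	return gen, true
}

func main() {
	for _, id := range []string{"g6-standard-2", "g7-highmem-1", "g8-dedicated-16-8"} {
		gen, _ := generationOf(id)
		// Mirror the NodePool requirement above: instance-generation Gt "7".
		fmt.Printf("%s: generation %d, matches Gt 7: %v\n", id, gen, gen > 7)
	}
}
```

With the requirement above, only the g8 IDs survive, which matches the candidate list in the NodeClaim output below.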

When scaling up the inflate deployment from the README example, make sure the region you picked actually supports the requested plan type. us-southeast didn't, and the NodeClaim reflects that error in its status:

k get nodeclaim -o yaml
apiVersion: v1
items:
- apiVersion: karpenter.sh/v1
  kind: NodeClaim
  metadata:
    annotations:
      karpenter.sh/nodeclaim-min-values-relaxed: "false"
      karpenter.sh/nodepool-hash: "14335507968772556748"
      karpenter.sh/nodepool-hash-version: v3
    creationTimestamp: "2026-01-23T18:55:02Z"
    finalizers:
    - karpenter.sh/termination
    generateName: default-
    generation: 1
    labels:
      karpenter.k8s.linode/linodenodeclass: default
      karpenter.sh/nodepool: default
    name: default-dmxxt
    ownerReferences:
    - apiVersion: karpenter.sh/v1
      blockOwnerDeletion: true
      kind: NodePool
      name: default
      uid: ca6383d8-1d65-4961-bf33-f953966ec1d5
    resourceVersion: "124039"
    uid: ba203ee6-be1e-4f7a-85ca-58b1c19dfe74
  spec:
    expireAfter: 720h
    nodeClassRef:
      group: karpenter.k8s.linode
      kind: LinodeNodeClass
      name: default
    requirements:
    - key: kubernetes.io/os
      operator: In
      values:
      - linux
    - key: karpenter.sh/capacity-type
      operator: In
      values:
      - on-demand
    - key: karpenter.k8s.linode/instance-generation
      operator: Gt
      values:
      - "7"
    - key: karpenter.sh/nodepool
      operator: In
      values:
      - default
    - key: karpenter.k8s.linode/linodenodeclass
      operator: In
      values:
      - default
    - key: node.kubernetes.io/instance-type
      operator: In
      values:
      - g8-dedicated-128-32
      - g8-dedicated-128-64
      - g8-dedicated-16-8
      - g8-dedicated-256-128
      - g8-dedicated-256-64
      - g8-dedicated-32-16
      - g8-dedicated-32-8
      - g8-dedicated-512-128
      - g8-dedicated-512-256
      - g8-dedicated-64-16
      - g8-dedicated-64-32
      - g8-dedicated-96-24
      - g8-dedicated-96-48
    - key: kubernetes.io/arch
      operator: In
      values:
      - amd64
    resources:
      requests:
        cpu: 5250m
        pods: "8"
  status:
    conditions:
    - lastTransitionTime: "2026-01-23T18:55:02Z"
      message: object is awaiting reconciliation
      observedGeneration: 1
      reason: AwaitingReconciliation
      status: Unknown
      type: Initialized
    - lastTransitionTime: "2026-01-23T18:55:02Z"
      message: 'Failed to create LKE node pool: [400] [type] The Linode plan g8-dedicated-16-8
        is not currently available in the selected region. Please select another region
        or plan type, or contact Support for assistance.'
      observedGeneration: 1
      reason: NodePoolCreationFailed
      status: Unknown
      type: Launched
    - lastTransitionTime: "2026-01-23T18:55:02Z"
      message: Node not registered with cluster
      observedGeneration: 1
      reason: NodeNotFound
      status: Unknown
      type: Registered
    - lastTransitionTime: "2026-01-23T18:55:02Z"
      message: Initialized=Unknown, Launched=Unknown, Registered=Unknown
      observedGeneration: 1
      reason: ReconcilingDependents
      status: Unknown
      type: Ready
kind: List
metadata:
  resourceVersion: ""
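The failure reason lives in the `Launched` condition of the status block above. A hedged sketch of pulling it out programmatically, assuming you fetch the conditions as JSON (e.g. via `kubectl get nodeclaim <name> -o json`) — the sample data here is trimmed from the output above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Condition mirrors the fields shown in the NodeClaim status above.
type Condition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

// launchedCondition finds the condition of type "Launched", if present.
func launchedCondition(conds []Condition) (Condition, bool) {
	for _, c := range conds {
		if c.Type == "Launched" {
			return c, true
		}
	}
	return Condition{}, false
}

func main() {
	// Trimmed from the NodeClaim status shown above.
	raw := `[
	  {"type":"Initialized","status":"Unknown","reason":"AwaitingReconciliation","message":"object is awaiting reconciliation"},
	  {"type":"Launched","status":"Unknown","reason":"NodePoolCreationFailed","message":"Failed to create LKE node pool: [400] ..."}
	]`
	var conds []Condition
	if err := json.Unmarshal([]byte(raw), &conds); err != nil {
		panic(err)
	}
	if c, ok := launchedCondition(conds); ok {
		fmt.Printf("Launched: %s (%s)\n", c.Status, c.Reason)
	}
}
```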

If you want to check the other two newly supported fields, scale inflate down to 0 (make sure the NodeClaim goes away), remove the generation requirement from the NodePool, and substitute:

      - key: karpenter.k8s.linode/instance-class
        operator: In
        values:
        - dedicated

then scale back up to request dedicated instances only. The same applies to accelerated devices (again, this might not work in the region you picked; just check the NodeClaim):

      - key: karpenter.k8s.linode/instance-accelerated-devices-count
        operator: Gt
        values:
        - "1"

or network out:

    - key: karpenter.k8s.linode/instance-network-out
      operator: Gt
      values:
      - "1000"
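All of these requirement fragments boil down to simple predicates over instance-type attributes. A minimal sketch of that `Gt`/`In`-style narrowing — the attribute values below are illustrative placeholders, not real Linode specs:

```go
package main

import "fmt"

// instanceType holds only the attributes the new requirement keys select on.
// The values used below are illustrative, not authoritative Linode data.
type instanceType struct {
	ID                 string
	Class              string // nanode, standard, highmem, dedicated, gpu
	AcceleratedDevices int
	NetworkOutMbps     int
}

// filter keeps types satisfying the predicate, the way a requirement
// narrows the NodeClaim's candidate instance-type list.
func filter(types []instanceType, keep func(instanceType) bool) []instanceType {
	var out []instanceType
	for _, t := range types {
		if keep(t) {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	types := []instanceType{
		{ID: "g6-nanode-1", Class: "nanode", NetworkOutMbps: 1000},
		{ID: "g8-dedicated-16-8", Class: "dedicated", NetworkOutMbps: 6000},
	}
	// instance-network-out Gt "1000"
	for _, t := range filter(types, func(t instanceType) bool { return t.NetworkOutMbps > 1000 }) {
		fmt.Println(t.ID)
	}
}
```

Swapping the predicate for `t.Class == "dedicated"` or `t.AcceleratedDevices > 1` gives the other two requirements.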

From what I can tell, you can't just edit the NodePool and have an existing NodeClaim automatically update its list of candidate instance types.

@AshleyDumaine AshleyDumaine changed the title from "add support for instance generation, accelerated devices, and instanc…" to "feat: add support for instance generation, accelerated devices, and instance class" Jan 22, 2026
@codecov-commenter

codecov-commenter commented Jan 22, 2026

Codecov Report

❌ Patch coverage is 95.00000% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 67.08%. Comparing base (7be06aa) to head (bbf24a3).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
pkg/fake/linodeapi.go 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #44      +/-   ##
==========================================
+ Coverage   66.85%   67.08%   +0.23%     
==========================================
  Files          37       37              
  Lines        2311     2315       +4     
==========================================
+ Hits         1545     1553       +8     
+ Misses        616      613       -3     
+ Partials      150      149       -1     


@AshleyDumaine
Contributor Author

AshleyDumaine commented Jan 23, 2026

It would seem we can't filter by region for ListTypes?

panic: listing linode instance types, [400] [X-Filter] Cannot filter on region

We apparently would just have to grab everything and filter client-side.

EDIT: There's no reliable way to do this, so instead we just make sure the error is clear to the user in the NodeClaim status.
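For the attributes that *are* in the type data, client-side filtering after one unfiltered list call is straightforward; region availability just isn't among them, which is why the error surfaces in the NodeClaim instead. A sketch over a hypothetical pre-fetched list of type IDs (the real API call is omitted to keep this self-contained):

```go
package main

import (
	"fmt"
	"strings"
)

// classOf derives the class from a Linode type ID like "g8-dedicated-16-8":
// the segment after the generation prefix. Illustrative parsing only.
func classOf(typeID string) string {
	parts := strings.Split(typeID, "-")
	if len(parts) < 2 {
		return ""
	}
	return parts[1]
}

func main() {
	// Pretend this came back from a single unfiltered list-types call.
	all := []string{"g6-nanode-1", "g6-standard-4", "g8-dedicated-16-8", "g7-highmem-2"}
	var dedicated []string
	for _, id := range all {
		if classOf(id) == "dedicated" {
			dedicated = append(dedicated, id)
		}
	}
	fmt.Println(dedicated)
}
```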

… out field supported by LinodeType data, fix instance class
@AshleyDumaine AshleyDumaine changed the title from "feat: add support for instance generation, accelerated devices, and instance class" to "feat: add support for instance generation, accelerated devices, instance class, and network out" Jan 23, 2026
@komer3
Contributor

komer3 commented Jan 23, 2026

Could we add some usage docs on the new fields?

fix the CRD versioning to indicate this is still alpha
feat: add support for instance disk and gpu name requirements
komer3 and others added 2 commits January 26, 2026 12:15

Removes the annoying `warning: unhandled Platform key FamilyDisplayName` message that constantly prints to the terminal. Check the links in the init_hook for more background on the issue.
@AshleyDumaine AshleyDumaine merged commit 237f3a9 into main Jan 26, 2026
5 checks passed
@AshleyDumaine AshleyDumaine deleted the extra-requirements-support branch January 28, 2026 20:16