InfraRunBook

    Kubernetes RBAC Roles and ClusterRoles Explained

    Kubernetes
    Published: Apr 8, 2026
    Updated: Apr 8, 2026

    A practical deep-dive into Kubernetes RBAC Roles and ClusterRoles — how they work, how to bind them, and how to apply least-privilege access in real production clusters.


    If you've spent any time managing a Kubernetes cluster in production, you've almost certainly hit an access denied error that made no sense at first glance. Nine times out of ten, RBAC is either the culprit or the cure. Kubernetes Role-Based Access Control is the mechanism that governs what users, service accounts, and automated processes are allowed to do inside your cluster. Get it right and you have a clean, auditable security boundary. Get it wrong and you either lock legitimate workloads out, or — far worse — leave the blast radius of a compromised service account completely wide open.

    This article breaks down the two primary RBAC primitives you'll work with every day: Roles and ClusterRoles. We'll cover what they are, how the authorization model actually works under the hood, how to bind them to subjects, and the real-world patterns that work well in practice — plus the mistakes I've watched burn teams in production.

    What RBAC Actually Is

    RBAC in Kubernetes is an authorization strategy that governs access to the Kubernetes API server. It graduated to stable in Kubernetes 1.8 and is now the default and recommended authorization mode for virtually every cluster. The core idea is straightforward: instead of granting permissions directly to individuals, you define named roles that bundle a set of permissions, then bind those roles to specific subjects — users, groups, or service accounts.

    The permissions themselves are expressed as combinations of verbs (get, list, watch, create, update, patch, delete, deletecollection), resources (pods, services, deployments, configmaps, and so on), and optionally specific resource names. That's the whole model. There's no concept of deny rules in Kubernetes RBAC — everything is allow-only, and if no rule grants access, the request is denied by default. Simple in theory, powerful in practice.
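    The optional resource-name restriction is worth seeing concretely. A sketch, with illustrative names, of a Role that limits updates to one specific ConfigMap rather than all of them:

```yaml
# Hypothetical example: limit updates to a single named ConfigMap.
# Note: resourceNames cannot restrict list, watch, create, or
# deletecollection — for create, the object's name isn't known yet.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: applications
  name: app-config-editor
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["app-settings"]   # only this ConfigMap
  verbs: ["get", "update", "patch"]
```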

    Roles vs ClusterRoles: Scope Is Everything

    Here's the distinction that trips people up constantly. A Role is namespace-scoped. It exists within a specific namespace, and the permissions it grants only apply to resources in that same namespace. A ClusterRole is cluster-scoped — it lives outside any namespace and its permissions can apply either cluster-wide or be bound down to a specific namespace through a RoleBinding.

    In practice, a Role is the right tool when you're granting a developer or service account access to resources within a single team's namespace. A ClusterRole is the right tool when you need to grant access to cluster-scoped resources — nodes, persistent volumes, namespaces themselves, custom resource definitions — or when you want a single reusable role definition that you'll bind across multiple namespaces.

    There's a subtlety here that's easy to miss: you can bind a ClusterRole using a RoleBinding, which constrains the effective permissions to just the namespace where the RoleBinding lives. This is actually a very useful pattern. Define the ClusterRole once, reuse it via RoleBindings in every namespace where it's needed. No need to duplicate Role definitions across dozens of namespaces.

    The Anatomy of a Role

    Let me show you what these look like on the wire before going deeper. A basic Role granting read access to pods inside a specific namespace:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: monitoring
      name: pod-reader
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]

    The apiGroups field is something that confuses a lot of people early on. Core API resources (pods, services, configmaps, secrets, namespaces) live in the empty-string API group (""). Resources from extensions live in their own groups: apps for deployments and statefulsets, batch for jobs and cronjobs, networking.k8s.io for ingresses, and so on. Specify the wrong API group and your rule simply won't match: no error, no warning, just silent failure.

    A ClusterRole looks structurally identical, just without the namespace field in metadata and with kind: ClusterRole:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: node-reader
    rules:
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["get", "list", "watch"]
    - apiGroups: ["metrics.k8s.io"]
      resources: ["nodes", "pods"]
      verbs: ["get", "list"]

    Nodes are a cluster-scoped resource. You cannot grant access to nodes through a namespaced Role; the rule would simply have no effect. This is one of those cases where you must use a ClusterRole.

    RoleBindings and ClusterRoleBindings

    A Role or ClusterRole sitting alone does nothing. It's a definition, not an assignment. To actually grant permissions to a subject, you need a binding. RoleBinding binds a role — either a Role or a ClusterRole — to one or more subjects within a specific namespace. ClusterRoleBinding binds a ClusterRole to subjects at the cluster level, with no namespace restriction.

    Here's a RoleBinding that grants the pod-reader Role to a service account:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods-binding
      namespace: monitoring
    subjects:
    - kind: ServiceAccount
      name: infrarunbook-admin
      namespace: monitoring
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

    And here's the same grant, but using a ClusterRole bound through a RoleBinding, effective only within the monitoring namespace despite the ClusterRole being cluster-scoped:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods-binding-clusterrole
      namespace: monitoring
    subjects:
    - kind: ServiceAccount
      name: infrarunbook-admin
      namespace: monitoring
    roleRef:
      kind: ClusterRole
      name: pod-reader-cluster
      apiGroup: rbac.authorization.k8s.io

    One critical thing to understand about the roleRef field: it is immutable. Once a binding is created, you cannot change which role it points to. If you need to change the roleRef, delete and recreate the binding. This isn't a bug; it's intentional. It prevents someone from quietly escalating a binding to a more powerful role by patching an existing object.

    How the Authorization Flow Works

    When a request hits the Kubernetes API server, it passes through several authorization stages. For RBAC, the authorizer collects all RoleBindings and ClusterRoleBindings that reference the requesting subject — the user, service account, or group. It then gathers all rules from the referenced Roles and ClusterRoles, and checks whether any single rule allows the requested verb on the requested resource in the requested namespace. If any rule matches, the request is allowed. If none match, it's denied with a 403.
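    You can query the authorizer directly with a SubjectAccessReview, the same API that kubectl auth can-i uses under the hood. A sketch, with an illustrative subject and namespace:

```yaml
# Ask the API server: may this service account list pods in monitoring?
# Submit with: kubectl create -f sar.yaml -o yaml
# The decision comes back in the returned object's status.allowed field.
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:monitoring:infrarunbook-admin
  resourceAttributes:
    namespace: monitoring
    verb: list
    resource: pods
```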

    There are no priorities, no ordering, no overrides. RBAC rules are purely additive. This is why there are no deny rules: the model doesn't support negation. If you need to prevent a subject from doing something, you simply don't grant the permission. If they're already getting it from somewhere else, like a ClusterRoleBinding to cluster-admin, removing that binding is the only lever you have.

    Aggregated ClusterRoles

    Kubernetes ships with a feature that doesn't get nearly enough attention: aggregated ClusterRoles. You can define a ClusterRole with an aggregationRule that automatically pulls in rules from other ClusterRoles based on label selectors. The built-in admin, edit, and view ClusterRoles use this mechanism: when you install a CRD that ships its own RBAC rules with the right labels, those rules automatically appear in the aggregated roles without any manual intervention.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: monitoring-aggregate
    aggregationRule:
      clusterRoleSelectors:
      - matchLabels:
          rbac.infrarunbook.solvethenetwork.com/aggregate-to-monitoring: "true"
    rules: []
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: prometheus-scrape-rules
      labels:
        rbac.infrarunbook.solvethenetwork.com/aggregate-to-monitoring: "true"
    rules:
    - apiGroups: [""]
      resources: ["pods", "services", "endpoints"]
      verbs: ["get", "list", "watch"]

    The rules in prometheus-scrape-rules will automatically merge into monitoring-aggregate. This is enormously useful when managing RBAC across a complex cluster where multiple teams deploy operators and custom resources: each team manages its own ClusterRole fragment without needing to edit a central, shared role definition.

    Real-World Examples

    Let me walk through a few patterns I've actually used in production environments rather than contrived toy examples.

    CI/CD Service Account With Least-Privilege Deploy Access

    A CI runner on sw-infrarunbook-01 needs to deploy into the applications namespace. It shouldn't be able to touch secrets directly, read other namespaces, or modify RBAC. Here's the role:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: ci-deployer
      namespace: applications
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments", "statefulsets", "daemonsets"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
    - apiGroups: [""]
      resources: ["services", "configmaps"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: ci-deployer-binding
      namespace: applications
    subjects:
    - kind: ServiceAccount
      name: infrarunbook-admin
      namespace: ci-system
    roleRef:
      kind: Role
      name: ci-deployer
      apiGroup: rbac.authorization.k8s.io

    Notice the service account lives in the ci-system namespace but the binding, and thus the permissions, applies to the applications namespace. You can bind cross-namespace. The subject's namespace and the binding's namespace don't have to match.
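    For completeness, the service account referenced above is a separate object in its own namespace. A minimal sketch:

```yaml
# The CI service account lives in ci-system; bindings in other
# namespaces can still reference it by name and namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: infrarunbook-admin
  namespace: ci-system
```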

    Read-Only Cluster Access for Observability Tools

    A monitoring agent needs to scrape metrics and read pod status across every namespace. This is a legitimate use case for a ClusterRoleBinding:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: cluster-observer
    rules:
    - apiGroups: [""]
      resources: ["pods", "nodes", "services", "endpoints", "namespaces", "events"]
      verbs: ["get", "list", "watch"]
    - apiGroups: ["apps"]
      resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
      verbs: ["get", "list", "watch"]
    - apiGroups: ["metrics.k8s.io"]
      resources: ["nodes", "pods"]
      verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cluster-observer-binding
    subjects:
    - kind: ServiceAccount
      name: infrarunbook-admin
      namespace: monitoring
    roleRef:
      kind: ClusterRole
      name: cluster-observer
      apiGroup: rbac.authorization.k8s.io

    Namespace Admin Without cluster-admin

    In my experience, one of the most common mistakes is giving developers cluster-admin because it's "easier." The built-in admin ClusterRole is almost always sufficient for a namespace owner: it grants full control over all namespaced resources, including RBAC within that namespace, but explicitly excludes the ability to modify resource quotas or the namespace object itself.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: namespace-admin-binding
      namespace: team-platform
    subjects:
    - kind: User
      name: infrarunbook-admin@solvethenetwork.com
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: admin
      apiGroup: rbac.authorization.k8s.io

    That user now has full admin rights to the team-platform namespace and cannot touch anything else in the cluster. Clean, auditable, and genuinely least-privilege.

    Debugging and Auditing RBAC

    When something isn't working, reach for kubectl auth can-i first. It's a direct query to the RBAC authorizer and returns a simple yes or no:

    # Can the infrarunbook-admin service account list pods in the monitoring namespace?
    kubectl auth can-i list pods \
      --namespace monitoring \
      --as system:serviceaccount:monitoring:infrarunbook-admin
    
    # What can that service account actually do in monitoring?
    kubectl auth can-i --list \
      --namespace monitoring \
      --as system:serviceaccount:monitoring:infrarunbook-admin

    The --list flag is underused. It dumps every allowed action for a given subject in a namespace, which is invaluable for auditing whether a service account has accumulated more permissions than intended over time. For a broader audit, here's a quick one-liner to see everywhere a specific account has been granted access:

    kubectl get rolebindings,clusterrolebindings \
      -A -o wide | grep infrarunbook-admin

    The community tools rbac-lookup and kubectl-who-can make this significantly more ergonomic in large clusters. Worth having in your toolkit alongside the built-in commands.

    Common Misconceptions

    ClusterRoleBindings are always more powerful than RoleBindings. Not quite. A ClusterRoleBinding to a scoped-down ClusterRole can be more restrictive than a RoleBinding to the built-in admin ClusterRole. The binding type determines scope; the permissions are determined entirely by the role's rules. Don't conflate "cluster-scoped binding" with "powerful."

    RBAC controls what pods can do at the OS level. It doesn't. RBAC governs access to the Kubernetes API server. What a container process can do at the Linux kernel level is governed by Pod Security Admission, seccomp profiles, AppArmor, and similar mechanisms. A pod with no service account token still runs with whatever Linux capabilities its security context allows; RBAC has nothing to say about that.

    Removing a RoleBinding immediately terminates in-progress requests. RBAC decisions happen at the time of each API request. Long-running watches, like a controller listening for pod changes, are re-evaluated periodically, but revocation isn't instantaneous at the application layer. For high-security revocations you should also rotate the associated service account token.

    Wildcards are safe if you trust the user. I've seen teams use resources: ["*"] and verbs: ["*"] in roles handed to internal developers because "they're trusted." This is how secrets get bulk-exported, RBAC gets escalated, and audit trails become meaningless. Wildcards are occasionally legitimate for break-glass cluster-admin scenarios. They have no place in roles granted to service accounts or regular users.

    RBAC is too complex to manage at scale, so just use cluster-admin. This is the most dangerous misconception of all. Yes, RBAC management at scale is genuinely hard without tooling. But the answer is to invest in GitOps-driven RBAC management — Flux or ArgoCD syncing Role and RoleBinding manifests from a version-controlled repo — not to flatten everything to cluster-admin and hope for the best. I have seen the blast radius of a compromised cluster-admin service account. It is not something you want to experience firsthand.


    RBAC isn't glamorous and it rarely comes up in demos, but it's one of the most operationally important subsystems in any Kubernetes cluster. Understanding the difference between a Role and a ClusterRole, knowing when to use a RoleBinding versus a ClusterRoleBinding, and building the habit of auditing what service accounts can actually do — these are the skills that separate clusters that are genuinely secure from clusters that merely look secure from the outside.

    The model is elegant once it clicks: define permissions in roles, attach roles to subjects through bindings, let the API server enforce on every request. Start with the least privilege your workloads actually need, layer in kubectl auth can-i --list checks as part of your deployment reviews, and you'll catch the surprises before they become incidents.

    Frequently Asked Questions

    What is the difference between a Role and a ClusterRole in Kubernetes?

    A Role is namespace-scoped and only grants permissions to resources within a single namespace. A ClusterRole is cluster-scoped and can grant permissions to cluster-wide resources like nodes and persistent volumes, or be reused across multiple namespaces via RoleBindings.

    Can I use a ClusterRole with a RoleBinding instead of a ClusterRoleBinding?

    Yes, and this is a recommended pattern. Binding a ClusterRole with a RoleBinding restricts the effective permissions to just the namespace where the RoleBinding lives, letting you define the role once and reuse it across namespaces without duplicating Role definitions.

    How do I debug RBAC permission errors in Kubernetes?

    Use kubectl auth can-i to test specific permissions: 'kubectl auth can-i list pods --namespace monitoring --as system:serviceaccount:monitoring:my-sa'. The --list flag shows all allowed actions for a subject in a namespace, which is useful for audits.

    Why is the roleRef field in a RoleBinding immutable?

    Kubernetes makes roleRef immutable to prevent privilege escalation attacks where someone patches an existing binding to point to a more permissive role. To change the roleRef you must delete and recreate the binding, which creates a clear audit trail.

    Does Kubernetes RBAC support deny rules?

    No. Kubernetes RBAC is purely additive — rules only allow access, never explicitly deny it. If no rule grants a permission, the request is denied by default. To revoke access, you remove the binding or rule that was granting it.

    What are aggregated ClusterRoles and when should I use them?

    Aggregated ClusterRoles automatically merge rules from other ClusterRoles based on label selectors. They're ideal in multi-team clusters where each team can define their own ClusterRole fragment that automatically contributes to a parent role — used by the built-in admin, edit, and view roles for exactly this reason.
