InfraRunBook

    Cisco DHCP Relay Not Working

    Cisco
    Published: Apr 17, 2026
    Updated: Apr 17, 2026

    A senior engineer's walkthrough of every common reason Cisco DHCP relay breaks, from missing ip helper-address to ACLs, routing gaps, and layer-2 failures — with real CLI commands and fix procedures.


    Symptoms

    You walk in Monday morning and the helpdesk queue is already stacked with tickets: clients on VLAN 30 can't get an IP address. Windows machines are sitting on 169.254.x.x APIPA addresses. Linux hosts running dhclient just spin and time out. The DHCP server itself is running fine — you can confirm that by testing from its own subnet — but anything going through the relay is dead in the water.

    The tell-tale signs of a broken DHCP relay are consistent: client logs show repeated DHCPDISCOVER packets sent with no DHCPOFFER coming back, IPAM shows zero new leases being issued for that scope, and a packet capture on the client VLAN shows broadcasts going out but no unicast replies returning. If you're seeing any combination of these, the relay chain is broken somewhere. Let's work through every place it can break.


    Root Cause 1: ip helper-address Missing or Misconfigured

    Why It Happens

    This is the most common cause I've seen, and it almost always happens after a network change — a new VLAN gets provisioned, the SVI goes up, routing is confirmed, and then someone marks the ticket done without adding the helper. By default, IOS and IOS-XE silently drop UDP broadcast packets. A DHCP Discover goes out as a layer-2 broadcast (destination MAC ff:ff:ff:ff:ff:ff, destination IP 255.255.255.255). Without ip helper-address on the SVI, that packet hits the L3 interface and gets discarded. No error, no log message — just silence.

    The ip helper-address command does two things: it intercepts the UDP broadcast on the ingress interface and re-encapsulates it as a unicast packet destined for the configured DHCP server, and it fills in the giaddr (Gateway IP Address) field in the DHCP packet header so the server knows which subnet pool to allocate from. Without it, neither of those things happens and the client never hears back.
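The relay's rewrite is easy to see at the byte level. Below is a minimal, hypothetical Python sketch (illustration only, not Cisco code) of a bare DHCPDISCOVER and the giaddr rewrite a relay performs; the field offsets follow the fixed BOOTP header layout from RFC 2131:

```python
import socket
import struct

def make_discover(xid: int, mac: bytes) -> bytes:
    """Build a minimal DHCPDISCOVER: BOOTP header + magic cookie + option 53."""
    pkt = struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0x8000)  # op=BOOTREQUEST, Ethernet, broadcast flag
    pkt += b"\x00" * 16                  # ciaddr, yiaddr, siaddr, giaddr -- all zero from the client
    pkt += mac.ljust(16, b"\x00")        # chaddr, padded to 16 bytes
    pkt += b"\x00" * 192                 # sname + file, unused here
    pkt += b"\x63\x82\x53\x63"           # DHCP magic cookie
    pkt += b"\x35\x01\x01\xff"           # option 53 = DHCPDISCOVER, then End
    return pkt

def relay_rewrite(pkt: bytes, svi_ip: str) -> bytes:
    """What ip helper-address does to the payload: stamp giaddr with the
    ingress SVI address. (The relay also re-addresses the datagram as a
    unicast to the configured helper address.)"""
    # giaddr occupies bytes 24-27 of the fixed BOOTP header
    return pkt[:24] + socket.inet_aton(svi_ip) + pkt[28:]

discover = make_discover(0x1234, bytes.fromhex("0050b6123456"))
relayed = relay_rewrite(discover, "10.10.30.1")
print(socket.inet_ntoa(relayed[24:28]))  # prints 10.10.30.1 -- the pool selector the server sees
```

A client's own Discover carries giaddr 0.0.0.0; the non-zero giaddr is the only reason the server can tell a relayed VLAN 30 request apart from one on its local segment.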

    How to Identify It

    Check the SVI configuration directly. If you're troubleshooting VLAN 30 (client network 10.10.30.0/24), run:

    sw-infrarunbook-01# show running-config interface vlan 30
    Building configuration...
    
    Current configuration : 96 bytes
    !
    interface Vlan30
     ip address 10.10.30.1 255.255.255.0
     no ip proxy-arp
    end

    No ip helper-address line — that's your problem. You can also confirm from the relay switch itself by enabling DHCP packet debugging; on IOS, relay activity is reported under the DHCP server debug. If nothing appears while clients retry, the relay is not forwarding:

    sw-infrarunbook-01# debug ip dhcp server packet
    DHCP server packet debugging is on.
    
    ! ... time passes, clients retry DHCPDISCOVER ... no relay activity appears

    How to Fix It

    Add the helper address under the SVI, pointing to your DHCP server at 10.10.20.100:

    sw-infrarunbook-01# configure terminal
    sw-infrarunbook-01(config)# interface vlan 30
    sw-infrarunbook-01(config-if)# ip helper-address 10.10.20.100
    sw-infrarunbook-01(config-if)# end
    sw-infrarunbook-01# write memory

    After adding this, trigger a new DHCP request from a client and watch the debug output. You should immediately see the relay in action:

    DHCPD: DHCPDISCOVER received from client 0100.50b6.1234.56 on interface Vlan30.
    DHCPD: Sending DHCPOFFER to 10.10.30.10 (10.10.30.10).
    DHCPD: unicasting DHCPOFFER to client 0100.50b6.1234.56 at 10.10.30.10.

    If you need redundancy or have multiple DHCP servers, you can stack multiple ip helper-address statements on the same interface. IOS will forward the DHCP packet to all configured destinations simultaneously.


    Root Cause 2: DHCP Server Not Reachable from the Relay Agent

    Why It Happens

    The helper address is configured, but the relay agent can't actually reach the DHCP server. The relay converts the broadcast to a unicast packet and fires it off, but that packet either has no route, hits a dead next-hop, or is pointed at the wrong IP entirely. This is surprisingly common when DHCP servers get migrated to new subnets and someone updates the server config but forgets to update every ip helper-address across every SVI on every switch in the environment.

    In my experience, it also happens in environments where the DHCP server sits behind a firewall and a routing policy or firewall rule change silently breaks the path. The relay switch never knows — it faithfully forwards relayed packets into a void.

    How to Identify It

    First, verify the route to the DHCP server exists on the relay device:

    sw-infrarunbook-01# show ip route 10.10.20.100
    Routing entry for 10.10.20.100/32
      Known via "ospf 1", distance 110, metric 20, type intra area
      Last update from 10.10.10.2 on GigabitEthernet1/0/1, 00:15:43 ago
      Routing Descriptor Blocks:
      * 10.10.10.2, from 10.10.10.2, 00:15:43 ago, via GigabitEthernet1/0/1
          Route metric is 20, traffic share count is 1

    If the route is missing, you'll see:

    sw-infrarunbook-01# show ip route 10.10.20.100
    % Network not in table

    Then test reachability with a sourced ping from the SVI that's doing the relay. This step matters — the DHCP relay packet is sourced from the ingress SVI IP, which becomes the giaddr, so the return path must work for that source specifically:

    sw-infrarunbook-01# ping 10.10.20.100 source vlan 30
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 10.10.20.100, timeout is 2 seconds:
    Packet sent with a source address of 10.10.30.1
    .....
    Success rate is 0 percent (0/5)

    Zero percent. The relay can't reach the server. Also verify the helper-address IP itself — is it actually pointing at the right server?

    sw-infrarunbook-01# show running-config | include helper
     ip helper-address 10.10.20.101

    The server moved to 10.10.20.100 but the helper still says .101. The switch is faithfully relaying packets to an IP that no longer answers DHCP. This kind of stale config after a server migration is one of the most common root causes I've dealt with in larger environments.

    How to Fix It

    Correct the helper address, then verify end-to-end reachability before testing client renewals:

    sw-infrarunbook-01(config)# interface vlan 30
    sw-infrarunbook-01(config-if)# no ip helper-address 10.10.20.101
    sw-infrarunbook-01(config-if)# ip helper-address 10.10.20.100
    sw-infrarunbook-01(config-if)# end
    sw-infrarunbook-01# write memory

    Re-run the sourced ping. Once it returns 100% success, trigger a client DHCP request and confirm leases are being issued.


    Root Cause 3: Relay Agent Not Forwarding

    Why It Happens

    The helper is configured, the server is reachable, but packets still aren't getting forwarded. The most common culprit here is no service dhcp having been added to the device globally. This single command disables all DHCP services — including relay forwarding — in one shot. It's often introduced by automated CIS hardening scripts that confuse disabling the local DHCP server function with disabling relay. The scripts see "DHCP server" as a service to turn off, run no service dhcp, and break relay without realizing it. There's no error, no syslog event. It just stops working.

    A second cause is a conflict with DHCP relay information, also known as Option 82. If the switch inserts Option 82 into relayed packets but the upstream DHCP server isn't configured to accept or trust it, the server will silently drop the packets. Some servers treat unrecognized or untrusted Option 82 as a policy violation.
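To make the Option 82 conflict concrete, here is a hypothetical Python sketch (the function and sample values are mine, not Cisco's) of what insertion does to the DHCP options field: the relay appends option 82 carrying Circuit-ID and Remote-ID sub-options before the End option. A server that isn't configured to trust the option sees these extra bytes and may reject the whole packet:

```python
def insert_option82(options: bytes, circuit_id: bytes, remote_id: bytes) -> bytes:
    """Append relay agent information (option 82) ahead of the End option (0xff)."""
    sub = bytes([1, len(circuit_id)]) + circuit_id   # sub-option 1: Circuit-ID
    sub += bytes([2, len(remote_id)]) + remote_id    # sub-option 2: Remote-ID
    opt82 = bytes([82, len(sub)]) + sub
    end = options.rindex(b"\xff")                    # keep End as the final option
    return options[:end] + opt82 + options[end:]

# Options field of a bare DHCPDISCOVER: option 53 (message type) + End
opts = b"\x35\x01\x01\xff"
tagged = insert_option82(opts, b"Vlan30:Gi1/0/5", b"sw-infrarunbook-01")
print(tagged.hex())
```

The Circuit-ID here mimics the vlan-mod-port format shown in the show output above; the exact encoding varies by platform, so treat these values as placeholders.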

    How to Identify It

    Check whether the DHCP service has been globally disabled:

    sw-infrarunbook-01# show running-config | include service dhcp
    no service dhcp

    That single line shuts down everything. To investigate Option 82 settings:

    sw-infrarunbook-01# show ip dhcp relay information
    DHCP relay agent is enabled
      Relay option 82 insertion    : enabled
      Relay option 82 removal      : disabled
      Relay option 82 checking     : enabled
      Circuit-ID format            : vlan-mod-port

    If Option 82 checking is enabled and the server doesn't trust the option, it drops the packet. Enable relay debugging to see both directions of the relay conversation:

    sw-infrarunbook-01# debug ip dhcp relay packet
    
    DHCPD: relay forwarding DHCPDISCOVER from 0.0.0.0 to 10.10.20.100
    DHCPD: relay - received DHCPOFFER from 10.10.20.100
    DHCPD: relay forwarding DHCPOFFER to 10.10.30.1

    If you see the forward lines but no replies returning from the server, the problem is between the relay and the server. If you see absolutely nothing despite clients actively sending Discovers, the relay service itself is either disabled or the SVI isn't seeing the broadcasts at the layer-2 level (see Root Cause 4).

    How to Fix It

    Re-enable the DHCP service globally:

    sw-infrarunbook-01(config)# service dhcp

    If Option 82 is causing issues with a DHCP server that doesn't support it, disable the insertion:

    sw-infrarunbook-01(config)# no ip dhcp relay information option

    If your topology uses cascaded or stacked relays and you need the device to trust incoming packets that already contain Option 82:

    sw-infrarunbook-01(config)# ip dhcp relay information trust-all

    Root Cause 4: Broadcast Not Reaching the Relay Agent

    Why It Happens

    The relay is perfectly configured, but the client's DHCP Discover broadcast never reaches the SVI in the first place. This is a pure layer-2 problem: the VLAN may not be active on the switch, the SVI may be down, the access port may be assigned to the wrong VLAN, or the trunk between the access layer and the distribution layer may not be carrying that VLAN ID. The relay agent on the SVI never sees DHCPDISCOVER because the frame simply doesn't make it there.

    This one trips people up because the relay configuration can be absolutely correct and you'll still get nothing. All the debugging on the relay side shows zero activity — from the relay's perspective, nobody is asking for an address. The problem is entirely below it in the stack.

    How to Identify It

    Start by verifying the SVI state:

    sw-infrarunbook-01# show interface vlan 30
    Vlan30 is down, line protocol is down
      Hardware is EtherSVI, address is 0c75.bd12.3400 (bia 0c75.bd12.3400)
      Internet address is 10.10.30.1/24
      MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec

    Down/down on an SVI means there are no active, forwarding ports in VLAN 30 on this switch. Check the VLAN database and trunk configuration:

    sw-infrarunbook-01# show vlan id 30
    
    VLAN Name                             Status    Ports
    ---- -------------------------------- --------- -------------------------------
    30   CLIENT_VLAN                      active
    
    sw-infrarunbook-01# show interfaces trunk | include 30
    ! No output -- VLAN 30 is not in any trunk's allowed list

    VLAN 30 has no member ports and isn't carried on any trunk. Broadcasts from clients stay local to the access switch and never reach the SVI on the distribution layer. Also confirm the client's access port is in the right VLAN:

    sw-infrarunbook-01# show interface GigabitEthernet1/0/5 switchport
    Name: Gi1/0/5
    Switchport: Enabled
    Administrative Mode: static access
    Operational Mode: static access
    Access Mode VLAN: 20   <-- should be 30!

    The port is in VLAN 20, not 30. Client broadcasts go into the wrong VLAN, where either a different relay is configured or no relay exists at all. This is a classic fat-finger during port provisioning — the type of thing that's easy to miss when you're doing a batch of port configs late in the day.

    How to Fix It

    Correct the access port VLAN assignment:

    sw-infrarunbook-01(config)# interface GigabitEthernet1/0/5
    sw-infrarunbook-01(config-if)# switchport access vlan 30
    sw-infrarunbook-01(config-if)# end

    If VLAN 30 wasn't in the allowed list on the uplink trunk, add it explicitly:

    sw-infrarunbook-01(config)# interface GigabitEthernet1/0/1
    sw-infrarunbook-01(config-if)# switchport trunk allowed vlan add 30
    sw-infrarunbook-01(config-if)# end

    After fixing the layer-2 path, the SVI will transition to up/up within a few seconds as active ports join VLAN 30, and client DHCP Discovers will start reaching the relay agent.


    Root Cause 5: ACL Blocking DHCP Traffic

    Why It Happens

    Everything in the relay configuration looks correct, but an access control list is silently dropping the packets. ACLs on SVIs are common — they enforce inter-VLAN access policies. If the ACL was written without explicit provisions for DHCP traffic (UDP ports 67 and 68), the packets hit the implicit deny at the end of every IOS ACL and disappear. Because denied packets generate no syslog entries unless you've explicitly appended log to the deny ACE, these drops leave no trace in the logs.

    I've also seen this bite teams on outbound ACLs applied to the routed uplink toward the DHCP server — the relayed unicast packet from the switch gets dropped before it even leaves the device. Both directions matter: inbound on the client SVI blocks DHCPDISCOVER and DHCPREQUEST, and outbound toward the server blocks the relayed unicast. Either failure mode presents identically to the client.

    How to Identify It

    Check whether ACLs are applied to the relevant interface:

    sw-infrarunbook-01# show ip interface vlan 30
    Vlan30 is up, line protocol is up
      Internet address is 10.10.30.1/24
      Broadcast address is 255.255.255.255
      Inbound  access list is CLIENT_IN
      Outbound access list is not set

    There's an inbound ACL named CLIENT_IN. Inspect its contents and hit counters:

    sw-infrarunbook-01# show ip access-lists CLIENT_IN
    Extended IP access list CLIENT_IN
        10 permit tcp 10.10.30.0 0.0.0.255 any eq 80
        20 permit tcp 10.10.30.0 0.0.0.255 any eq 443
        30 permit tcp 10.10.30.0 0.0.0.255 any eq 22
        40 deny   ip any any (4821 matches)

    No permit for UDP 67 or 68. That 4821-match deny line is the smoking gun — DHCP broadcasts are in there. To confirm that DHCP traffic specifically is hitting the deny, use IP packet debug briefly:

    sw-infrarunbook-01# debug ip packet detail
    ! Trigger a DHCP renew from a client, then watch:
    
    IP: s=0.0.0.0 (Vlan30), d=255.255.255.255, len 328, rcvd 0
        UDP src=68, dst=67
    IP: tableid=0, s=0.0.0.0 (Vlan30), d=255.255.255.255, len 328
        access-list CLIENT_IN denied

    Confirmed. Turn off the debug immediately after collecting that output — debug ip packet on a busy switch will spike the control plane CPU and make the outage worse:

    sw-infrarunbook-01# no debug ip packet

    How to Fix It

    Insert a permit rule for DHCP before the deny. A DHCPDISCOVER comes from source IP 0.0.0.0 destined for 255.255.255.255, UDP source port 68, destination port 67. Note that lease renewals are unicast straight to the server on UDP 67, so if this ACL must cover them too, add a matching permit for the server IP as well. Sequence number 5 puts the broadcast permit at the top of the list:

    sw-infrarunbook-01(config)# ip access-list extended CLIENT_IN
    sw-infrarunbook-01(config-ext-nacl)# 5 permit udp any host 255.255.255.255 eq 67
    sw-infrarunbook-01(config-ext-nacl)# end

    Verify the updated ACL and confirm the deny count has stopped climbing:

    sw-infrarunbook-01# show ip access-lists CLIENT_IN
    Extended IP access list CLIENT_IN
        5 permit udp any host 255.255.255.255 eq 67 (22 matches)
        10 permit tcp 10.10.30.0 0.0.0.255 any eq 80
        20 permit tcp 10.10.30.0 0.0.0.255 any eq 443
        30 permit tcp 10.10.30.0 0.0.0.255 any eq 22
        40 deny   ip any any (4821 matches)

    The deny count is frozen and the new DHCP permit is accumulating matches. Clients should start receiving offers within seconds.


    Additional Causes Worth Checking

    DHCP Snooping Dropping Relay Replies

    If DHCP snooping is enabled globally and the uplink carrying relayed server replies isn't configured as a trusted port, the switch will silently drop DHCP OFFERs and ACKs arriving from the server direction. This is a common oversight when deploying DHCP snooping for the first time. Any port carrying server-sourced DHCP traffic — including uplinks to a core router doing relay — must be explicitly trusted:

    sw-infrarunbook-01(config)# interface GigabitEthernet1/0/1
    sw-infrarunbook-01(config-if)# ip dhcp snooping trust

    No Matching Scope on the DHCP Server

    The server receives the relayed packet, but has no scope matching the giaddr. Server logs will show something like "No subnet defined for 10.10.30.1" and the request is dropped without an offer. This isn't a relay problem — it's a server configuration problem — but it presents identically from the client's perspective. Always confirm the DHCP server has a scope covering the client subnet (10.10.30.0/24) with the default gateway option pointing to 10.10.30.1 before assuming the relay is at fault.
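The selection logic the giaddr drives on the server can be sketched as follows — a hypothetical illustration, not any particular DHCP server's code. The server looks for a configured subnet containing the giaddr and allocates from that pool, or drops the request when nothing matches:

```python
import ipaddress

# Hypothetical scope table; real servers build this from their configuration
scopes = {
    ipaddress.ip_network("10.10.20.0/24"): "SERVER_VLAN pool",
    ipaddress.ip_network("10.10.30.0/24"): "CLIENT_VLAN pool",
}

def select_scope(giaddr: str):
    """Pick the pool whose subnet contains the relay's giaddr."""
    addr = ipaddress.ip_address(giaddr)
    for subnet, pool in scopes.items():
        if addr in subnet:
            return pool
    return None  # "No subnet defined for <giaddr>" -> request dropped, no offer

print(select_scope("10.10.30.1"))   # relayed from Vlan30 -> CLIENT_VLAN pool
print(select_scope("10.10.40.1"))   # no matching scope -> None, client hears nothing
```

This is why the giaddr must be an SVI address the server has a scope for: a perfectly forwarded relay packet with an unrecognized giaddr fails just as silently as a missing helper.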


    Prevention

    Most DHCP relay failures are preventable with consistent operational habits. Build your VLAN provisioning runbooks so that ip helper-address is a required, checklist-verified step — not an afterthought. The single most common failure mode is a new VLAN being stood up correctly in every other way except for that one line. If you use configuration management tooling like Ansible or NSO, add a compliance check that flags any client-facing SVI missing a helper address and run it weekly. Catching drift before users do is always cheaper than troubleshooting at 2 AM.
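A minimal version of that compliance check can be sketched in Python — a rough illustration run against saved running-configs, where the regexes and the assumption that every SVI should carry a helper are mine:

```python
import re

def svis_missing_helper(config: str) -> list[str]:
    """Return SVI names that have no ip helper-address configured."""
    missing = []
    # Split the config into per-interface blocks (minimal parser, not full IOS grammar)
    for block in re.split(r"\n(?=interface )", config):
        m = re.match(r"interface (Vlan\d+)", block)
        if m and "ip helper-address" not in block:
            missing.append(m.group(1))
    return missing

sample = """interface Vlan20
 ip address 10.10.20.1 255.255.255.0
 ip helper-address 10.10.20.100
interface Vlan30
 ip address 10.10.30.1 255.255.255.0"""
print(svis_missing_helper(sample))  # flags the SVI provisioned without a helper
```

In practice you would feed this the output of a config backup job and exclude known non-client SVIs (loopback-style management VLANs, transit links) via an allowlist.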

    For ACLs, establish a team standard: every SVI ACL must include an explicit DHCP permit as one of the first entries — treat it the same way you'd treat a mandatory ICMP permit. Network change reviews should verify this before approval. Some teams keep a standard named ACL object-group for DHCP that gets imported into every new SVI policy automatically.

    On the server side, configure SNMP traps or syslog alerting for scope exhaustion and failed allocation events such as "no subnet defined." These server-side signals often surface relay problems faster than waiting for a helpdesk ticket. Pair this with IPAM monitoring so scope utilization is visible in a dashboard — if a scope that's normally 60% utilized drops to zero new assignments overnight, something upstream broke.

    When migrating a DHCP server to a new IP, run a global audit before cutting over: show running-config | include helper-address across every L3 switch and router in the environment. Find every reference to the old IP, update them all, then migrate the server. That five-minute audit prevents the scenario where clients in three VLANs lose DHCP because someone only updated the primary distribution switch and forgot about the two access-layer SVIs doing local relay.

    Document your relay topology in your IPAM or CMDB: which SVIs relay to which servers, which VLANs are affected, and which physical devices are in the path. When DHCP stops working under pressure, having that map lets you go straight to the right config instead of hunting across dozens of devices hoping to find the misconfiguration before the next escalation call.

    Frequently Asked Questions

    Why does DHCP relay require ip helper-address on a Cisco switch?

    By default, Cisco IOS and IOS-XE drop UDP broadcast packets, which is how DHCP Discover messages are sent. The ip helper-address command intercepts those broadcasts on the ingress SVI interface and re-forwards them as unicast packets to the configured DHCP server. It also sets the giaddr field in the DHCP packet so the server knows which subnet scope to allocate from. Without it, DHCP Discover packets never leave the local VLAN.

    How do I check if an ACL is blocking DHCP traffic on a Cisco IOS switch?

    Run 'show ip interface vlan X' to see which ACLs are applied to the SVI, then run 'show ip access-lists ACL_NAME' to inspect the rules and hit counters. If you see a high match count on a deny any any rule and no explicit permit for UDP port 67 or 68, the ACL is dropping DHCP. You can confirm with 'debug ip packet detail' — look for lines showing 'access-list denied' on UDP 67/68 traffic. Disable the debug immediately after confirming.

    What does 'no service dhcp' do to DHCP relay on Cisco IOS?

    The 'no service dhcp' command globally disables all DHCP processing on the device, including both the local DHCP server and the DHCP relay agent function. Even if ip helper-address is correctly configured on every SVI, no DHCP packets will be forwarded while this command is present. It is commonly introduced by hardening scripts that intend to disable only the local DHCP server but don't account for the relay role. Fix it by running 'service dhcp' in global configuration mode.

    Why is my Cisco SVI showing down/down even though VLAN exists in the database?

    An SVI transitions to up/up only when at least one port assigned to that VLAN is in a forwarding state (connected and not blocked by STP). If the VLAN exists in the VLAN database but all ports are down, the trunk isn't carrying that VLAN, or the access ports are assigned to the wrong VLAN, the SVI stays down. DHCP relay on a down SVI is completely non-functional — broadcasts from clients never reach it. Fix the layer-2 path first: correct the access port VLAN assignment or add the VLAN to the trunk's allowed list.

    How can I verify that DHCP relay is working correctly on Cisco IOS-XE?

    Use 'debug ip dhcp relay packet' to watch relay forwarding activity in real time. You should see lines for 'relay forwarding DHCPDISCOVER' when a client request arrives, and 'relay forwarding DHCPOFFER' when the server replies. Also check 'show ip dhcp relay information' to confirm the relay agent is enabled and Option 82 settings match your server's expectations. For a quick sanity check, run 'ping X.X.X.X source vlan Y' where X.X.X.X is the DHCP server IP and Y is the client VLAN — 100% success is required for relay to work.
