InfraRunBook

    Juniper MPLS LDP Session Down

    Juniper
    Published: Apr 18, 2026
    Updated: Apr 18, 2026

    A hands-on troubleshooting guide for Juniper MPLS LDP session failures, covering seven root causes with real JunOS CLI commands, actual error output, and step-by-step fixes.


    Symptoms

    An LDP session going down on a Juniper router is rarely subtle. Traffic starts black-holing, LSPs drop, and if you're watching your monitoring dashboards you'll see utilization tank on core links that were previously saturated. The first command most engineers reach for is show mpls ldp session, and what you're hoping not to see is this:

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp session
    Address          State       Connection   Hold time  I/O  Up/Dn
    10.0.0.2         None        Nonexistent  0          0/0  00:00:00

    Compare that against a healthy session:

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp session
    Address          State       Connection   Hold time  I/O  Up/Dn
    10.0.0.2         Operational Open         28         2/2  1d 04:33:12

    Beyond the session table, a downed LDP session usually presents with a recognizable cluster of symptoms. LSPs show Dn state in show mpls lsp. Syslog starts flooding with RPD_LDP_NBRDOWN: LDP neighbor 10.0.0.2 (inet) is down, reason: Neighbor down due to holdtime expiry. Running ping mpls ldp fec 10.0.0.2/32 returns nothing. And any services that rely on label-switched paths — L3VPNs, pseudowires, TE tunnels — start dropping traffic.

    The tricky part is that not every LDP failure looks identical. Sometimes the session oscillates between None and OpenRec. Sometimes the neighbor never appears at all. Sometimes the session is fully Operational but labeled traffic is still dropping for certain packet sizes. The root cause determines the symptom pattern, and that's why methodical diagnosis matters here.


    Cause 1: LDP Not Enabled on Interface

    Why It Happens

    This is the most common cause I've encountered when bringing up new MPLS links or after a configuration rollback. LDP in Junos requires explicit enablement on each interface that should participate in label distribution. Enabling LDP globally under protocols ldp doesn't automatically activate it on every interface — you have to list them individually. Miss one interface and that link will never send or receive LDP Hello packets, meaning the neighbor on the other end won't even know you're trying to form a session.

    The same failure mode hits you when MPLS is enabled on an interface but family mpls isn't configured under the logical unit. Both conditions silently prevent LDP from working on that link while everything else may look normal.

    How to Identify It

    Start by listing which interfaces LDP is currently running on:

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp interface
    Interface        State       Adjacencies  Hello Interval  Holdtime
    lo0.0            Enabled     0            5               15
    ge-0/0/1.0       Enabled     1            5               15

    If the interface facing your LDP peer is absent from this list, that's your problem. Now cross-reference with the MPLS interface table to see what's enabled at the MPLS layer:

    infrarunbook-admin@sw-infrarunbook-01> show mpls interface
    Interface        State       Administrative groups
    ge-0/0/0.0       Enabled     <none>
    ge-0/0/1.0       Enabled     <none>

    An interface showing up under show mpls interface but not show mpls ldp interface is a clear configuration gap. Confirm it in the running config:

    infrarunbook-admin@sw-infrarunbook-01> show configuration protocols ldp
    interface lo0.0;
    interface ge-0/0/1.0;

    ge-0/0/0.0 is missing from LDP even though MPLS is enabled on it.

    How to Fix It

    Add the missing interface to LDP and verify that family mpls is present on the unit:

    infrarunbook-admin@sw-infrarunbook-01# set protocols ldp interface ge-0/0/0.0
    infrarunbook-admin@sw-infrarunbook-01# set interfaces ge-0/0/0 unit 0 family mpls
    infrarunbook-admin@sw-infrarunbook-01# commit

    After committing, LDP will immediately start sending Hello packets out that interface. Within the default hello interval of 5 seconds, you should see the neighbor appear:

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp neighbor
    Address          Interface        Label space ID     Hold time
    10.1.1.2         ge-0/0/0.0       10.0.0.2:0         14

    Cause 2: Hello Holdtime Expired

    Why It Happens

    LDP relies on periodic UDP Hello messages to discover and maintain neighbor relationships. Link hellos go out every 5 seconds by default, with a holdtime of 15 seconds. If a router stops receiving Hellos from its neighbor before that holdtime runs out, it tears down the LDP session. Targeted hellos have different defaults — 15-second intervals and a 45-second holdtime — but the same principle applies.
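    Put concretely, those two sets of defaults tolerate the same number of consecutive lost Hellos. A quick sketch of the arithmetic (an illustrative helper, not a Junos API; timer jitter ignored):

```python
def lost_hellos_tolerated(hello_interval: int, holdtime: int) -> int:
    """How many consecutive Hello packets can be lost before the
    hold timer expires and the neighbor is declared down."""
    # Each received Hello resets the hold timer; expiry fires once
    # holdtime seconds elapse with no Hello received at all.
    return holdtime // hello_interval - 1

# Link hellos (5s interval / 15s holdtime) and targeted hellos
# (15s interval / 45s holdtime) both keep a 3:1 ratio: two lost
# Hellos are survivable, the third consecutive loss triggers expiry.
```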

    Holdtime expiry is often a symptom of something else: a flapping physical link, a CPU spike causing UDP drops, or a misconfigured holdtime on one side of the session. In my experience, the pure timer-mismatch case is less common than people expect. What's more common is intermittent packet loss on the link causing Hellos to go missing long enough to trigger expiry.

    How to Identify It

    The syslog message is very explicit when holdtime expiry is the trigger:

    Apr 19 09:22:14  sw-infrarunbook-01 rpd[1423]: RPD_LDP_NBRDOWN: LDP neighbor 10.0.0.2 (inet) is down, reason: Neighbor down due to holdtime expiry

    Check the configured hello parameters on both sides:

    infrarunbook-admin@sw-infrarunbook-01> show configuration protocols ldp
    interface ge-0/0/0.0 {
        hello-interval 5;
        hold-time 15;
    }
    interface lo0.0;

    If the remote peer has a different holdtime, LDP negotiates down to the lower value. You can see what was actually negotiated in the detailed session output:

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp session detail
    Address          State       Connection   Hold time
    10.0.0.2         OpenSent    Nonexistent  0
      Negotiated hold time: 0
      Peer's hold time: 0
      Local hold time: 15
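    The negotiation rule behind those fields is simple: per RFC 5036, each peer proposes a hold time and the session uses the smaller of the two proposals. A one-line sketch:

```python
def negotiated_holdtime(local_proposal: int, peer_proposal: int) -> int:
    # RFC 5036: the hold time in effect is the minimum of the
    # values the two peers propose to each other.
    return min(local_proposal, peer_proposal)
```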

    Also check for physical layer issues that might be dropping Hello packets before they arrive:

    infrarunbook-admin@sw-infrarunbook-01> show interfaces ge-0/0/0 detail | match "Input errors|Output errors|transitions"
        Input errors:
          Errors: 0, Drops: 0, Framing errors: 0, Runts: 0
        Output errors:
          Carrier transitions: 7, Errors: 0, Drops: 0

    Seven carrier transitions are the tell-tale sign. That link is flapping, and your Hellos aren't making it through consistently.

    How to Fix It

    For a pure timer mismatch, align the hello parameters on both peers:

    infrarunbook-admin@sw-infrarunbook-01# set protocols ldp interface ge-0/0/0.0 hello-interval 5
    infrarunbook-admin@sw-infrarunbook-01# set protocols ldp interface ge-0/0/0.0 hold-time 15
    infrarunbook-admin@sw-infrarunbook-01# commit

    If the link is physically unstable, increasing the holdtime buys you time while you fix the underlying physical problem — but don't treat that as a permanent solution. Fix the fiber, the SFP, or the upstream switch port. You can also raise the holdtime considerably (60 seconds or more) to make LDP sessions resilient against brief interruptions while you wait for a maintenance window to address the hardware.


    Cause 3: IGP Route to Peer Missing

    Why It Happens

    LDP sessions in Junos are established between router loopback addresses by default. The TCP connection that carries LDP messages — the actual session — is sourced from and destined to these loopbacks. For that TCP connection to come up, the IGP must have a valid, active route to the peer's loopback address. If OSPF or IS-IS hasn't converged, or if the loopback isn't being advertised into the IGP, LDP will never progress beyond the Hello exchange phase.

    I've seen this happen after IGP configuration changes where someone removed a loopback from the OSPF area by accident, or after a route policy change broke loopback advertisement. The physical link is up, Hellos are flowing, but the session stubbornly refuses to reach Operational state because the TCP handshake can't complete without a route to the peer's transport address.

    How to Identify It

    The session will sit in OpenRec, cycling repeatedly without ever establishing:

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp session
    Address          State       Connection   Hold time  I/O  Up/Dn
    10.0.0.2         OpenRec     Nonexistent  14         0/0  00:00:00

    Check whether you actually have a route to the peer's loopback:

    infrarunbook-admin@sw-infrarunbook-01> show route 10.0.0.2/32

    The command returns nothing — there is no route to the peer's loopback, learned or otherwise, so the LDP TCP connection has nowhere to go. Now check if OSPF is up and what it's advertising:

    infrarunbook-admin@sw-infrarunbook-01> show ospf neighbor
    Address          Interface              State     ID               Pri  Dead
    10.1.1.2         ge-0/0/0.0             Full      10.0.0.2         128    38
    
    infrarunbook-admin@sw-infrarunbook-01> show route protocol ospf
    inet.0: 15 destinations, 15 routes (15 active, 0 holddown, 0 hidden)
    
    10.1.1.0/30        *[OSPF/10] 00:12:33, metric 2
                        > to 10.1.1.2 via ge-0/0/0.0

    OSPF adjacency is Full but only the transit link prefix is being learned — no loopback. Check whether the loopback is participating in OSPF on the peer:

    infrarunbook-admin@peer-router> show ospf interface lo0.0
    OSPF interface lo0.0 is not found

    There it is. The peer's loopback isn't participating in OSPF at all.

    How to Fix It

    Add the loopback to OSPF on the peer router. Loopbacks should always be passive — they don't form adjacencies, they just advertise a reachability prefix:

    infrarunbook-admin@peer-router# set protocols ospf area 0.0.0.0 interface lo0.0 passive
    infrarunbook-admin@peer-router# commit

    Within seconds, the route should appear on sw-infrarunbook-01:

    infrarunbook-admin@sw-infrarunbook-01> show route 10.0.0.2/32
    
    inet.0: 16 destinations, 16 routes (16 active, 0 holddown, 0 hidden)
    
    10.0.0.2/32        *[OSPF/10] 00:00:08, metric 1
                        > to 10.1.1.2 via ge-0/0/0.0

    With a valid IGP route to the peer loopback, the LDP TCP connection establishes and the session comes up to Operational state shortly after.


    Cause 4: Authentication Mismatch

    Why It Happens

    LDP supports MD5 authentication on its TCP sessions using the same mechanism as BGP. When one side has authentication configured and the other doesn't — or when both sides have different keys — the TCP handshake fails at the kernel level. LDP never even gets to negotiate because the connection is torn down before LDP can exchange initialization messages. This is particularly frustrating to diagnose because LDP itself has nothing to report; the failure is happening one layer below it.

    This crops up most often during new neighbor provisioning where someone configures authentication on the existing router but doesn't add it on the new one, or during password rotation where one side gets updated and the other is missed. Don't underestimate how common that second scenario is in organizations running manual change management.

    How to Identify It

    The session will show no connection despite Hellos flowing normally:

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp session detail
    Address          State       Connection   Hold time
    10.0.0.2         None        Nonexistent  0
      Negotiated hold time: 0
      Peer's hold time: 0
      Local hold time: 15
      LDP Hello: 10.1.1.2 -> 224.0.0.2, active, ge-0/0/0.0

    Hellos are active (neighbor discovery is working fine) but no TCP connection is forming. Check whether authentication is configured locally:

    infrarunbook-admin@sw-infrarunbook-01> show configuration protocols ldp
    authentication-key "$9$abc123encryptedvalue"; ## SECRET-DATA
    interface ge-0/0/0.0;
    interface lo0.0;

    Authentication is set locally. Now look at LDP statistics for repeated TCP connection failures:

    infrarunbook-admin@sw-infrarunbook-01> show ldp statistics
      TCP:
        Connections established: 0
        Connections failed: 18
        Keepalives sent: 0
        Keepalives received: 0

    Eighteen failures with zero established connections is a strong indicator of TCP-layer rejection. Confirm it with a packet capture:

    infrarunbook-admin@sw-infrarunbook-01> monitor traffic interface ge-0/0/0 matching "tcp and port 646" count 10
    09:45:12.334521  10.0.0.2.49223 > 10.0.0.1.646: Flags [S], seq 2847362910
    09:45:12.334788  10.0.0.1.646 > 10.0.0.2.49223: Flags [R.], seq 0
    09:45:17.891023  10.0.0.2.49445 > 10.0.0.1.646: Flags [S], seq 3021748833
    09:45:17.891301  10.0.0.1.646 > 10.0.0.2.49445: Flags [R.], seq 0

    TCP SYN followed immediately by RST — classic authentication rejection at the TCP layer.
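    If you want to script this check across a fleet, the same heuristic — TCP connections failing with none ever established — can be pulled straight out of the statistics text. A sketch, assuming output shaped like the sample above:

```python
import re

def suspect_tcp_rejection(stats_text: str) -> bool:
    """True when LDP TCP connections keep failing without any ever
    establishing -- the signature of an authentication mismatch."""
    def counter(name: str) -> int:
        m = re.search(rf"{name}:\s*(\d+)", stats_text)
        return int(m.group(1)) if m else 0

    established = counter("Connections established")
    failed = counter("Connections failed")
    return established == 0 and failed > 0
```

    Feed it the raw text of show ldp statistics; anything it flags still deserves packet-capture confirmation.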

    How to Fix It

    Ensure both sides have the same authentication key configured under protocols ldp. On sw-infrarunbook-01:

    infrarunbook-admin@sw-infrarunbook-01# set protocols ldp authentication-key "SharedLDP2024!"
    infrarunbook-admin@sw-infrarunbook-01# commit

    On the peer router, set the identical key:

    infrarunbook-admin@peer-router# set protocols ldp authentication-key "SharedLDP2024!"
    infrarunbook-admin@peer-router# commit

    If you'd rather remove authentication entirely on both sides, that's equally valid — just be consistent:

    infrarunbook-admin@sw-infrarunbook-01# delete protocols ldp authentication-key
    infrarunbook-admin@sw-infrarunbook-01# commit

    Once the keys match (or are both absent), the TCP session will establish within seconds and LDP will progress to Operational state.


    Cause 5: MTU Issue on MPLS Path

    Why It Happens

    MPLS adds a 4-byte label header per label to every packet it forwards. If your transit links are running at standard 1500-byte Ethernet MTU, large packets get fragmented or silently dropped the moment MPLS labels push them over that limit. The LDP session itself will look completely healthy — keepalives are small packets and sail through without issue — while your production traffic is black-holing for anything near full MTU size. This is one of the most frustrating MPLS failure modes precisely because the session state gives you no indication that anything is wrong.

    For a single-label stack you need at least 1504 bytes. For MPLS VPNs using a two-label stack, you need at least 1508. The safe, consistent answer is to configure jumbo frames everywhere in the MPLS domain and never think about it again.
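    The arithmetic is worth sanity-checking whenever the label stack depth changes. A minimal sketch (assumes a 1500-byte IP MTU and the 4-byte shim header per label defined in RFC 3032):

```python
def required_link_mtu(label_depth: int, ip_mtu: int = 1500) -> int:
    """Minimum link MTU needed to carry a full-size IP packet
    with label_depth MPLS labels pushed on top."""
    MPLS_SHIM = 4  # bytes per label (RFC 3032)
    return ip_mtu + MPLS_SHIM * label_depth
```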

    How to Identify It

    Look for application-level symptoms: large TCP transfers hanging after the three-way handshake, application timeouts that resolve when PMTUD kicks in, or explicit ICMP fragmentation-needed messages. Check the current interface MTU:

    infrarunbook-admin@sw-infrarunbook-01> show interfaces ge-0/0/0 detail | match MTU
        Link-level type: Ethernet, MTU: 1500, MRU: 1514, Speed: 1000mbps

    1500 bytes on an MPLS-forwarding interface is a red flag. Test with explicit size pings to find the breakpoint:

    infrarunbook-admin@sw-infrarunbook-01> ping 10.0.0.2 size 1472 do-not-fragment count 5
    PING 10.0.0.2 (10.0.0.2): 1472 data bytes
    36 bytes from sw-infrarunbook-01: frag needed and DF set (MTU 1500)
    36 bytes from sw-infrarunbook-01: frag needed and DF set (MTU 1500)
    
    infrarunbook-admin@sw-infrarunbook-01> ping 10.0.0.2 size 1400 do-not-fragment count 5
    PING 10.0.0.2 (10.0.0.2): 1400 data bytes
    1408 bytes from 10.0.0.2: icmp_seq=0 ttl=64 time=0.8 ms
    1408 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.9 ms

    Packets fail at 1472 bytes but succeed at 1400. With MPLS label overhead in the path, the effective payload limit is being pushed below 1472 bytes. Confirm the path MTU seen from the LSP perspective:

    infrarunbook-admin@sw-infrarunbook-01> show mpls lsp detail | match "Path MTU"
          Path MTU: 1496

    1496 bytes path MTU on a 1500-byte interface confirms a label is being pushed and large frames won't traverse the LSP intact.
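    The failing 1472-byte ping is consistent with that number. Assuming the echo request follows the labeled path, the largest ping payload that fits is the path MTU minus the 20-byte IPv4 header and 8-byte ICMP header:

```python
def max_ping_payload(path_mtu: int) -> int:
    IPV4_HEADER = 20  # IPv4 without options
    ICMP_HEADER = 8
    return path_mtu - IPV4_HEADER - ICMP_HEADER

# A 1496-byte path MTU leaves room for a 1468-byte payload, so the
# 1472-byte probe is dropped while the 1400-byte one sails through.
```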

    How to Fix It

    Increase the MTU on all MPLS-facing interfaces. Go to 9000 bytes — jumbo frames eliminate this class of problem entirely and you won't have to revisit it as your label stack depth grows:

    infrarunbook-admin@sw-infrarunbook-01# set interfaces ge-0/0/0 mtu 9000
    infrarunbook-admin@sw-infrarunbook-01# commit

    This change must be consistent across every router in the MPLS domain. A single 1500-byte hop anywhere in the path still causes the same problem. Verify after committing:

    infrarunbook-admin@sw-infrarunbook-01> show interfaces ge-0/0/0 detail | match MTU
        Link-level type: Ethernet, MTU: 9000, MRU: 9014, Speed: 1000mbps

    Then retest with the previously failing packet size:

    infrarunbook-admin@sw-infrarunbook-01> ping 10.0.0.2 size 1472 do-not-fragment count 5
    PING 10.0.0.2 (10.0.0.2): 1472 data bytes
    1480 bytes from 10.0.0.2: icmp_seq=0 ttl=64 time=0.9 ms
    1480 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.8 ms

    Cause 6: Transport Address Mismatch

    Why It Happens

    By default, Junos uses the loopback address as the LDP transport address — the source IP for the TCP session. If a router is configured with an explicit transport address that doesn't match what the peer expects, or if the transport address isn't reachable via the IGP, the TCP handshake will fail even though Hellos are flowing and all interfaces look correct. This is a subtle failure mode because everything at the LDP Hello level looks healthy.

    How to Identify It

    infrarunbook-admin@sw-infrarunbook-01> show mpls ldp session detail
    Address          State       Connection   Hold time
    10.0.0.2         None        Nonexistent  0
      Peer's transport address: 172.16.0.2
      Local transport address: 10.0.0.1

    The peer is advertising 172.16.0.2 as its transport address. Check if that address is reachable:

    infrarunbook-admin@sw-infrarunbook-01> show route 172.16.0.2/32
    
    inet.0: 16 destinations, 16 routes (15 active, 0 holddown, 1 hidden)
    
    172.16.0.2/32 (1 entry, 0 announced)
            *[OSPF/150] 00:00:00, metric 0, tag 0
             Unusable

    The route exists but is marked Unusable — it's not being installed into the forwarding table. Check the LDP configuration on both sides to verify transport addresses match the reachable loopbacks:

    infrarunbook-admin@sw-infrarunbook-01> show configuration protocols ldp
    transport-address 10.0.0.1;
    interface ge-0/0/0.0;
    interface lo0.0;

    How to Fix It

    Ensure both sides configure their transport address to match their loopback IP as advertised in IGP. Either correct the configuration on the peer side or update the local router to use an address that matches what the peer can reach:

    infrarunbook-admin@peer-router# set protocols ldp transport-address 10.0.0.2
    infrarunbook-admin@peer-router# commit

    Once both transport addresses are set to their respective reachable loopbacks and those loopbacks are in the IGP, the TCP session will establish normally.


    Cause 7: Firewall Filter Blocking LDP

    Why It Happens

    LDP uses UDP port 646 for Hello discovery and TCP port 646 for session maintenance. If a firewall filter — whether on the loopback, a transit interface, or an upstream security device — is blocking these ports, LDP fails silently. The most common scenario is a security hardening effort where someone applies a restrictive protect-RE filter to lo0 without explicitly permitting LDP. Hello packets never arrive, the neighbor never appears, and there's nothing in the LDP logs to point you toward a firewall as the cause.

    How to Identify It

    Check for any firewall filters applied to the loopback or relevant interfaces:

    infrarunbook-admin@sw-infrarunbook-01> show interfaces lo0 detail | match filter
        Input Filter: protect-re

    A filter on lo0 is a classic LDP killer. Examine what that filter is accepting and counting:

    infrarunbook-admin@sw-infrarunbook-01> show firewall filter protect-re
    Filter: protect-re
      Counters:
      Name                                                Bytes              Packets
      accept-bgp                                          1842330            14234
      accept-ospf                                         98442              1243
      accept-ssh                                          34211              421
      discard-all                                         61440              768

    No LDP counter, and discard-all is catching 768 packets. Those are almost certainly your LDP UDP hellos and TCP connection attempts. Enable tracing to confirm:

    infrarunbook-admin@sw-infrarunbook-01# set protocols ldp traceoptions file ldp-trace
    infrarunbook-admin@sw-infrarunbook-01# set protocols ldp traceoptions flag packets detail
    infrarunbook-admin@sw-infrarunbook-01# commit
    
    infrarunbook-admin@sw-infrarunbook-01> monitor start ldp-trace
    Apr 19 10:03:11 LDP sent PDU on ge-0/0/0.0 to 224.0.0.2
    Apr 19 10:03:16 LDP sent PDU on ge-0/0/0.0 to 224.0.0.2
    [no received PDUs logged]

    Sending but never receiving — the firewall is dropping inbound LDP traffic before it reaches the LDP process.

    How to Fix It

    Add a term to the filter permitting LDP traffic before the discard-all term:

    infrarunbook-admin@sw-infrarunbook-01# edit firewall filter protect-re
    infrarunbook-admin@sw-infrarunbook-01# set term accept-ldp from protocol [ tcp udp ]
    infrarunbook-admin@sw-infrarunbook-01# set term accept-ldp from destination-port 646
    infrarunbook-admin@sw-infrarunbook-01# set term accept-ldp then accept
    infrarunbook-admin@sw-infrarunbook-01# insert term accept-ldp before term discard-all
    infrarunbook-admin@sw-infrarunbook-01# commit

    After committing, the accept-ldp counter should start incrementing immediately and the LDP session should come up within a few hello intervals. Don't forget to disable the traceoptions when you're done diagnosing.


    Prevention

    Most LDP session failures are preventable with a few consistent operational habits. The big one: validate LDP state immediately after any interface configuration change, any IGP modification, or any security policy update. These three categories of change account for the majority of LDP breakage in production environments, and a 30-second validation at commit time is vastly cheaper than a 3 AM incident call.

    Build a commissioning checklist for new MPLS links that requires explicit sign-off on show mpls ldp interface, show mpls ldp session, and show route protocol ospf for peer loopbacks. If an interface shows in MPLS but not in LDP, the configuration isn't done. If the peer loopback isn't in the routing table via IGP, don't expect LDP to work. Make these checks mandatory before closing any MPLS provisioning ticket.

    For MTU, the simplest prevention is a blanket policy: all MPLS-facing interfaces run at 9000-byte MTU. Don't try to calculate minimum label overhead for each deployment — just configure jumbo everywhere in the MPLS domain and remove the variable entirely. Document it in your interface standards so it applies automatically to all future provisioning without anyone having to think about it.

    On authentication, if you're using LDP MD5 keys, manage them through automation rather than manual configuration. A secrets management system that pushes consistent keys to both peers simultaneously eliminates the mismatch scenario. If automation isn't available, build a two-person validation step into your change process — one engineer configures, another verifies both sides match before closing the change.

    Monitoring deserves a specific mention. Alert directly on RPD_LDP_NBRDOWN syslog events — don't rely solely on interface-up/down alerts to surface LDP failures. A session can go down without any interface state change, particularly in the authentication and IGP-route scenarios covered above, and you want to know about it in seconds rather than when service desk tickets start arriving.
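    The alert itself can hang off a single regex against the syslog stream. A sketch matched against the message format shown earlier (the group names are my own):

```python
import re

NBRDOWN = re.compile(
    r"RPD_LDP_NBRDOWN: LDP neighbor (?P<neighbor>\S+) "
    r"\((?P<family>\w+)\) is down, reason: (?P<reason>.+)"
)

def parse_nbrdown(line: str):
    """Return {'neighbor', 'family', 'reason'} for an
    RPD_LDP_NBRDOWN syslog line, or None if it doesn't match."""
    m = NBRDOWN.search(line)
    return m.groupdict() if m else None
```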

    Finally, consider enabling LDP graceful restart across your core routers with set protocols ldp graceful-restart. It won't prevent sessions from failing, but it gives neighboring routers a window to retain forwarding state while the session recovers. In a network with redundant paths, graceful restart dramatically reduces the traffic impact of transient LDP disruptions — turning what would be a full LSP teardown and reconvergence into a brief hold while the session re-establishes.

    Frequently Asked Questions

    How do I check if LDP is enabled on a specific interface in Junos?

    Run 'show mpls ldp interface' to list all interfaces actively participating in LDP. If your interface is missing from that output, run 'show configuration protocols ldp' to verify it's listed under the LDP configuration stanza. You also need 'family mpls' configured on the logical unit under 'interfaces ge-x/x/x unit 0'.

    Why is my LDP session stuck in OpenRec state and never reaching Operational?

    OpenRec typically means LDP Hellos are working (neighbor discovery is fine) but the TCP session can't establish. The most common causes are a missing IGP route to the peer's loopback address, an authentication mismatch, or a transport address that's not reachable. Run 'show route <peer-loopback>/32' to verify the route exists and is active, then check if authentication is configured on one side but not the other.

    Will LDP session failures always cause traffic loss?

    Not always immediately — if LDP graceful restart is configured, neighboring routers will maintain their forwarding state for a hold period while waiting for the session to recover. Without graceful restart, any LSPs that relied on the failed LDP session will tear down immediately, and traffic using those label-switched paths will drop until the session re-establishes and labels are redistributed.

    What MTU should I configure on Juniper interfaces for MPLS?

    For a single-label stack you need a minimum of 1504 bytes (1500 + 4 bytes per label). For MPLS VPNs with a two-label stack, you need at least 1508 bytes. In practice, configure 9000 bytes (jumbo frames) on all MPLS-facing interfaces to eliminate MTU as a variable entirely. This avoids recalculating requirements as your label stack depth changes in the future.

    How do I diagnose an LDP authentication mismatch on Juniper?

    Run 'show ldp statistics' and look for a high number of failed TCP connections with zero established connections. Then use 'monitor traffic interface ge-x/x/x matching "tcp and port 646"' to capture packets — you'll see TCP SYN from the peer followed immediately by RST from your router, which is the kernel rejecting the connection due to authentication failure. Check 'show configuration protocols ldp' on both sides to compare authentication-key presence.
