How to explain low RTT between extremely long (10s) ping intervals


I am using the ping utility to troubleshoot what seems to be a slow Internet connection on my home network, but I'm finding the results unusual and difficult to interpret.

I have a Linksys wireless router, and have tried the following tests simultaneously: pinging the router from my computer, pinging Google from the router, and pinging Google from my computer via the router. Pinging the router from the computer and pinging Google from the router both work as expected, with minimal packet loss and low round-trip times (min/avg/max = 1.601/3.465/9.926 ms and 20/20/70 ms, respectively).

However, pinging Google from my computer via the router produces output that seems very strange to me. It reports a low RTT and minimal packet loss, but the interval between ping requests, which should be the default 1 s, is more like 10 s: there is roughly a 10-second delay between each line of output that ping prints, yet the reported RTT is low, e.g.:

64 bytes from icmp_seq=31 ttl=52 time=29.2 ms

When I run this side-by-side with the other tests, the other tests will have sent about 100 requests by the time this test has sent 10. This seems to contradict the low RTT reported, so I'm not sure how to make sense of it.

I'd appreciate any insight anyone can offer.
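To quantify the gap between replies, iputils ping's `-D` flag prefixes each line with a Unix timestamp. A short script can then compute the inter-reply intervals; this is a sketch, with the timestamp values invented for illustration (real output would come from `ping -D google.com`):

```python
import re

# Sample `ping -D` output. The timestamps are invented for illustration:
# each RTT (time=) is low, but each line arrives ~10 s after the previous one.
output = """\
[1700000000.123] 64 bytes from 142.250.65.78: icmp_seq=1 ttl=52 time=29.2 ms
[1700000010.101] 64 bytes from 142.250.65.78: icmp_seq=2 ttl=52 time=28.7 ms
[1700000020.094] 64 bytes from 142.250.65.78: icmp_seq=3 ttl=52 time=30.1 ms
"""

# Extract the leading [timestamp] from each line and diff consecutive values.
stamps = [float(m.group(1)) for m in re.finditer(r"\[(\d+\.\d+)\]", output)]
gaps = [b - a for a, b in zip(stamps, stamps[1:])]
print([round(g, 2) for g in gaps])  # inter-reply gaps in seconds
```

With gaps near 10 s but `time=` values near 30 ms, the delay is clearly happening somewhere other than the echo round trip itself.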

Best Answer

  • It seems that ping does a name resolution before each ICMP request is sent.

    Maybe it is implemented that way, so that a long-running ping will continue to ping the correct host, even when the host<->IP mapping changes during runtime.

    If there is a delay in name resolution (for example, because the RR has a very low TTL and your caching DNS server does not enforce a minimum TTL), then you will likely see long delays between each ICMP request, but each individual response will still show a low RTT.

    Long story short, as suggested elsewhere, try ping -n.

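A back-of-the-envelope model shows why these two observations are consistent, assuming (as suggested above) that ping re-resolves the name before every request; the numbers below are invented for illustration:

```python
# Hypothetical per-request timeline. Values are assumptions, not measurements:
dns_delay = 9.97   # seconds stalled in a slow DNS lookup before each request
rtt       = 0.03   # seconds for the ICMP echo round trip itself
interval  = dns_delay + rtt

# The RTT clock starts only after the lookup finishes, so the reported
# time= value stays low while the gap between output lines is ~10 s.
print(f"reported RTT: {rtt * 1000:.1f} ms")
print(f"gap between output lines: {interval:.2f} s")
```

Passing `-n` skips name resolution entirely, so if the 10 s gaps disappear with `ping -n`, DNS is confirmed as the culprit.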