NTP Server’s Delta Time

This is a guest blogpost by Jasper Bongertz. His own blog is at blog.packet-foo.com.


Running your own NTP server(s) is usually a good idea. Even better if you know that they’re working correctly and serve their answers efficiently and without a significant delay, even under load. This is how you can use Wireshark to analyze the NTP delta time for NTP servers:

This article is one of many blogposts within this NTP series. Please have a look!

Update your Wireshark, please …

Looking at NTP server request/response performance used to be a little problematic before Wireshark version 3.0, because that’s the version that added a field called “ntp.delta_time”. The problem with delta time calculations is that Wireshark has to do them for request/response packet pairs; it’s not something you can do yourself with a filter. That means a developer needs to track and match response packets to their requests, calculate the delta time, and put it into a meta field in the response packet decode:

The good thing about those meta fields is that you can use them just like any other field in the decode: you can search for them, filter on them, and graph them in the I/O Graph.
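For example, this filter shows all responses that took longer than ten milliseconds (the threshold is just an arbitrary example):

    ntp.delta_time > 0.01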

By the way, you can look up all fields Wireshark knows about (including the version numbers needed to see them) at https://www.wireshark.org/docs/dfref/.

Setting up your analysis environment

So, after filtering away everything else, you can take a look at the NTP delta times, e.g. by adding a custom column to Wireshark.
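Such a column is simply a custom column on the ntp.delta_time field; in the column preferences it looks roughly like this (the title is up to you):

    Title:   NTP delta
    Type:    Custom
    Fields:  ntp.delta_time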

You might notice that I forced Wireshark to replace the NTP server IPv6 address with a much shorter name. I did that by putting a hosts file into the “profiles” directory used by Wireshark, which can easily be found via the “Folders” tab of the “About Wireshark” dialog:

The file itself is just a normal hosts file, like the one used by the operating system.
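With the servers from the testbed below, the entries would look something like this (the names are up to you, of course; shorter labels work just as well):

    2003:de:2016:330::6b5:123              ntp2.weberlab.de
    2003:de:2016:330::dcfb:123             ntp3.weberlab.de
    2003:de:2016:333:1130:d52a:ece2:33fe   ntp4.weberlab.de
    2003:de:2016:333:221:9bff:fefc:8fe1    ntp5.weberlab.de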

The advantage of putting it into a Wireshark profile directory is that you don’t have to touch your operating system’s hosts file (and thus its name resolution behavior), and that you can keep different hosts files per analysis task simply by putting them into different profiles. You might need to enable network name resolution via “View” -> “Name Resolution” -> “Enable Network Name Resolution”. To make that permanent you can also configure the setting in the Wireshark preferences dialog.
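If you prefer to edit the profile’s “preferences” file directly instead, the relevant entry should look like this in recent Wireshark versions (if the key name differs in your version, the GUI checkbox is the safer route):

    # Resolve network (IP) addresses to names
    nameres.network_name: TRUE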

NTP server performance

One of the things you may want to know about your NTP servers is their performance. So if you capture all their packets you can use the new ntp.delta_time field to easily read the delay for each request. But first, you have to make sure that you only look at requests that were answered by your server – because it may act as a client itself sometimes, which would then muddy the waters.

The testbed

Johannes provided me capture files taken at four of his NTP servers:

  • ntp2.weberlab.de, 2003:de:2016:330::6b5:123, Raspberry Pi 1 B+ w/ GPS Receiver
  • ntp3.weberlab.de, 2003:de:2016:330::dcfb:123, Meinberg LANTIME M200 Appliance with a DCF77 Receiver
  • ntp4.weberlab.de, 2003:de:2016:333:1130:d52a:ece2:33fe, Raspberry Pi 3 B
  • ntp5.weberlab.de, 2003:de:2016:333:221:9bff:fefc:8fe1, a Dell PowerEdge R200

All four servers were part of the NTP Pool project and therefore received thousands of requests per minute while they were listed in the pool’s round-robin DNS.

Isolating NTP server communication

The problem with NTP is that it uses UDP port 123 for both client and server, so it’s not as easy to tell which node is the server and which is the client (compared to, let’s say, HTTP, where the node responding from port 80 or 443 is the server with very high certainty). Fortunately, NTP has a field, ntp.flags.mode, that tells you what kind of message you’re looking at (client or server):

With that field and using the IP addresses of your servers, you can isolate all packets where they either receive a client packet or send a server packet. For example, if your NTP server has the IP address 2003:de:2016:330::dcfb:123 you would use a display filter like this:
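    (ipv6.dst == 2003:de:2016:330::dcfb:123 && ntp.flags.mode == 3) || (ipv6.src == 2003:de:2016:330::dcfb:123 && ntp.flags.mode == 4)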

Meaning: all packets sent to that IP address need to be client packets (mode 3), and all packets coming from that IP address need to be server packets (mode 4).

Comparing the response times

Let’s take a look at the “I/O graph” (found in the statistics menu of Wireshark) of the four servers – but I have to warn you:

If you do this with as many requests and servers as I did, chances are high that Wireshark will crash at some point. I found it best to let it graph everything before changing settings or closing the graph dialog again. It’s also a good idea to prepare the I/O Graph first without letting it draw anything (leave the checkboxes on the left unticked at first), enter all the settings you need, and then close Wireshark so it stores the setup. That way a crash doesn’t mean you have to redo the configuration each time, which can be very annoying for complicated setups.
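In case you want to rebuild such a setup: each server gets its own graph line, filtered on its server-mode packets and using ntp.delta_time as the Y field, roughly like this for NTP2 (the aggregation is a matter of taste, e.g. AVG(Y Field) or MAX(Y Field); the other three servers get the same settings with their respective addresses):

    Graph Name:      NTP2 delta
    Display Filter:  ipv6.src == 2003:de:2016:330::6b5:123 && ntp.flags.mode == 4
    Y Axis:          AVG(Y Field)
    Y Field:         ntp.delta_time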

I have graphed all four server NTP deltas in separate graphs with different colors, and opted for the logarithmic scale because otherwise we would only see a couple of peaks and not much else:

I think it’s pretty obvious that NTP2 is by far the slowest server under load. Compare that to the graph showing the number of request packets each server had to respond to. This graph isn’t logarithmic, for the simple reason that I wanted to show the peaks prominently (and because the red graph drowns out everything in the lower ranges):

As you can see, the delta time peaks coincide with heavy load on the servers: when many requests arrive, all servers show higher delays in their response times.

Now let’s take a closer look at the peak of the red and blue servers (NTP2 and NTP5) right next to each other, around 2pm. Zoomed in it looks like this for the response time:

And this is the packet ratio:

So, we can deduce that the packet ratio in both bursts is quite similar, but NTP2 shows much more significant delays in its answer packets. This isn’t surprising, because putting a Raspberry Pi 1 up against the Dell PowerEdge is David versus Goliath. And from my point of view, David is still doing a pretty good job for such a small system.

Meinberg: Client Logging vs. No Client Logging

The Meinberg NTP server has a feature to log client requests, which could introduce additional stress on a busy server. To compare the delta times of the server with client logging enabled and disabled, I chose two packet peaks that were as similar as possible within the capture files I had, close to 100 packets/s. As you can see in the delta time graphs, it doesn’t really matter much whether client logging is enabled or not; the delta time increase of only around 10 ms doesn’t look significant. Under higher load this might get worse, so you may want to disable client logging if you don’t really have a use case for it.

Packet Ratio, with client logging enabled:

 

Packet Ratio, without client logging:

Client Logging enabled, Delta Time:

Client Logging not enabled, Delta Time:

NTP Server Performance Min/Max/Average

To compare the performance of the four servers, I pulled all response times out of the capture files using tshark.
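For NTP2, a call along these lines does the job (the capture file name and the CSV target are placeholders; the address is NTP2’s from the list above):

    tshark -r ntp2.pcapng -2 -Y "ipv6.src == 2003:de:2016:330::6b5:123 && ntp.flags.mode == 4" -T fields -e ntp.delta_time > ntp2-deltas.csv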

The parameters used here were:

  • -r <filename>: reads a file instead of capturing from a network card
  • -2: forces a 2-pass analysis, which is important for the delta time calculations. Without it, the field would stay empty (thanks to @PacketDetective for reminding me)
  • -Y <filter>: filters a file, in this case for all packets that are answers from the server
  • -T fields: asks tshark to print only specific fields
  • -e <fieldname>: specifies the fields I need, in this case the NTP delta time (ntp.delta_time)

Sending those field values into a CSV file, one per server, allowed me to use Excel to calculate Min/Max/Average values:

                 NTP2         NTP3            NTP4        NTP5
                 Raspi 1 B+   Meinberg M200   Raspi 3 B   Dell PowerEdge R200
  Min [ms]        0.827        0.146           0.351       0.199
  Max [ms]       29.432       15.127          11.452       4.422
  Average [ms]    1.461        0.566           0.790       0.425
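If you’d rather skip the Excel step, a quick awk call over one of those CSV files produces the same three numbers; this assumes one delta value per line, printed in seconds (which is how tshark writes this field), and converts them to milliseconds:

    awk '{ v = $1 * 1000; if (n == 0 || v < min) min = v; if (v > max) max = v; sum += v; n++ }
         END { printf "min=%.3f ms  max=%.3f ms  avg=%.3f ms\n", min, max, sum/n }' ntp2-deltas.csv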

The fastest server – unsurprisingly – is the Dell PowerEdge server. The slowest is the Raspberry Pi 1, being about three times slower than the Dell on average.

Photo by Markus Lompa on Unsplash.
