SQL Server – Why is GETUTCDATE() earlier than SYSDATETIMEOFFSET()?

Tags: datetime, sql-server

Alternatively, how did Microsoft make time travel possible?

Consider this code:

DECLARE @Offset datetimeoffset = SYSDATETIMEOFFSET();
DECLARE @UTC datetime = GETUTCDATE();
DECLARE @UTCFromOffset datetime = CONVERT(datetime, SWITCHOFFSET(@Offset, 0));

SELECT
    Offset = @Offset,
    UTC = @UTC,
    UTCFromOffset = @UTCFromOffset,
    TimeTravelPossible = CASE WHEN @UTC < @UTCFromOffset THEN 1 ELSE 0 END;

@Offset is set before @UTC, yet it sometimes has a later value. (I've tried this on SQL Server 2008 R2 and SQL Server 2016. You have to run it a few times to catch the suspect occurrences.)

This does not appear to be simply a matter of rounding or lack of precision. (In fact, I think the rounding is what "fixes" the issue occasionally.) The values for a sample run are as follows:

  • Offset
    • 2017-06-07 12:01:58.8801139 -05:00
  • UTC
    • 2017-06-07 17:01:58.877
  • UTC From Offset:
    • 2017-06-07 17:01:58.880

So .880 is a value that datetime precision actually permits.
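The granularity can be seen directly (a minimal illustration; the literal values are arbitrary):

```sql
-- datetime stores milliseconds only in increments of .000, .003, or .007,
-- so neighboring literals all snap to .880:
SELECT CAST('2017-06-07 17:01:58.880' AS datetime) AS Stored880,     -- .880
       CAST('2017-06-07 17:01:58.881' AS datetime) AS RoundsTo880,   -- .880
       CAST('2017-06-07 17:01:58.879' AS datetime) AS AlsoRoundsTo880; -- .880
```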

Even Microsoft's GETUTCDATE examples show the SYS* values being later than the older methods, despite being SELECTed earlier:

SELECT 'SYSDATETIME()      ', SYSDATETIME();
SELECT 'SYSDATETIMEOFFSET()', SYSDATETIMEOFFSET();
SELECT 'SYSUTCDATETIME()   ', SYSUTCDATETIME();
SELECT 'CURRENT_TIMESTAMP  ', CURRENT_TIMESTAMP;
SELECT 'GETDATE()          ', GETDATE();
SELECT 'GETUTCDATE()       ', GETUTCDATE();
/* Returned:
SYSDATETIME()            2007-05-03 18:34:11.9351421
SYSDATETIMEOFFSET()      2007-05-03 18:34:11.9351421 -07:00
SYSUTCDATETIME()         2007-05-04 01:34:11.9351421
CURRENT_TIMESTAMP        2007-05-03 18:34:11.933
GETDATE()                2007-05-03 18:34:11.933
GETUTCDATE()             2007-05-04 01:34:11.933
*/

I presume this is because they come from different underlying system information. Can anyone confirm and provide details?

Microsoft's SYSDATETIMEOFFSET documentation says "SQL Server obtains the date and time values by using the GetSystemTimeAsFileTime() Windows API" (thanks srutzky), but their GETUTCDATE documentation is much less specific, saying only that the "value is derived from the operating system of the computer on which the instance of SQL Server is running".

(This isn't entirely academic. I ran into a minor issue caused by this. I was upgrading some procedures to use SYSDATETIMEOFFSET instead of GETUTCDATE, in the hope of greater precision down the road, but I started to get odd ordering because other procedures were still using GETUTCDATE and occasionally "jumping ahead" of my converted procedures in the logs.)

Best Answer

The issue is a combination of datatype granularity / accuracy and source of the values.

First, DATETIME is only accurate / granular to roughly every 3 milliseconds: its fractional seconds land on increments of .000, .003, or .007. Hence, converting from a more precise datatype such as DATETIMEOFFSET or DATETIME2 doesn't just round up or down to the nearest millisecond; the result can differ from the original by up to 2 milliseconds.
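That shift is easy to demonstrate (a sketch; the literals are arbitrary, chosen so the result is the same across SQL Server versions despite a conversion-rounding change in 2016):

```sql
-- Converting datetime2 to datetime snaps the value to the nearest
-- .000/.003/.007 step, moving it by up to 2 ms:
SELECT CONVERT(datetime, CAST('2017-06-07 17:01:58.8795' AS datetime2)) AS MovedUp,   -- .880
       CONVERT(datetime, CAST('2017-06-07 17:01:58.8771' AS datetime2)) AS MovedDown; -- .877
```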

Second, the documentation seems to imply a difference in where the values come from. The SYS* functions use the high-precision FileTime functions.

SYSDATETIMEOFFSET documentation states:

SQL Server obtains the date and time values by using the GetSystemTimeAsFileTime() Windows API.

while the GETUTCDATE documentation states:

This value is derived from the operating system of the computer on which the instance of SQL Server is running.

Then, in the About Time documentation, a chart shows the following two (of several) types:

  • System Time = "Year, month, day, hour, second, and millisecond, taken from the internal hardware clock."
  • File Time = "The number of 100-nanosecond intervals since January 1, 1601."
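For a feel of the File Time scale, the current FILETIME-style tick count can be approximated in T-SQL (a sketch assuming SQL Server 2016+ for DATEDIFF_BIG; the column alias is mine):

```sql
-- FILETIME counts 100-nanosecond intervals since 1601-01-01 UTC.
-- Counting nanoseconds directly would overflow bigint over 400+ years,
-- so count microseconds and scale by 10:
SELECT DATEDIFF_BIG(microsecond,
                    CAST('1601-01-01' AS datetime2),
                    SYSUTCDATETIME()) * 10 AS ApproxFileTimeTicks;
```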

Additional clues are in the .NET documentation for the StopWatch class (emphasis in bold italics mine):

  • Stopwatch Class

    The Stopwatch measures elapsed time by counting timer ticks in the underlying timer mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Stopwatch class uses that counter to measure elapsed time. Otherwise, the Stopwatch class uses the system timer to measure elapsed time.

  • Stopwatch.IsHighResolution Field

    The timer used by the Stopwatch class depends on the system hardware and operating system. IsHighResolution is true if the Stopwatch timer is based on a high-resolution performance counter. Otherwise, IsHighResolution is false, which indicates that the Stopwatch timer is based on the system timer.

Hence, there are different "types" of times that have both different precisions and different sources.

But, even if that logic is loose, testing both types of functions as sources for a DATETIME value bears it out. The following adaptation of the query from the question shows this behavior:

DECLARE @Offset datetimeoffset = SYSDATETIMEOFFSET();
DECLARE @UTC2 datetime = SYSUTCDATETIME();
DECLARE @UTC datetime = GETUTCDATE();
DECLARE @UTCFromOffset datetime = CONVERT(datetime, SWITCHOFFSET(@Offset, 0));

SELECT
    Offset = @Offset,
    UTC2 = @UTC2,
    UTC = @UTC,
    UTCFromOffset = @UTCFromOffset,
    TimeTravelPossible = CASE WHEN @UTC < @UTCFromOffset THEN 1 ELSE 0 END,
    TimeTravelPossible2 = CASE WHEN @UTC2 < @UTCFromOffset THEN 1 ELSE 0 END;


Offset                 2017-06-07 17:50:49.6729691 -04:00
UTC2                   2017-06-07 21:50:49.673
UTC                    2017-06-07 21:50:49.670
UTCFromOffset          2017-06-07 21:50:49.673
TimeTravelPossible     1
TimeTravelPossible2    0

As you can see in the above results, UTC and UTC2 are both DATETIME datatypes. @UTC2 is set via SYSUTCDATETIME() after @Offset (also taken from a SYS* function), but prior to @UTC, which is set via GETUTCDATE(). Yet @UTC2 holds a later value than @UTC, even though it was assigned first. The offset handling is unrelated to this behavior.

HOWEVER, to be fair, this still isn't proof in a strict sense. @MartinSmith traced the GETUTCDATE() call and found the following:

(screenshot: call stack from the trace of GETUTCDATE())

I see three interesting things in this call stack:

  1. It starts with a call to GetSystemTime(), which returns a value that is only precise down to the millisecond.
  2. It uses DateFromParts (i.e. no formula to convert, just use the individual pieces).
  3. There is a call to QueryPerformanceCounter towards the bottom. This is a high-resolution difference counter that may be used as a means of "correction". I have seen some suggestion of using one type to adjust the other over time as long intervals slowly drift out of sync (see How to get timestamp of tick precision in .NET / C#? on S.O.). This does not show up when calling SYSDATETIMEOFFSET.

There is much good info here regarding the different types of time, different sources, drift, etc: Acquiring high-resolution time stamps.

Really, it is not appropriate to compare values of different precisions. For example, if you have 2017-06-07 12:01:58.8770011 and 2017-06-07 12:01:58.877, then how do you know that the one with less precision is greater than, less than, or equal to the value with more precision? Comparing them assumes that the less precise one is actually 2017-06-07 12:01:58.8770000, but who knows if that is true or not? The real time could have been either 2017-06-07 12:01:58.8770005 or 2017-06-07 12:01:58.8770111.
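The implicit zero-padding in mixed-precision comparisons can be shown directly (a sketch using datetime2 at two precisions to keep the behavior version-independent; variable names are mine):

```sql
-- A lower-precision value compared against a higher-precision one is
-- treated as if its missing fractional digits were zero:
DECLARE @LowPrec  datetime2(3) = '2017-06-07 12:01:58.877';
DECLARE @HighPrec datetime2(7) = '2017-06-07 12:01:58.8770011';

-- @LowPrec is compared as .8770000, so it sorts earlier even though the
-- instant it was captured from might actually have been later:
SELECT CASE WHEN @LowPrec < @HighPrec THEN 'low sorts first' ELSE 'no' END;
```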

However, if you have DATETIME datatypes, then you should use the SYS* functions as the source as they are more accurate, even if you lose some precision as the value is forced into a less precise type. And along those lines, it seems to make more sense to use SYSUTCDATETIME() rather than call SYSDATETIMEOFFSET() only to adjust it via SWITCHOFFSET(@Offset, 0).
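Following that advice, an assignment that keeps the DATETIME type but takes its value from the high-precision clock looks like this (a sketch; the variable name is mine):

```sql
-- Source a datetime from the high-precision clock instead of GETUTCDATE():
DECLARE @UTC datetime = CONVERT(datetime, SYSUTCDATETIME());
SELECT @UTC AS UtcFromHighPrecisionSource;
```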