What’s the difference between GetTickCount and timeGetTime?

I’ve always believed that the most frequently used multimedia API in winmm.dll was the PlaySound API. However, while recently working with the results of some static analysis tools that were run on the Windows 7 codebase, I realized that the most commonly used multimedia API (in terms of code breadth) is actually the timeGetTime API. In fact, almost all the multimedia APIs use timeGetTime, which was somewhat surprising to me at the time.

The MSDN article for timeGetTime says that timeGetTime “retrieves the system time, in milliseconds. The system time is the time elapsed since the system started.”

But that’s almost exactly what the GetTickCount API is documented to return: “the number of milliseconds that have elapsed since the system was started, up to 49.7 days.” (Obviously timeGetTime has the same 49.7-day limit, since both APIs return 32-bit counts of milliseconds.)

So why do all these multimedia APIs use timeGetTime and not GetTickCount, since the two APIs apparently return the same value?  I wasn’t sure, so I dug in a bit deeper.

The answer is that they don’t.  You can see this with a tiny program:

 #include <windows.h>
 #include <stdio.h>
 #include <tchar.h>
 #pragma comment(lib, "winmm.lib")   // timeGetTime lives in winmm.dll

 int _tmain(int argc, _TCHAR* argv[])
 {
     int i = 100;
     DWORD lastTick = 0;
     DWORD lastTime = 0;
     while (--i)
     {
         DWORD tick = GetTickCount();
         DWORD time = timeGetTime();
         // DWORD is unsigned long, so use %lu; unsigned subtraction keeps
         // the deltas correct even if a counter wraps between iterations.
         printf("Tick: %lu, Time: %lu, dTick: %3lu, dTime: %3lu\n", tick, time, tick-lastTick, time-lastTime);
         lastTick = tick;
         lastTime = time;
         Sleep(53);   // 53ms deliberately doesn't line up with the timer tick
     }
     return 0;
 }

If you run this program, you’ll notice that the difference between the timeGetTime results is MUCH more stable than the difference between the GetTickCount results (note that the program sleeps for 53ms, which usually doesn’t match the native system timer resolution):

Tick: 175650292, Time: 175650296, dTick:  46, dTime:  54
Tick: 175650355, Time: 175650351, dTick:  63, dTime:  55
Tick: 175650417, Time: 175650407, dTick:  62, dTime:  56
Tick: 175650464, Time: 175650462, dTick:  47, dTime:  55
Tick: 175650526, Time: 175650517, dTick:  62, dTime:  55
Tick: 175650573, Time: 175650573, dTick:  47, dTime:  56
Tick: 175650636, Time: 175650628, dTick:  63, dTime:  55
Tick: 175650682, Time: 175650683, dTick:  46, dTime:  55
Tick: 175650745, Time: 175650739, dTick:  63, dTime:  56
Tick: 175650792, Time: 175650794, dTick:  47, dTime:  55
Tick: 175650854, Time: 175650850, dTick:  62, dTime:  56

That’s because GetTickCount is incremented by the clock tick interval on every clock tick, so its delta values waver around the actual elapsed time (note that the deltas average out to 55ms, so on average GetTickCount returns an accurate result, just not for spot measurements), while timeGetTime’s delta is highly predictable.

It turns out that for isochronous applications (those that depend on regular, predictable timing), it is often important to be able to retrieve the current time in a fashion that doesn’t waver; that’s why those applications use timeGetTime to achieve their desired results.
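
To make that concrete, here’s a minimal sketch of a frame-pacing loop built on timeGetTime. The 40ms period and the DoFrame stand-in are purely illustrative, not taken from any real multimedia code:

 #include <windows.h>
 #include <stdio.h>
 #pragma comment(lib, "winmm.lib")

 // Hypothetical per-frame work; stands in for rendering a video frame.
 void DoFrame(int frame)
 {
     printf("frame %d at %lu ms\n", frame, timeGetTime());
 }

 int main()
 {
     const DWORD periodMs = 40;                  // illustrative 25fps frame period
     DWORD nextDeadline = timeGetTime() + periodMs;
     for (int frame = 0; frame < 100; frame++)
     {
         DoFrame(frame);
         DWORD now = timeGetTime();
         // Pace against an absolute schedule so per-iteration jitter
         // doesn't accumulate into drift over time.
         if ((LONG)(nextDeadline - now) > 0)
             Sleep(nextDeadline - now);
         nextDeadline += periodMs;
     }
     return 0;
 }

Because timeGetTime’s deltas don’t waver, the sleep calculation stays honest; do the same thing with GetTickCount and each frame can land roughly a tick interval early or late.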

Comments

  • Anonymous
    September 02, 2009
    So I'm curious... you say that GetTickCount is incremented by the clock tick; how does timeGetTime do it? (esp. since it's more accurate)

  • Anonymous
    September 02, 2009
    timeGetTime isn't necessarily more accurate, it's more regular. Ultimately it's the difference between KeGetTickCount (http://msdn.microsoft.com/en-us/library/ms801938.aspx) and KeQueryInterruptTime (http://msdn.microsoft.com/en-us/library/ms801940.aspx) - this isn't 100% accurate but it's close enough.

  • Anonymous
    September 02, 2009
    Sooo.. what happens when you hit 49.7 days?

  • Anonymous
    September 02, 2009
    Glen: The count wraps of course.

  • Anonymous
    September 02, 2009
    There is another benefit of using timeGetTime() - you can improve the accuracy by calling timeBeginPeriod() if you need to.

  • Anonymous
    September 02, 2009
    With GetTickCount() you cannot measure time deltas less than 15-16 ms, while with timeGetTime() it is possible to get close to 1 ms precision (especially when using timeBeginPeriod(1)); see the first sketch after the comments.

  • Anonymous
    September 03, 2009
    Adam: calling timeBeginPeriod increases the accuracy of GetTickCount as well. Using timeBeginPeriod is a hideously bad idea in general - we've been actively removing all of the uses of it in Windows because of the power consumption consequences associated with using it. There are better ways of ensuring that your thread runs in a timely fashion.

  • Anonymous
    September 03, 2009
    "There are better ways of ensuring that your thread runs in a timely fashion." OK, now it's me who is curious: Which ones? I need a special thread to run every 40ms, target OSes are XP, Vista, and 7. Achieving this with a timer and timeBeginPeriod is what we do currently, what do you propose instead?

  • Anonymous
    September 03, 2009
    Ooh: If you want to run every 40ms, use timeBeginPeriod(40), not timeBeginPeriod(1). timeBeginPeriod(1) is evil.

  • Anonymous
    September 03, 2009
    Btw for XP, it doesn't really matter what you use - the power consumption on XP is so bad that timeBeginPeriod doesn't make a difference. Why do you need isoch behavior?  Most of the time isoch requirements are tied to multimedia rendering, is this the case for your solution?

  • Anonymous
    September 03, 2009
    timeGetTime is much better for off-the-cuff profiling. Want to know how long a computation is taking? Call timeGetTime before and after and subtract (see the second sketch after the comments). Of course, using the performance counter is even more accurate, but slightly more difficult to use from script (you need to marshal a structure instead of a single 32-bit integer).

  • Anonymous
    September 03, 2009
    "using timeBeginPeriod is a hideously bad idea in general" But given a thread that needs to run every 4ms on XP/Vista/7, what are the alternatives?

  • Anonymous
    September 03, 2009
    Larry, out of interest could you quantify the "power consumption consequences associated with using it", are we talking halving the battery life here?  10% hit? And, other than power consumption, is there another reason to avoid timeBeginPeriod?


  • Anonymous
    September 03, 2009
    Larry, yes ours is a multimedia app (video) operating on several PAL streams (25 fps == 40ms per frame). timeBeginPeriod works like a charm on Vista and 7, however we had a lot of problems using it on XP. In a small test app we tried every single value between 1 and 40 and the only thing that worked reliably on several different computers was timeBeginPeriod(1), so that's what we currently use. At the moment the user base on Vista and 7 is not that big, but hopefully that'll change in the next 6 months or so. With that in mind I'll make the change to timeBeginPeriod(40) tomorrow :-) What about the timer of our callback function? Once upon a time you wrote about the time* APIs and recommended use of Timer Queues instead of timeSetEvent (a minimal sketch of that approach appears after the comments). Is that still the way to go on Win7 or is there also something like WASAPI in "timer driven" mode on the video side of multimedia? Thanks!

  • Anonymous
    September 03, 2009
    Ooh: I need to play around and see.  Most of the time a video rendering app also has a source of a hardware interrupt that can be used to drive the rendering engine (video interlace), they often key off of that. I'll ask our video pipeline folks to see how they handled the issue.

  • Anonymous
    September 04, 2009
    I know some people are using the timeBeginPeriod technique for improving the performance of games. For example, many games will see 3-5% improvements just from running an app in the background that calls timeBeginPeriod(1).

  • Anonymous
    September 09, 2009
    "calling timeBeginPeriod increases the accuracy of GetTickCount as well." Not on Windows 2000 or XP. On 2000, GetTickCount() is just: return tick counter * tick interval constant (in strange units) / 2^18; and it is unaffected by calling timeBeginPeriod.

  • Anonymous
    September 12, 2009

    1. The code as posted was missing its includes: #include <tchar.h>, #include <stdio.h>, and #include <windows.h>.
    2. GetTickCount() has 16ms granularity, which you can verify by changing the Sleep() to 1ms (you will see either 0 or 16 in the dTick column).

  • Anonymous
    September 24, 2009
    Coleman, if you don't care about battery life, then timeBeginPeriod(1) is just fine. Given that you have a protocol that requires 5ms accuracy, then you almost by definition don't care about battery life (because the protocol is designed in a manner that's almost certainly going to cause the batteries of all the machines implementing the protocol to be drained quickly).

  • Anonymous
    September 24, 2009
    Larry, you're right, it's not designed to be run on a battery powered machine. Thanks for the feedback. My concern is that the OS will, eventually, not allow this kind of thing. It seems to be going that way for sure.

  • Anonymous
    September 25, 2009
    Coleman: It's not the OS which is going that way, it's the industry.  The number of laptops sold is greater than the number of desktop units sold.  On laptops, battery life is paramount.

  • Anonymous
    October 23, 2009
    "...that in mind I'll make the change to timeBeginPeriod(40) tomorrow" Pointless since it runs at 15ms (in XP anyway, 1ms in Win95...98Me?), and I presume similar (15, maybe 10) in Vista+.  You can only set it lower (or remove your beingperiod w/endperiod), not higher.  It's all in the docs.