Mail Archives: djgpp/2013/02/14/01:35:26
I'm not sure which "math I used" you're referring to, but...
The documentation specifies that the value ticks UCLOCKS_PER_SEC times
per second, and also says that it cannot be used across two midnights,
which limits you to a maximum of just under 48 hours (from just after
one midnight to just before the second midnight after that), and that
it should not be used for periods of 24 hours or more (no period
shorter than 24 hours can contain two midnights). This has nothing to
do with overflow and everything to do with the system's inability to
"count" midnights (internally it's just a flag, not a counter).
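For concreteness, here's a minimal sketch of the intended usage,
assuming DJGPP's <time.h> (which declares uclock(), uclock_t and
UCLOCKS_PER_SEC); keep the measured interval under 24 hours:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        uclock_t start = uclock();

        /* ... the work being timed, well under 24 hours ... */

        uclock_t elapsed = uclock() - start;
        printf("elapsed: %.6f seconds\n",
               (double)elapsed / UCLOCKS_PER_SEC);
        return 0;
    }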
The 64-bit value will wrap after (2^63)/UCLOCKS_PER_SEC seconds, or
about 245,000 years (it goes negative at 2^63 ticks; the full
2^64-tick period is twice that).
An unsigned 32-bit value will wrap after (2^32)/UCLOCKS_PER_SEC
seconds, or about 3599.6 seconds: just under an hour.
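If you want to check those figures yourself, the arithmetic is easy
to redo in double precision (hard-coding DJGPP's UCLOCKS_PER_SEC value
of 1193180 so this compiles anywhere):

    #include <stdio.h>

    int main(void)
    {
        double ticks_per_sec = 1193180.0;  /* DJGPP's UCLOCKS_PER_SEC */
        double secs_per_year = 365.25 * 86400.0;
        double wrap64 = 9223372036854775808.0 / ticks_per_sec; /* 2^63 */
        double wrap32 = 4294967296.0 / ticks_per_sec;          /* 2^32 */

        printf("64-bit: %.0f seconds, about %.0f years\n",
               wrap64, wrap64 / secs_per_year);
        printf("32-bit: %.1f seconds, just under an hour\n", wrap32);
        return 0;
    }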
As for where UCLOCKS_PER_SEC comes from, well, that's more like "PC
hardware lore" than actual math: the PC's 8254 timer chip is fed
14.31818 MHz divided by 12, i.e. about 1.19318 MHz, which is where the
1193180 figure comes from.
The most reliable way to do math on them, other than leaving the
values as 64-bit integers, is to cast them to double-precision
floating point. Doubles have 53 bits of precision, enough for about
239 years' worth of ticks.
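As a sketch of that approach (same DJGPP <time.h> assumptions as
above): subtract in integer space first, then cast, so the small
difference fits exactly in the double's 53-bit mantissa:

    #include <time.h>

    /* Convert the interval between two uclock() readings to seconds. */
    double uclock_seconds(uclock_t from, uclock_t to)
    {
        return (double)(to - from) / UCLOCKS_PER_SEC;
    }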