How do computers keep time?

Hi there! My name is Xavier, and I recently found this: the first laptop I ever owned. I wondered if it would still work today, so I took it out of its box, plugged it into an outlet, hit the power button, and sure enough, it still works! However, there was something wrong: the machine thought the date was January 1st, 1970. The machine was effectively running 48 years behind… So that got me thinking: how do computers keep track of time? They don’t have springs inside them like regular watches. So how do they keep track of time, even when they are turned off? And why did my old laptop only lose track of time after years of being in storage, and not every time I turned it off?

Funny enough, the first successful computers, like the original Apple II, didn’t even bother with keeping track of time. If an application needed to know the time, it had to ask the user for it. Once the user had typed in the current time, the computer could keep track of it by interrupting the processor at a steady interval. Wow, that seems complicated. So let’s break it down.
The processor of your computer runs at a specific frequency, also called the clock speed. It is used to synchronize the various components inside a CPU so they all work nicely together. The clock speed is expressed in Hertz, a unit of frequency. One Hertz means 1 cycle per second. Now let’s assume that a single instruction on a processor needs 1 clock cycle to complete. Then a single-core processor with a clock speed of 1Hz can execute 1 instruction every second. That’s painfully slow if you consider that calculating 2 times 2 requires between 1 and 7 cycles, so in this case it would take the processor 1 to 7 seconds to come up with the answer. Luckily, modern processors are clocked at multiple GHz, so a 1GHz CPU goes through 1 billion cycles per second (and, with our one-instruction-per-cycle assumption, roughly 1 billion instructions per second).

Fun side fact: you cannot use clock speeds to compare different processors. A CPU with a clock speed of 2GHz isn’t necessarily twice as fast as a CPU clocked at 1GHz. It actually depends on the architecture of the processor and on how many cycles it needs to perform simple operations. It’s possible that the “faster” processor needs twice as many cycles to complete a multiplication, making it just as fast as the “slow” processor.
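
To make that concrete, here is the arithmetic behind that example with made-up cycle counts. Real CPUs are far more complicated (pipelines, caches, multiple cores), so treat this as a back-of-the-envelope sketch rather than a benchmark:

    #include <stdio.h>

    int main(void) {
        /* Made-up numbers, just to illustrate the point: */
        double fast_hz = 2e9, fast_cycles_per_mul = 2.0;  /* "fast" 2GHz CPU */
        double slow_hz = 1e9, slow_cycles_per_mul = 1.0;  /* "slow" 1GHz CPU */

        printf("fast CPU: %.0f multiplications per second\n", fast_hz / fast_cycles_per_mul);
        printf("slow CPU: %.0f multiplications per second\n", slow_hz / slow_cycles_per_mul);
        /* Both print 1000000000: the same real-world speed despite the clock gap. */
        return 0;
    }
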
But I digress, back to keeping time. Let’s say we have a processor clocked at 2GHz, which means it goes through 2 billion cycles per second. If we want to keep track of time, we can count how many cycles have passed. If we count 1 billion cycles on this particular processor, we can calculate that about half a second has passed. However, counting every clock cycle isn’t practical. If we used every cycle of the processor to run a timekeeping program, there would be no room left for other programs to run.
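
The conversion from a cycle count to elapsed time is just a division by the clock frequency. Here is a minimal sketch, assuming a fixed 2GHz clock (real CPUs change their clock speed all the time, which is one more reason this isn’t how timekeeping is actually done):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const uint64_t clock_hz = 2000000000ULL;        /* assumed fixed 2GHz clock */
        const uint64_t cycles_counted = 1000000000ULL;  /* 1 billion cycles */

        /* elapsed time = cycles / frequency */
        double seconds = (double)cycles_counted / (double)clock_hz;
        printf("%llu cycles at 2GHz = %.2f seconds\n",
               (unsigned long long)cycles_counted, seconds);  /* 0.50 seconds */
        return 0;
    }
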
To make this more efficient, computers use a hardware timer: a circuit that takes a clock signal and divides it down to a much more manageable frequency, like 100Hz, or 100 ticks per second. This slower signal can then be used to interrupt the processor so that it can run a program that keeps track of time.
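
That timekeeping program can be as simple as a counter that gets bumped on every interrupt. Here is a minimal sketch of the idea; the names are made up and a real operating system does far more, but the principle is the same:

    #include <stdint.h>
    #include <stdio.h>

    #define TICK_HZ 100  /* the timer interrupt fires 100 times per second */

    static uint64_t ticks;  /* incremented on every timer interrupt */

    /* Imagine the timer hardware calling this 100 times per second. */
    static void timer_interrupt_handler(void) {
        ticks++;
    }

    /* Time since boot, with 10ms granularity. */
    static double uptime_seconds(void) {
        return (double)ticks / TICK_HZ;
    }

    int main(void) {
        /* Simulate 5 seconds' worth of interrupts. */
        for (int i = 0; i < 5 * TICK_HZ; i++)
            timer_interrupt_handler();

        printf("uptime: %.2f seconds\n", uptime_seconds());  /* prints 5.00 */
        return 0;
    }
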
Great, it seems like this solves the problem! Except that with this setup there are new problems. First: the accuracy of our clock depends on the accuracy of the oscillator, and its precision is limited by how often the interrupt fires. In this example, the interrupt fires 100 times per second, or every 10 milliseconds, so our clock can never be more precise than 10ms. And secondly, we haven’t solved the problem of power loss. If we turn the computer off, the processor is turned off, and then it can’t keep track of time anymore.
To solve that last issue, computers have an RTC, or Real-Time Clock. This is a tiny chip that keeps track of time by putting a tiny amount of electricity through a quartz crystal. This makes it vibrate at a specific frequency (32,768Hz), and when you count these vibrations you can keep track of time. Quartz is commonly used in watches because it’s cheap, uses very little power and is relatively accurate. Just like with watches, the RTC is powered by a tiny battery, which keeps it running even when your computer is turned off.
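
The number 32,768 isn’t arbitrary, by the way: it’s 2 to the power of 15, which makes it easy for a simple binary counter to produce exactly one pulse per second. Here is a rough software simulation of that counting idea (a real RTC does this in hardware, so this is only a sketch):

    #include <stdint.h>
    #include <stdio.h>

    #define CRYSTAL_HZ 32768u  /* = 2^15, the standard watch-crystal frequency */

    int main(void) {
        uint16_t oscillations = 0;  /* small hardware-style counter */
        uint32_t seconds = 0;

        /* Simulate one minute of crystal vibrations. */
        for (uint32_t i = 0; i < 60u * CRYSTAL_HZ; i++) {
            oscillations++;
            if (oscillations == CRYSTAL_HZ) {  /* 32,768 vibrations = 1 second */
                oscillations = 0;
                seconds++;
            }
        }
        printf("counted %u seconds\n", seconds);  /* prints 60 */
        return 0;
    }
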
So the reason why my old laptop lost track of time was simply that the battery on the RTC had died, something that is to be expected after years of being in storage. When the system booted up, it couldn’t read the time from the RTC and fell back to its default value, which in the case of my old PowerBook was 1970.

The RTC in modern devices is pretty small; in fact, it would be hard to spot in a laptop or phone. But in the past, they were huge. As I said before, the original Apple II could not keep track of time, but there were aftermarket RTC modules available, such as the Thunderclock. And look at how huge it was! It even had to be powered by two big batteries! But I’m getting distracted again…
Thanks to the RTC, we can turn off our computers without fearing that they won’t remember the time when we power them back on. However, an RTC isn’t 100% accurate. In fact, depending on the model, it can gain or lose up to 15 seconds every 30 days, meaning that after a while it can run ahead or behind. This also happens to many watches and home appliances, so every now and then we have to correct them. This phenomenon is called “clock drift”, and it happens because most clocks have limited precision.

Luckily, our computers can compensate for this. When your machine is connected to the internet, it can use the Network Time Protocol (NTP) to synchronize its clock with a super accurate source, such as an atomic clock. NTP is even designed to compensate for latency and slowdowns on the internet. In fact, over the public internet it is accurate to within a few tens of milliseconds, and on local networks that can be reduced to less than 1 millisecond.
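
How does NTP deal with that latency? Roughly speaking, the client notes when it sent its request (t1), the server stamps when the request arrived (t2) and when the reply left (t3), and the client notes when the reply came back (t4). From those four timestamps it can estimate both the network delay and its own clock offset. Here is a simplified sketch of that arithmetic with made-up timestamps; a real NTP client also filters many samples and adjusts the clock gradually:

    #include <stdio.h>

    int main(void) {
        /* Example timestamps in seconds (made-up numbers):
         * t1: client sends request, t2: server receives it,
         * t3: server sends reply,   t4: client receives reply.
         * Here the client's clock is about 0.200 s behind the server's. */
        double t1 = 100.000, t2 = 100.250, t3 = 100.260, t4 = 100.110;

        double delay  = (t4 - t1) - (t3 - t2);          /* time spent on the network */
        double offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* how far the client is off */

        printf("round-trip delay: %.3f s\n", delay);    /* 0.100 s */
        printf("clock offset:     %.3f s\n", offset);   /* +0.200 s, so add it */
        return 0;
    }
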
All right, so far we’ve discussed several ways a computer keeps track of time. But how do they all work together? Well, when you start your device, the operating system fetches the time from the RTC, which has been keeping track of it while your machine was turned off. From that point on, the operating system keeps track of time on its own by counting timer interrupts, and it periodically syncs its time over NTP to compensate for any clock drift. And finally, when you turn your PC off, the RTC continues to track time.
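
Put together as a toy model, that lifecycle might look something like this. The helper functions and numbers are placeholders, and a real operating system nudges its clock gradually instead of jumping it, so this is only a sketch:

    #include <stdint.h>
    #include <stdio.h>

    #define TICK_HZ 100

    /* Placeholder helpers standing in for real hardware and network access. */
    static double read_rtc(void)   { return 1700000000.0; }  /* pretend RTC reading    */
    static double ntp_offset(void) { return -0.042; }        /* pretend NTP correction */

    static double   boot_time;  /* wall-clock time read from the RTC at startup */
    static uint64_t ticks;      /* timer interrupts counted since boot */

    static double current_time(void) {
        return boot_time + (double)ticks / TICK_HZ;
    }

    int main(void) {
        boot_time = read_rtc();      /* 1. at boot: ask the RTC                */
        ticks += 3 * TICK_HZ;        /* 2. while running: count timer ticks    */
        boot_time += ntp_offset();   /* 3. every now and then: correct via NTP */
        printf("current time: %.3f seconds since 1970\n", current_time());
        return 0;
    }
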
All right, so now you know how computers keep track of time. But can they do it forever? Well, as it turns out, there are limitations. In the late 1990s, people feared that the new millennium would bring problems for many computer programs, because they stored only the last two digits of the year. The year 1999, for instance, was represented as just 99, so when the new year rolled over, many programs would think the year was 00, or in other words 1900.
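
In code, the bug boiled down to something like this toy example (a deliberately buggy program, not anything real):

    #include <stdio.h>

    int main(void) {
        int year = 99;                 /* "1999" stored with only two digits */

        year = (year + 1) % 100;       /* the new year rolls over to 00 */

        /* A buggy program that assumes the century is always 19xx: */
        printf("Happy new year %d!\n", 1900 + year);  /* prints 1900, not 2000 */
        return 0;
    }
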
This was called the Y2K or millennium bug, and if left unpatched it could have rendered many systems unusable. But it didn’t end up being a big problem, because most software companies provided fixes for it.
A more recent bug is the year 2038 problem, which affects systems that represent time as the number of seconds that have passed since January 1st, 1970 (so-called Unix time) and store that count in a signed 32-bit integer, as many Linux systems do. Here is an example of such a counter: the first bit is used to tell whether the number is positive or negative, and here the zero means that it’s positive. However, on the 19th of January 2038 the counter will run out of space and flip this zero into a one, turning the counter into a negative number. A system affected by this bug will then think the year is 1901.
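
You can see the rollover for yourself with a small program. It assumes your own system’s time_t is wider than 32 bits (true on modern 64-bit machines) and that its C library can represent dates before 1970, so that gmtime can still interpret the wrapped value:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static void print_utc(const char *label, time_t t) {
        struct tm *utc = gmtime(&t);              /* may be NULL if out of range */
        if (utc != NULL)
            printf("%s%s", label, asctime(utc));  /* asctime ends with a newline */
        else
            printf("%snot representable here\n", label);
    }

    int main(void) {
        /* The largest value a signed 32-bit counter can hold: 2,147,483,647. */
        print_utc("last 32-bit moment: ", (time_t)INT32_MAX);
        /* -> Tue Jan 19 03:14:07 2038 */

        /* One second later the 32-bit pattern wraps to the most negative
         * value, which lands back in 1901. */
        print_utc("one second later:   ", (time_t)INT32_MIN);
        /* -> Fri Dec 13 20:45:52 1901 */
        return 0;
    }
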
Luckily though, as consumers we don’t have to worry, because we’re running 64-bit systems, which can keep track of time pretty much forever. It is, however, a more serious problem for embedded devices and old legacy systems that can’t easily be upgraded. But we still have some time to come up with solutions!

Time to wrap up then! As you can see, keeping track of time is not as easy as you might think. And now you know how a computer does it! I hope you found this video interesting, and if you did, hit that thumbs up button and subscribe. Thank you so much for watching and I’ll see you in the next video.

32 thoughts to “How do computers keep time?”

  1. I really like this channel. It has a lot of interesting information that isn't complicated to understand.

  2. I have been working with computers my whole life and did not expect to learn something new from this video. However, the way you explained these concepts, such as the limitations of a 32-bit system for keeping time, really helped me understand this for the first time. You have a real talent for explaining complicated concepts in a way that is easy to digest. Thanks for making such helpful content!

  3. Great video once again 🙂
    I envy you your ability to explain things so well and understandably.

    I have two small corrections though: Jan 1st, 1970 is not just your mac's default, it is actually the start date for Unix time.
    And, secondly, the 2038 problem is not just limited to Linux but rather affects all 32-bit Unix-based systems, Windows versions and applications.

  4. iBook G4s have a capacitor to power the RTC. Thought the PowerBook was doing the same!
    So when you took off the battery for a certain time, you had to set time (and WiFi/Airport…) again :/

  5. Hi @Savjee, Really excellent video once again. I have two queries:

    1. As I understand it, the RTC is a counter which keeps track of the number of oscillations of the quartz crystal. You've mentioned that when the computer boots up, the OS will read the time from the RTC. How can the OS determine the exact time without the offset?

    Let's say I shutdown my system at 5 PM and switched it back on at 6 PM, meanwhile let's say RTC's counter has incremented 555 times. In this example, 555 is the duration. OS can only know its 6 PM if it had stored the offset 5 PM. [Let me know if I'm not clear]

    2. How does OS account for network delays when syncing with atomic clocks using NTP ?

  6. Why the heck did they make the time value signed on those Linux devices? It would have doubled the possible lifetime!

  7. Hei Simply explained! Looove the format of your channel, I have some suggestions of themes for that format :
    – API : simply explained
    – GDPR : simply explained

  8. Very important correction: a 1GHz processor doesn't necessarily mean 1 billion instructions per second, but rather 1 billion cycles per second. The cycle is literally an electrical wave (imagine it as a "sin" graph). One instruction, however, can be measured in a number of cycles, and that depends on the machine architecture. So for example if a machine architecture decided that the "addition" instruction needs 2 cycles, and that machine has a processor of 1GHz:
    1GHz -> 1 billion cycles per second -> 500 million addition instructions per second. A processor usually has many more instructions, each of which requires a different number of cycles. So measuring a processor's speed in number of instructions per second is hard unless every instruction costs exactly 1 cycle, which is not realistic.
    Anyway, I love your channel. You rock!
