CS173: Intro to Computer Science - Epoch Time Overflow Calculator (3 Points)
Developed by Professor Tralie and Professor Mongan.

Exercise Goals
The goals of this exercise are:

- To perform arithmetic using the Java programming language
- To use Math.floor(x) to round x down to the nearest whole number
Enter your Ursinus netid before clicking run. This is not your ID number or your email. For example, my netid is wmongan. (Non-Ursinus students can simply enter their name to get this to run, but they won't get an e-mail record or any form of credit.)
There was a lot of scare and hype leading up to the year 2000 because of the so-called “Y2K Bug”. Early computers had severely limited storage space, so programmers used only 2 digits to store the year, and it was assumed that the full year would be 19XX, where XX were the two digits. But as the year 2000 approached, people realized that most software with this assumption would suddenly assume it was the year 1900, which is a particularly big problem, for example, in banking software that computes interest rates.
Due to the diligent work of countless programmers updating legacy software, this never materialized into a major crisis. However, there is an even bigger, more subtle problem lurking on the horizon. Another common scheme for storing time is so-called “Unix Time” or “Epoch Time.” In this scheme, every timestamp is represented as an integer: the number of seconds that have elapsed since midnight on January 1, 1970. For example, the time and date of 11:59PM on Friday 2/21/2020 is 1582329540. As you may recall, there are limits to the possible values that can be stored by different variable types. In particular, if one uses a 32-bit int type in Java to store Unix time, this will lead to overflow within the next 20 years, and the time will suddenly jump negative, back to the early 1900s. Thus, the problems will be very similar to the Y2K bug, though possibly more far-reaching (every embedded device in cars, planes, buildings, etc. is at risk). I expect we will see an FDR-esque public works project to update infrastructure in the years leading up to this problem.
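The sudden jump to a negative time can be seen directly in Java, where adding 1 to the largest int silently wraps around to the smallest (a minimal sketch; the class name is illustrative):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int maxSeconds = Integer.MAX_VALUE; // 2147483647, the largest 32-bit signed value
        int wrapped = maxSeconds + 1;       // overflow: silently wraps to the minimum value
        System.out.println(wrapped);        // prints -2147483648
    }
}
```

A 32-bit Unix clock that overflows this way would read -2147483648 seconds, i.e. a date over 68 years *before* 1970.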
The largest signed 32-bit integer that can be represented is Math.pow(2, 31) - 1 = 2147483647. This is because the sign requires one bit to encode, leaving 31 bits to represent the value. The Unix operating system stores the current date and time by storing the number of seconds that have elapsed since a particular date, called the epoch. This “epoch” date is January 1, 1970. The Unix time data type has historically been a 32-bit signed integer (more recently, it is being expanded to a 64-bit value, for reasons that will become clear shortly!). In this exercise [1], you will compute the number of years represented by the maximum number of seconds that a 32-bit signed integer can store. After adding this to the epoch start year of 1970, you will print out this value as the year in which a 32-bit Epoch counter will “overflow” and start counting upwards from the minimum 32-bit signed integer.
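As a sketch of the arithmetic involved (the class body and the choice of 365.25 days per year are assumptions; your solution may use a different seconds-per-year constant):

```java
public class EpochTimeOverflow {
    public static void main(String[] args) {
        double maxSeconds = Math.pow(2, 31) - 1;       // 2147483647 seconds
        double secondsPerYear = 365.25 * 24 * 60 * 60; // 31557600 seconds, averaging in leap years
        // Math.floor rounds the fractional year count down to a whole number of years
        int years = (int) Math.floor(maxSeconds / secondsPerYear); // 68 whole years
        System.out.println(1970 + years);              // prints 2038
    }
}
```

This is why the problem is popularly known as the “Year 2038 problem.”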
Incidentally, the smallest signed 32-bit integer is -2147483648. Notice that this value differs by 1 in magnitude from the largest possible positive value shown above; that’s because one of the Math.pow(2, 32) binary values must represent 0, so there is an odd number of values remaining on the number line!
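You can confirm this asymmetry directly in Java, where the built-in constants match the powers of two above (the class name is illustrative):

```java
public class IntRange {
    public static void main(String[] args) {
        System.out.println((long) Math.pow(2, 31) - 1); // 2147483647, same as Integer.MAX_VALUE
        System.out.println(Integer.MAX_VALUE);          // 2147483647
        System.out.println(Integer.MIN_VALUE);          // -2147483648: one more negative value than positive
    }
}
```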
NOTE: A similar problem happens with GPS devices nearly every 20 years (most recently in 2019), because they store the number of weeks elapsed since 1980 as a 10-bit unsigned integer.
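The GPS rollover period in the note above can be estimated with the same style of arithmetic (a sketch; 365.25 days per year is an assumption):

```java
public class GpsRollover {
    public static void main(String[] args) {
        int weeks = (int) Math.pow(2, 10);  // 1024 representable week values in 10 unsigned bits
        double years = weeks * 7 / 365.25;  // about 19.6 years between rollovers
        System.out.println(years);
        System.out.println(1980 + (int) Math.floor(years)); // first rollover year: 1999
    }
}
```

1024 weeks is about 19.6 years, which is why rollovers landed in 1999 and again in 2019.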
[1] Developed by Prof. Chris Tralie