The IT Law Wiki

Y2K problem


Y2K was the first global challenge caused by information technology. Left unaddressed it would have significantly disrupted everyday life in many parts of the world.[1]

Definition

The term Y2K problem (also referred to as the "Year 2000 problem", the "Year 2000 computer bug" and the "Millennium bug") "refers to the inability of certain computer software to accurately process dates after December 31, 1999."[2]

Overview

At 12:01 a.m. on New Year's Day of the year 2000, many computer systems could either fail to run or malfunction — thereby producing inaccurate results — simply because the equipment and software were not designed to accommodate the change of date to the new millennium.[3]

"The use of two digits to represent a four-digit year, and the inherent fault of '00' being interpreted as 1900 instead of 2000, was a standard programming practice throughout the computer industry that had the potential to affect millions of information systems around the world."[4] For instance, 1968 or 1974 would be stored and processed as 68 and 74, respectively. The number 19, indicating years in the 1900s, was implied.

This worked smoothly until users began to input dates occurring after December 31, 1999. Computers ran into trouble when required to calculate a figure based on the difference between two dates, such as the interest due on a mortgage loan. Because computers continued to assume the implied prefix 19, years such as 00 or 01 were treated as 1900 or 1901. Consequently, computers could not correctly calculate the difference between a year in the 20th century and a year in the 21st century.
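The arithmetic failure described above can be sketched in a few lines. This is an illustrative model, not code from any actual legacy system; the function name and the implied-"19" convention are assumptions for demonstration only.

```python
# Model of the legacy convention: years stored as two digits, with the
# century prefix "19" hard-wired into the date arithmetic.

def elapsed_years_two_digit(start_yy: int, end_yy: int) -> int:
    """Compute elapsed years, assuming every two-digit year means 19YY."""
    return (1900 + end_yy) - (1900 + start_yy)

# Within the 20th century the shortcut is harmless:
print(elapsed_years_two_digit(68, 98))   # 30 years, as expected

# Across the rollover it breaks: a loan opened in 1998 and checked in
# 2000 (stored as "00") appears to have run for -98 years.
print(elapsed_years_two_digit(98, 0))    # -98
```

The same two saved digits that made the punched-card and storage economics work are exactly what makes the second calculation nonsensical.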

"The mechanics involved in making any one of these systems capable of correctly processing the Year 2000 date were fairly straightforward, but the scope of the work — identifying, fixing, and testing millions of systems and data exchange points in a global economy — was daunting."[5]

Reasons for the Y2K problem

There were many reasons for the two-digit date convention, some historical, some sociological, and some anecdotal. The bottom line, however, was that unless it was fixed, the Y2K problem would have manifested itself in computer errors, crashes, data corruption, and other failures.

There were many reasons why computer system designers and programmers abbreviated the date field, including the prevalent use of punched cards, the expense of storage, and the programming methodologies used in these early systems.

Punched cards

In the late 1800s, the U.S. government was facing a data processing crisis of sorts. It was taking the U.S. Census Bureau more than a decade to perform and report the results of the census, which is required by law to be taken every ten years. Something had to be done to tame the flood of census data that was overwhelming the manual tabulation methods used.

Herman Hollerith was a Census Bureau employee who developed an electromechanical tabulating system using punched tape, and later punched cards.[6] His system represented information as a series of holes in a punched card.[7] Each column of the card could represent a single fact, a yes-or-no answer, or a number. In 1884, Hollerith filed a patent on the punched tape system, and later on the punched card system. Hollerith's company (the Tabulating Machine Company), through a series of acquisitions and name changes, eventually became IBM.

The Hollerith machines used what became the ubiquitous 80-column “IBM Hollerith cards.”[8] Unfortunately, the 80-column cards imposed limitations on the amount of data that could be recorded on each card. Only 80 bytes of data (1 byte equals 1 character of data) were available. If the data could be shortened, then there would be more available space for additional information.[9]

One solution to squeezing more data onto each card was to eliminate the two century digits, so that 1915 would be entered simply as "15." If there were several dates on one card, two bytes could be saved for each date. As long as the century remained the 20th century, there would be no problem. No computer programmer or manager ever imagined that such systems would last for so many years, or even get close to the Year 2000.

Computer storage

In the beginning of computer history (although the era officially began in 1948, the relevant time period here starts in the 1960s), resources were very limited and very expensive. For instance, the cost of computer memory was $1.00 per byte on a Univac mainframe in 1971. That cost, coupled with technology limitations, meant that even the largest computers could have only a limited amount of memory. Despite this, programmers still had to create complex programs, instructions, and data structures to meet the intricate business needs of their clients and employers. In fact, programmers learned to develop remarkably sophisticated programs in as little as 4,000 bytes.

The early programmers, and the challenges and mindset that shaped them, were very different from today's programmers. They developed tricks to cleverly coax the system into processing a great deal of logic with very few resources, yet with acceptable performance. Today's programmers most often simply buy more memory, faster processors, and more disk storage to avoid performance and space problems.

In addition to the cost of memory, the cost of storing information on disk drives in the 1970s and early 1980s was also very high, with a megabyte of storage space costing as much as 10,000 times what a megabyte would cost today.

The cost of memory and storage encouraged programmers to take shortcuts as well. One of them was to truncate century dates to the same two digits as was done for punched cards. Again, most programmers in the 1970s and 1980s had no idea that the programs they were developing would still be in use a quarter of a century later.

Programming methodology

Before there were structured programming, CASE tools, fuzzy logic, and automated testing suites, programmers were a very asocial breed. With virtually no tools and little history to draw on, they took on the challenge of developing complex software in tremendously small amounts of memory and with very expensive and limited database capability. To get the job done, they had to make the computer do things that its resources were otherwise incapable of supporting. They developed shortcuts and tricks that enabled their programs to survive and operate on the smallest of machines while using all of the available computer resources. Documentation took up precious space and was anathema to most programmers.

These men and women were brilliant, but the code they left behind was virtually impossible to maintain: a beehive of complex, non-intuitive logic intended to make the computer perform beyond its intended capabilities. It was this unstructured, clever, unique spaghetti code that accounted for a great part of the difficulty of fixing and testing Year 2000 problems.

Legacy systems

"Information technology hardware and software have evolved so continuously over the past 25 years that new applications have been incorporated by modifying slightly older versions of programming and records. The result is that very few systems are completely devoid of code or records from the 1970s and 1980s, when the century turnover seemed too distant to worry about. Further, in the rigorous and non-reflective manner in which microprocessors and computers operate, even one line of code that has not been touched in decades can disrupt or shut down a system, or produce error-laden results."[10]

Embedded chips

"Billions of microprocessors produced over the past 30 years include clock chips . . . set at 'absolute time' beginning with a two-digit year-date followed by months, days, hours, seconds and sometimes milliseconds. Most of these chips have been used in watches or other applications that do not carry risks of causing a cascade of problems. Many . . . have been embedded in a wide range of equipment as convenient timing devices for industrial applications. By subtracting one absolute time from another, for instance, a typical device derives elapsed time for the flow rates in a pump, the timing circuit in a traffic light, or the interval between moving trains. If the device is operating when the year date for the end time shifts from '99 to '00, a malfunction may occur as the device reads an extremely long interval of time or time running backwards. However, if the device is not in operation when the clock turns from 99 to 00, both the start time and the end time may register the same century and the device could continue operating correctly. These vagaries in results mean that diagnostic costs for this problem are often steep and the consequences unknown. For these reasons, many businesses are dealing with embedded chips by replacing the equipment or by adopting an approach of simply fixing devices after they fail."[11]
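The elapsed-time failure in the quoted passage can be modeled in a few lines. This is a deliberately simplified sketch (it assumes a crude 365-day year and ignores months and leap years), not the logic of any real clock chip; the function name and epoch are invented for illustration.

```python
# Simplified model of a timing circuit that stores "absolute time" with a
# two-digit year and derives intervals by subtraction (19YY assumed).

def absolute_seconds(yy: int, day_of_year: int) -> int:
    """Seconds since an arbitrary epoch, treating the two-digit year as 19YY."""
    return ((1900 + yy) * 365 + day_of_year) * 86400

# Interval measured entirely before the rollover: correct.
start = absolute_seconds(99, 364)
end = absolute_seconds(99, 365)
print(end - start)          # 86400 seconds, exactly one day

# Interval spanning the rollover: "00" reads as 1900, so the end time
# precedes the start time and elapsed time appears to run backwards.
start = absolute_seconds(99, 365)
end = absolute_seconds(0, 1)
print(end - start)          # a large negative number
```

A pump controller or traffic-light timer dividing by, or scheduling on, that negative interval is the malfunction the passage describes.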

Systems affected

The Y2K problem affected two general classes of equipment. The first class comprised business systems or mainframe systems. The second class went by several common names, including embedded chips, embedded processors, and embedded control systems.

Impact on organizations

Y2K-related failures in business systems would generally cause an enterprise to lose partial or complete control of critical processes. In the private sector, loss of business systems meant that a company might have difficulty managing its finances, making or receiving payments, and tracking inventory, orders, production, or deliveries.

To be certain that a system was Y2K compliant, a programmer often had to search for the date error in millions of lines of software code, and in data sets where only two digits were available for birth years and the like. In an industrial application, technicians had to determine whether embedded chips — often the proverbial black boxes of modern machinery — were susceptible to the date error, and apply one of several possible fixes to remove or work around the problem. Remediated software and hardware had to be extensively checked to ensure that the correction introduced no new problems. The majority of the total cost of fixing Y2K lay in this search for errors and in testing the remedy applied. The actual fixes and work-arounds were relatively inexpensive.[12]
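One of the inexpensive work-arounds commonly applied was "windowing": rather than expanding every stored date to four digits, a pivot year is chosen and two-digit years are interpreted on either side of it. The sketch below illustrates the technique; the pivot value of 50 is an arbitrary example, not a universal standard, and the function name is invented.

```python
# Windowing: interpret a stored two-digit year relative to a pivot.
# Values below the pivot are read as 20YY, the rest as 19YY.

PIVOT = 50  # example choice; real systems picked pivots suited to their data

def expand_year(yy: int, pivot: int = PIVOT) -> int:
    """Expand a two-digit year to four digits using a fixed window."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(98))  # 1998
print(expand_year(1))   # 2001
```

Windowing avoided rewriting stored data, but it only shifted the ambiguity: a system with a pivot of 50 misreads any genuine 1949 or 2051 date, which is why date expansion was preferred where the cost was bearable.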

In the public sector, government organizations could have been severely hindered in performing basic functions such as paying retirement and medical benefits, maintaining military readiness, responding to state and local emergencies, controlling air traffic, collecting taxes and customs, and coordinating law enforcement efforts.

Solutions to any individual Y2K problem were usually not technically challenging. Given the sheer volume of computerized systems and the lack of proper documentation, however, the work became complex, time consuming, and expensive. The tasks involved (a) evaluating the potential impact of Y2K on each of millions of different digital systems around the world, (b) fixing or replacing each essential system, (c) testing the fixed or new systems, and (d) developing and testing contingency plans in case problems were missed or fixes did not work properly. On a global scale, the task of simultaneously correcting millions of software problems was an enormous management challenge.[13]


References

  1. Y2K: Starting the Century Right!, at ii.
  2. Peerless Wall & Window Coverings, Inc. v. Synchronics, Inc., 85 F.Supp.2d 519, 522 n.1 (W.D. Pa. 2000).
  3. High-Risk Series: Information Management and Technology, at 37-38.
  4. The Journey to Y2K: Final Report of the President's Council on Year 2000 Conversion, at 2.
  5. The Journey to Y2K: Final Report of the President's Council on Year 2000 Conversion, at 2.
  6. For additional information on Hollerith, see Geoffrey Austrian, Herman Hollerith (Columbia Univ. Press 1982).
  7. The concept of punched cards did not originate with Hollerith, but rather with a French inventor, Jacquard, who invented punched cards to control the selection of strands on a loom in 1804. See generally Punched Cards (Robert S. Casey & James W. Perry eds. 1951).
  8. To avoid litigation, Univac Corporation developed a 90-column punched card, which had the same limitations as the 80-column card. Later IBM introduced a 96-column card for its System/3 minicomputer line.
  9. If data for a particular record spanned more than one card, the developer/programmer would have to sequentially number the cards so that they could be sorted correctly and organized properly if the cards were accidentally dropped. This meant that even more data was used up for control purposes, as opposed to unique data associated with a particular record.
  10. The Economics of Y2K and the Impact on the United States, at 1.
  11. Id.
  12. The Economics of Y2K and the Impact on the United States, at 1.
  13. The infoDev Y2K Initiative: Scope, Impact, and Lessons Learned, at 5.
