Computer History

Computers are big, powerful calculators which use switches (just like the light switch on the wall) to count. Switches have only "off" and "on" positions, so "off" = 0 and "on" = 1. Using nothing but 0's and 1's (this is called "binary" counting), computers perform complex calculations. Please read and understand the section on binary counting, and make sure that you know how to count using binary numbers!
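
As a quick illustration of binary counting, here is a minimal sketch in Python (the language is chosen only for demonstration), showing the same numbers written in decimal and in binary:

    # Print a few numbers in both decimal and binary form.
    for n in [0, 1, 2, 3, 10, 255]:
        print(f"decimal {n:3d} = binary {n:08b}")

    # Eight switches (bits) give 2**8 = 256 combinations:
    # the whole numbers 0 through 255.
    print(2 ** 8, "values fit in eight bits")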

 

The First Mechanical Computer

The first mechanical (not electronic) computer was invented in 1822 by Charles Babbage. Later, he would design a programmable version which used punched cards. The first computer programmer was Ada Lovelace, who wrote programs for Babbage's machine. Babbage's machines were never fully completed during his lifetime.

Other mechanical computers were built, sold, and used in the last few decades of the 19th century. For example, Herman Hollerith's tabulating machine was built to help the U.S. government count and record the population of the United States. The 1880 census, performed without a computer, took eight years. With Hollerith's machine, the 1890 census took only one year to complete.

Hollerith's company, the Tabulating Machine Company, grew over the years and, after merging with several other firms, was renamed International Business Machines (IBM) in 1924.

 

The Electronic Computer

In the 1940's, engineers started using vacuum tubes (also known as "radio tubes," because they had been used in radios for many years) to add an electronic component to computers. At first, the electronic components were combined with mechanical parts. But in 1942, the "ABC" (Atanasoff-Berry Computer) was made using only electronic parts. However, it could not be programmed, so it was not flexible enough for most uses, and it did not become very well-known.

The first fully electronic and programmable computers--without any mechanical parts or components doing the calculating--were built by the British during World War II, in 1943 and 1944. They were code-named "Colossus," and were used for breaking Nazi codes which otherwise could not be broken. Colossus did not become famous until much later, however, because it was kept secret until the 1970's.

As a result, most people believe that ENIAC (1946) was the first electronic computer. While ENIAC was the first "general purpose" programmable computer (meaning it could be used for a variety of problems), it was not the first fully electronic computer.

Vacuum tubes represent the first generation of electronic computer technology. (There are four generations in all: (1) vacuum tubes, (2) transistors, (3) IC chips, and (4) microprocessors.) The disadvantages of vacuum tubes were that they created a great deal of waste heat and even light (they often looked like light bulbs), and they burned out after a while and had to be replaced. They were also very big; each vacuum tube stored a single "0" or "1," meaning that in order to store one eight-bit number (any value from 0 to 255), you would need eight of these big, heavy, hot tubes.

These first-generation computers were giant: they often filled an entire building, even though they were less powerful than the weakest micro-computers made today. One regular maintenance job was to walk through the inside of the computer and replace burnt-out vacuum tubes.

 

The Transistor

The first computer which used new transistor technology as the 0-1 switches (transistors were smaller than vacuum tubes, but still much larger than the components we have today) was built in 1953 at Manchester University. These computers could be much smaller and more powerful than the older vacuum tube computers. Instead of taking up an entire building, a computer would only fill a single room.

Transistors often look like little tripods or miniature water towers. Their advantages were that they were smaller, required less energy, and produced less heat. Also, because they were solid-state ("solid state" means that an electronic device has no moving parts, and so it is much more reliable), they did not break down as much.

The transistor made not only computers but also many other electronic devices smaller--including the radio. The Japanese company Sony became successful by producing some of the world's first popular transistor radios, prized for their small size and portability.

Transistors represent the second generation of electronic computer technology. Despite being smaller and cooler than vacuum tubes, transistors were still too large to allow millions of bits to be worked with. They still required very heavy machinery, which could not be used on spacecraft--and spacecraft increasingly needed smaller and faster computers.

 

The Integrated Circuit (IC) Chip

In 1958, engineers created the first very small IC chips, which had micro-transistors on a small semiconducting chip. (IC chips marked an incredible improvement in computers, making space flight more practical and leading to the creation of PCs.) Although the first IC chips had only a few transistors, they began a new generation of technology which would eventually lead to billions of transistors on a tiny piece of silicon. Instead of having to add multiple transistors to a machine one at a time, IC chips made it possible to have many transistors within a single solid-state element. This helped make computers smaller, lighter, cheaper, and more energy-efficient.

The first IC chips had only a few transistors on a "monolithic" chip; throughout much of the 1960's, they contained no more than a few dozen transistors. In the late 1960's, chips with hundreds of transistors were made. In the early 1970's, "LSI" (large-scale integration) technology allowed thousands of transistors, and by the mid-70's, tens of thousands. The final step was "VLSI" (very-large-scale integration) technology, allowing for hundreds of thousands of transistors, leading to millions and billions later on.

The increase in the number of transistors was famously described by Moore's Law. (Gordon E. Moore is the co-founder of Intel, the company that makes most of the CPUs for personal computers today.) In 1965, Moore predicted that the number of transistors on a chip would double every one to two years. This law has mostly come true, although we are now reaching the limits of that growth as transistors become too tiny to make much smaller.
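
As a rough illustration of what that doubling means, here is a minimal sketch in Python. The 2,300-transistor starting point is the Intel 4004's actual count; the smooth two-year doubling is an assumption made only for illustration, since real chips varied:

    # Moore's Law sketch: double the transistor count every two years.
    transistors = 2300  # the Intel 4004 (1971) had about 2,300 transistors
    for year in range(1971, 1992, 2):
        print(f"{year}: about {transistors:,} transistors")
        transistors *= 2  # one doubling per two-year step

Twenty years of doubling takes the hypothetical chip from 2,300 transistors to over two million, which is roughly the scale real processors had reached by the early 1990's.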

IC chips represent the third generation of electronic computer technology. They made it possible to store and process data at very small sizes, but these chips were still too simple to do much by themselves.

 

The Microprocessor

Later, in the 1960's and early 1970's, IC chips became complex enough that a whole computer processor design could be placed on a single chip. These chips were called “microprocessors.” Previously, the computer's processor (CPU) was made of many small chips which worked together; this was not very efficient. By putting all of these together on one chip, computers could be built much more cheaply and could operate much faster. The first microprocessor, the Intel 4004, went on sale commercially in 1971.

Since then, we have seen a flood of microprocessors, each generation faster, smaller, and cheaper than the last--but they all follow the basic design of the microprocessor developed 40 years ago. The gap between the transistor and the IC chip was just five years; the gap between the IC chip and the microprocessor was 13 years. On that scale, 40 years is a long time. Microprocessors represent the fourth generation of electronic computer technology, the generation we are still in today.

It is commonly assumed that the next generation of computers will involve quantum computing. Quantum computing allows more than just a "0" or a "1" to be kept in one place; the new name for this data unit is the "qubit," and incredibly complex calculations can be made with just a small number of qubits working together. Quantum computers would be able to make millions of calculations at the same time, while current computer cores can only do one at a time. A 30-qubit computer, able to work with over a billion states at once, would be faster than any desktop PC used today. Quantum technology is still fairly distant, however, and so we'll stay in the fourth generation for a while longer.
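
To see why 30 qubits is such a large number, here is a minimal sketch in Python (for illustration only) of how the number of states a group of qubits can hold in superposition grows:

    # n qubits can be in a superposition of 2**n classical states at once,
    # while n ordinary bits hold exactly one n-bit value at a time.
    for n in [1, 2, 10, 30]:
        print(f"{n:2d} qubits -> 2**{n:<2d} = {2**n:,} simultaneous states")

    # 30 qubits -> 1,073,741,824 states: the "over a billion" figure above.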

 

First Personal Computer

In the mid-1970's, there was a boom in computer production. Many different computers were called the “first” personal computer. (A "personal computer," or "PC," is important because it allows any single person to own a computer. Before the PC, only governments, universities, or companies owned computers.) Although various kits had been available since the early 1970's, the first complete, home-affordable personal computers hit the marketplace in 1977, with what is called the “1977 Trinity” of personal computers: the Apple II, the Commodore PET, and the Tandy TRS-80. All three came with a monitor, keyboard, and data storage.

The most important points about these computers were price and usability. Selling for under $3000, computers became affordable for the common person. However, price means nothing if the machine is too complex to use. The new PCs were not easy to learn, but they were the first computers that an average person could operate.

One of the first affordable computers to be sold was the Altair 8800. This computer was sold in 1975, but it would be hard to call it a "personal computer," in that only experts could really operate one. There was no keyboard or monitor; data was entered through a row of physical switches, and results were returned by blinking red lights. One had to be able to communicate in binary code to use the machine--meaning the average person would probably have to spend weeks or even months to learn more than the most basic operations.

 

So, when real personal computers came out, they were successful--not just because of price, but because most people could hope to use them.