A Short History of Computers
In 1671, Gottfried Wilhelm von Leibniz designed a calculating machine that was finally built in 1694. It could add and, after some mechanical adjustments, multiply.
While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computing began in Cambridge, England, under Charles Babbage, a mathematics professor (after whom the computer store “Babbage’s” was named). With financial help from the British government, Babbage began building a difference engine in 1823. It was intended to be steam-powered and fully automatic, down to printing the resulting tables, and commanded by a fixed instruction program.
Babbage continued to work on it for the next ten years, but in 1833 he lost interest because he had what he thought was a better idea: the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine.
The plans for this engine called for a decimal machine operating on numbers of 50 decimal digits (words) and having a storage capacity (memory) of 1,000 such numbers. The machine was to operate automatically, by steam power, and require only one attendant.
Babbage’s computers were never finished. After Babbage, there was a temporary loss of interest in automatic digital computers.
Nevertheless, a strong need was developing for machines that could rapidly perform many repetitive calculations.
Use of Punched Cards by Hollerith
A step toward automated computing was the development of punched cards, first used successfully with computers in 1890 by Herman Hollerith and James Powers, who worked for the U.S. Census Bureau. They developed devices that could read the information punched into the cards automatically, without human help. As a result, reading errors were reduced dramatically, work flow increased, and, most importantly, stacks of punched cards could be used as an easily accessible memory of almost unlimited size. Furthermore, different problems could be stored on different stacks of cards and accessed when needed.
These advantages were recognized by commercial companies and soon led to the development of improved punched-card machines by International Business Machines (IBM), Remington Rand (yes, the same people who made shavers), Burroughs, and other corporations. These computers used electromechanical devices, in which electrical power produced mechanical motion, such as turning the wheels of an adding machine. Such systems included features to:
- feed in a specified number of cards automatically
- add, multiply, and sort
- feed out cards with punched results
Compared with today’s machines, these computers were slow, usually processing 50 to 220 cards per minute, each card holding about 80 decimal digits (characters). At the time, however, punched cards were a huge step forward.
Electronic Digital Computers
The start of World War II produced a large need for computing capacity, especially for the military. In 1942, John P. Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC (Electronic Numerical Integrator And Computer).
The size of ENIAC’s numerical “word” was 10 decimal digits, and it could multiply two such numbers at a rate of 300 products per second by looking up each result in a multiplication table stored in its memory. ENIAC was therefore about 1,000 times faster than the previous generation of relay computers.
ENIAC used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched-card I/O, one multiplier, one divider/square-rooter, and 20 adders employing decimal ring counters, which served both as adders and as quick-access (0.0002-second) read-write register storage.
The Modern Stored-Program Electronic Digital Computer
Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an abstract study of computation which showed that a computer could have a very simple, fixed physical structure and yet execute any kind of computation, by means of proper programmed control, without any change to the hardware itself.
Von Neumann contributed a new understanding of how practical yet fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential to future generations of high-speed digital computers and were universally adopted.
The stored-program technique involves many features of computer design and function besides the one it is named after. In combination, these features make very high-speed operation attainable. Consider: if each instruction in a program were executed only once, in consecutive order, no human programmer could write instructions fast enough to keep the computer busy. Instructions must therefore be reusable, so that a short loop of them can drive millions of operations.
It is also clearly helpful if instructions can be modified during a computation, making them behave differently on later passes.
The all-purpose computer memory became the assembly place in which all parts of a long computation were kept, worked on piece by piece, and put together to form the final result. The computer control unit survived only as an “errand runner” for the overall process.
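The stored-program idea described above can be sketched in a few lines of modern code. The following is a toy illustration only: the instruction set, opcodes, and memory layout are invented for the example and do not model any real machine. The key point it demonstrates is that instructions and data live in one shared memory, and a simple fixed control loop (fetch, then execute) can carry out any program placed there, with a loop reusing the same few instructions many times.

```python
# Toy stored-program machine: instructions and data share one memory,
# and a fixed fetch-execute loop runs whatever program memory holds.
# The opcodes and layout are invented purely for illustration.

def run(memory):
    """Fetch-execute loop over a shared instruction/data memory."""
    pc = 0  # program counter
    while True:
        op, a, b = memory[pc]          # fetch the next instruction
        pc += 1
        if op == "HALT":
            return memory
        elif op == "ADD":              # memory[a] += memory[b]
            memory[a] = memory[a] + memory[b]
        elif op == "JUMP_IF_POS":      # if memory[a] > 0, jump to b
            if memory[a] > 0:
                pc = b

# A program that sums 5 + 4 + 3 + 2 + 1 by reusing a three-instruction
# loop -- the instruction reuse that makes stored programs practical.
program = [
    ("ADD", 4, 5),          # 0: total += counter
    ("ADD", 5, 6),          # 1: counter += -1
    ("JUMP_IF_POS", 5, 0),  # 2: loop back while counter > 0
    ("HALT", 0, 0),         # 3: stop
    0,                      # 4: data cell -- running total
    5,                      # 5: data cell -- loop counter
    -1,                     # 6: data cell -- constant -1
]

result = run(program)
# result[4] now holds the total: 5 + 4 + 3 + 2 + 1 = 15
```

Note that nothing in `run` knows what the program does; the same control unit would execute any other program loaded into memory, which is exactly von Neumann’s point.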
The first generation of modern programmed electronic computers to take advantage of these improvements appeared in the late 1940s. This group included machines using random-access memory (RAM), a memory designed to give almost constant access time to any particular piece of information. These machines had punched-card or punched-tape I/O devices and RAMs of 1,000-word capacity with access times of 0.5 microseconds (0.5×10⁻⁶ second). Physically, they were much smaller than ENIAC. The first-generation stored-program computers required a great deal of maintenance, reached perhaps 70 to 80 percent reliability of operation, and were used for 8 to 12 years. This group included EDVAC and UNIVAC, the first commercially available computers.
Advances in the 1950s
Early in the 1950s, two important engineering discoveries changed the image of the electronic computer field from one of fast but unreliable hardware to one of relatively high reliability and even greater capability: magnetic core memory and the transistor circuit element. These discoveries quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960s, with access times of 2 to 3 microseconds. Magnetic drums, magnetic disk packs, or magnetic tapes served as peripheral storage: when the computer finished with a problem, it “dumped” the whole problem (program and results) onto one of these units and started on a new one.
Another mode for fast, powerful machines is called time-sharing. In time-sharing, the computer processes many jobs in such rapid succession that each job runs as if the other jobs did not exist, thus keeping each “customer” satisfied.
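The round-robin idea behind time-sharing can be shown with a small sketch. This is an invented illustration, not a model of any real 1960s operating system: each “job” does one step of work, then yields the processor back to a scheduler, which rotates through the jobs so quickly that each user seems to have the machine alone.

```python
# Toy time-sharing scheduler: give each job one short "time slice"
# in rotation until all jobs finish. Jobs here are Python generators,
# which pause at each `yield` -- standing in for a real job being
# interrupted at the end of its slice.
from collections import deque

def job(name, steps):
    """A job that yields control back to the scheduler after each step."""
    for i in range(steps):
        yield f"{name}: step {i + 1}"

def time_share(jobs):
    """Round-robin: run one slice of each job, then move to the next."""
    queue = deque(jobs)
    log = []
    while queue:
        current = queue.popleft()
        try:
            log.append(next(current))   # one time slice for this job
            queue.append(current)       # back of the line
        except StopIteration:
            pass                        # job finished; drop it
    return log

log = time_share([job("A", 2), job("B", 3)])
# The slices interleave: A and B alternate until each job completes.
```

Because the slices alternate faster than any one “customer” can notice, each job appears to run continuously, which is the effect the text describes.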
Advances in the 1960s
In the 1960s, efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with two machines: the LARC, built for the Livermore Radiation Laboratory of the University of California by the Sperry Rand Corporation, and the Stretch computer by IBM. The LARC had a core memory of 98,000 words and could multiply in 10 microseconds.
More Recent Advances
The trend during the 1970s was, to some extent, away from very powerful single-purpose computers and toward a larger range of applications for cheaper computer systems. Most continuous-process manufacturing operations, such as petroleum refining and electric-power distribution, came to use computers of smaller capability for controlling and regulating their processes.
In the 1960s, the difficulty of programming applications had been an obstacle to the independence of medium-sized on-site computers, but advances in applications programming languages removed that obstacle. Applications languages became available for controlling a great range of manufacturing processes, for operating machine tools by computer, and for many other tasks. Moreover, a new revolution in computer hardware was under way: the shrinking of computer logic circuitry and components through what are called large-scale integration (LSI) techniques. In the 1970s, many companies, some new to the computer field, introduced programmable minicomputers supplied with software packages.
Many companies, such as Apple Computer and Radio Shack, introduced very successful PCs in the 1970s, encouraged in part by a craze for computer (video) games. By the late 1980s, some personal computers were run by microprocessors that, handling 32 bits of data at a time, could process about 4,000,000 instructions per second.
Microprocessors equipped with read-only memory (ROM), which stores frequently used, unchanging programs, came to perform a growing number of process-control, testing, monitoring, and diagnostic functions, such as automobile ignition control, engine diagnosis, and production-line inspection.
Cray Research and Control Data Inc. dominated the field of supercomputers, the most powerful computer systems, through the 1970s and 1980s. Meanwhile, software development struggled to keep pace with these rapid advances in hardware; new programming techniques, such as object-oriented programming, were developed to help address this problem.
The computer field continues to experience enormous growth. Computer networking, computer mail, and electronic publishing are just a few of the applications that have grown in recent years. Advances in technology continue to produce cheaper and more powerful computers, promising that in the near future computers or terminals will reside in most, if not all, homes, offices, and schools.