History Of Computers Essay, Research Paper
Modern computing can probably be traced back to the ‘Harvard Mk I’ and Colossus (both of 1943). Colossus was an electronic computer built in Britain at the end of 1943 and designed to crack the German Lorenz cipher. The ‘Harvard Mk I’ was a more general-purpose electro-mechanical programmable computer built at Harvard University with backing from IBM. These machines were among the first of the ‘first generation’ computers.
First generation computers were normally based around wired circuits containing vacuum valves and used punched cards as the main (non-volatile) storage medium. Another general-purpose computer of this era was ‘ENIAC’ (Electronic Numerical Integrator and Computer), completed in 1946. Typical of first generation machines, it weighed 30 tonnes, contained 18,000 electronic valves and consumed around 25 kW of electrical power. It was, however, capable of an amazing 100,000 calculations a second.
The next major step in the history of computing was the invention of the transistor in 1947. This replaced the inefficient valves with a much smaller and more reliable component. Transistorised computers are normally referred to as ‘second generation’ and dominated the late 1950s and early 1960s. Despite using transistors and printed circuits, these computers were still bulky and strictly the domain of universities and governments.
The explosion in the use of computers began with ‘third generation’ computers, which relied on Jack St. Clair Kilby’s invention – the integrated circuit, or microchip. The first integrated circuit was produced in September 1958, but computers using them didn’t begin to appear until 1963. While large ‘mainframes’ such as the IBM 360 increased storage and processing capabilities further, the integrated circuit allowed the development of minicomputers that began to bring computing into many smaller businesses.

On November 15, 1971, Intel released the world’s first microprocessor, the 4004 – the technology on which the fourth generation of computers is based. The microprocessor places much of a computer’s processing ability on a single (small) chip. Coupled with another of Intel’s inventions – the RAM chip (kilobits of memory on a single chip) – the microprocessor allowed fourth generation computers to be smaller and faster than ever before. The 4004 was capable of 60,000 instructions per second, but later processors (such as the 8086, on which all of Intel’s processors for the IBM PC and compatibles are based) brought ever-increasing speed and power. Supercomputers of the era were immensely powerful, like the Cray-1, which could perform 150 million floating point operations per second. The microprocessor also made possible the microcomputer – a personal computer small and cheap enough to be available to ordinary people. The first such machine was the MITS Altair 8800, released at the end of 1974; it was followed by computers such as the Apple I and II, the Commodore PET and eventually the original IBM PC in 1981.
Although processing power and storage capacities have increased beyond all recognition since 1972, the underlying technology of LSI (large scale integration) and VLSI (very large scale integration) microchips has remained basically the same, so it is widely held that most of today’s computers still belong to the fourth generation.
History of the Internet
The internet’s history can be traced back to ARPANET, a network started by the US Department of Defense for research into networking in 1969.
Many people wanted to put their ideas into the standards for communication between the computers that made up this network, so a system was devised for putting ideas forward. You wrote your idea up in a paper called a ‘Request for Comments’ (RFC for short) and let everyone else read it; people then commented on and improved your ideas in new RFCs. The first RFC (RFC0001) was written on April 7th, 1969, so this is probably the closest thing to a ‘start date’ for the internet. There are now well over 2000 RFCs, describing every aspect of how the internet functions.
ARPANET was opened to non-military users later in the 1970s, and early takers were the big universities – although at this stage it bore little resemblance to the internet we know today. International connections (i.e. outside America) started in 1972, but the internet was still just a way for computers to talk to each other and a vehicle for research into networking; there was no World Wide Web and no email as we now know it.
It wasn’t until the early to mid 1980s that the services we use most now started appearing on the internet. The concept of ‘domain names’ – things like ‘www.microsoft.com’ (Microsoft’s web server) – wasn’t even introduced until 1984; before that, all the computers were just addressed by their IP addresses (numbers). Most protocols for email and other services appeared after this.
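Those numeric addresses are 32-bit values conventionally written as four dot-separated bytes. As a small illustrative sketch of the relationship between the two forms (the helper names here are my own, not from any standard library):

```python
# Sketch: convert between dotted-quad notation ('127.0.0.1') and the
# underlying 32-bit integer that actually identifies a host.
# (quad_to_int / int_to_quad are illustrative names, not a real API.)

def quad_to_int(address: str) -> int:
    """'1.2.3.4' -> 32-bit integer, most significant byte first."""
    parts = [int(p) for p in address.split(".")]
    if len(parts) != 4 or any(not 0 <= p <= 255 for p in parts):
        raise ValueError("not a dotted-quad address: " + address)
    result = 0
    for p in parts:
        result = (result << 8) | p   # shift previous bytes up, append this one
    return result

def int_to_quad(value: int) -> str:
    """32-bit integer -> dotted-quad string."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

For example, `quad_to_int("127.0.0.1")` gives 2130706433; domain names were introduced precisely so that people didn’t have to remember numbers like that.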
The part of the internet most people are probably most familiar with is the World Wide Web. This is a collection of hyperlinked pages of information distributed over the internet via a network protocol called HTTP (Hypertext Transfer Protocol). It was invented by Tim Berners-Lee in 1989. He was a physicist working at CERN, the European Particle Physics Laboratory, and wanted a way for physicists to share information about their research – the World Wide Web was his solution. At first the web was text-only; graphics came later with a browser called NCSA Mosaic. Both Microsoft’s Internet Explorer and Netscape were originally based on NCSA Mosaic.
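Under HTTP, a browser’s request for a page is just a short piece of structured text sent over the connection. A minimal sketch of what that text looks like, using the early HTTP/1.0 form for simplicity (the function name is illustrative, not a library call):

```python
# Sketch: the raw text a browser sends to ask a web server for a page.
# HTTP/1.0 shown for simplicity; real browsers send many more headers.
# (build_get_request is an illustrative name, not part of any library.)

def build_get_request(host: str, path: str = "/") -> str:
    """Return the request text for fetching `path` from `host`."""
    return (
        f"GET {path} HTTP/1.0\r\n"   # method, resource and protocol version
        f"Host: {host}\r\n"          # which site is wanted
        "\r\n"                       # blank line marks the end of the headers
    )
```

Sending this text to the server (traditionally on port 80) returns the page, whose hyperlinks in turn trigger further requests – which is all ‘browsing the web’ really is.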
The graphical interface opened up the internet to novice users, and in 1993 its use exploded as people were allowed to ‘dial in’ to the internet, using a home computer and a modem to ring up an ‘Internet Service Provider’ (ISP) for a connection to this (now huge) network. Before this, the only computers connected were at universities and other large organisations that could afford to lease lines between each other to carry the data – but now anyone could use the internet, and it evolved into the ‘Information Superhighway’ that we know and (possibly) love today.
History of Windowing Systems
The first concept of a windowing (or WIMP – windows, icons, menus and pointers) system appeared in the ‘Xerox Star’ system in 1981. This idea was then copied by Apple in 1984 as they developed the MacOS operating system for their Apple Macintosh, and later by Microsoft, who wrote the first version of Windows in 1985. Windows was a GUI (graphical user interface) for Microsoft’s own operating system, MS-DOS, which had shipped with IBM PC and compatible computers since 1981. Windows was designed to look a bit like MacOS – unfortunately, so similar that Apple decided to take Microsoft to court over it, a court case that was to run for many years.
This first version of Windows wasn’t very powerful, and so not incredibly popular. Windows 2 came out in 1987 and was a little more popular than the original. The first really popular version was Windows 3.0, released in 1990. It benefited from the improved graphics available on PCs by this time, and also from the 80386 processor, which allowed ‘true’ multitasking of Windows applications. This made it more efficient and more reliable when running more than one piece of software at a time, and it would even run and multitask older MS-DOS based software. Windows 3 made the IBM PC serious competition for the Apple Mac. Various improvements – Windows 3.1 and Windows 3.11 – were released, although they didn’t provide many significant changes to the way Windows looked or worked.
Available around the same time as Windows 3 was IBM’s OS/2 (which was actually written in partnership with Microsoft). OS/2 Warp, a full 32-bit operating system, came out long before Windows 95 and boasted many similar features. Unfortunately IBM failed to market it successfully and it didn’t catch on.
Windows 95 was released (no surprises there) in August 1995. Although it shared much code with Windows 3 and even MS-DOS, Windows 95 had two big advantages. First, it was a complete operating system: you no longer needed to buy MS-DOS and then install Windows on top of it. Second, it was written specifically for the 80386 and later processors and made ‘full’ use of their 32-bit facilities. In this respect Windows 95 moved closer to Windows NT.
Windows NT (New Technology) was developed alongside Windows for use on servers and in businesses. It was designed to be more reliable and secure than Windows 95, but as a trade-off it was less compatible with older MS-DOS based software (crucially for the home market, it wouldn’t run many video games).
June 25, 1998 saw the release of Windows 98, which was very similar to Windows 95 except that it provided a new method of storing data on disk (the FAT32 file system) – one that is more efficient and supports disks larger than the 2 GB allowed by the first release of Windows 95.
It is Microsoft’s aim – with Windows 2000 – to merge the two lines of Windows (Windows 95/98 and Windows NT) into one product.
That just leaves the question of the court case between Apple and Microsoft, the one Apple started in 1985 by trying to sue Microsoft for copying the ‘look and feel’ of its operating system. The answer came on August 6, 1997 when, after 18 months of losses at Apple, Microsoft ‘bailed’ the company out of serious financial trouble by buying 100,000 non-voting shares for $150 million. Microsoft had several political reasons for doing this, but one condition was that Apple had to drop the long-running court case.
It is also worth mentioning another windowing system, developed in the late 1980s: the X Window System. This was developed at MIT for use on graphics workstations and, due largely to the availability of its source code, has become the standard graphical interface on most Unix-based systems – including most Linux distributions.