The History of Electronic Musical Instruments

Research Paper

Electronic instruments and digital audio have changed the world's musical paradigm forever. The advent of consumer electronics in the 1920s gave musicians and composers alike the ability both to create new sounds and to manipulate them by electrical means.

The 20th century saw the greatest evolution of musical styles and instruments yet, most of it heavily influenced by electronic and digital media.

Since the early 1920s, many electronic renditions of acoustic instruments have become widely popular and available to the average musician: electric guitars and basses, electronic pianos, synthesized drums, and the ever-popular drum machines and bass synthesizers. Some of these instruments can themselves synthesize multiple acoustic instruments and sounds by recreating the waveform (the shape of the sound wave) each produces.

Electronics has offered not only a means to alter the sounds of existing instruments but also a way to generate new sounds, effects, tones, and timbres that could never be produced in a natural setting.

The years following the first electronic instruments and synthesizers came to be called the Digital Era. Employing computers to perform operations similar to those of electronic devices required converting an electrical signal, called an analogue signal, into the series of 1s and 0s that computers use to process information, hence the term digital.

Because computers allowed musicians to arrange synthesized sounds and samples (short snippets of a recording) in ways never before fathomable, it was only natural for the medium to keep expanding.

Soon amateurs and professional musicians alike were making types and genres of music never before heard in the mainstream media.

This gave the average listener a new musical experience. A description of that technology follows.

In 1919 in Petrograd (now St. Petersburg), the Russian military scientist Lev Termin, while working on a vacuum-tube device to locate enemy radio transmitters for the military, noticed that his body detuned the radio receiver he was working on, depending on how close he stood to it. Termin, a trained classical cellist, immediately recognized the musical importance of his discovery. He played several tunes for his colleagues and later built the first prototype intended as a musical instrument, afterwards dubbed the Theremin (an anglicized version of Termin). The most amazing feature of this radio receiver turned musical instrument was that playing it required no touch at all: the musician manipulated hand movements through two electromagnetic fields, one changing the pitch and the other the amplitude of the sound.

By the late 1920s the Theremin had been used in many classical compositions and concert works. Its invention as the first electronic musical instrument inspired a whole new field of instrumentation, including the Ondes Martenot, the electric organ, and finally the synthesizers we use today. This accidental discovery may have been the driving force and inspiration behind today's synthesizers. Perhaps it was ahead of its time, but a wide array of modern electronic sounds and devices can be credited to this revolutionary invention by mistake.

Actual electronic music began in the 1950s and 60s. The initial aim of the technology was to transmit, store, and reproduce the live experience of sound. Earlier in the century, electronic instruments of limited capability were already being invented and developed (e.g. the Theremin), the most familiar of these being the electronic organ. Others, such as the Ondes Martenot, an instrument that produces sound by means of an electronic oscillator and is operated from a keyboard, were used only occasionally in concert music. These nontraditional instruments led the way for future developments in electronic music.

Synthesizers provided the second step in this genre of musical devices. A synthesizer, built especially for sound synthesis and modification, is a device that combines sound generators and sound modifiers in a single unit under one control system. The first and most complex of these was RCA's Electronic Music Synthesizer, first released in 1955. A more advanced model, the Mark II, was installed in 1959 at the Columbia University studio in New York, and it is still there today. It is an enormous machine capable of generating any imaginable sound or combination of sounds, with an enormous variety of pitches, durations, and rhythmic patterns far beyond the abilities of the traditional instruments we are familiar with. The Mark II's ability was demonstrated in a 1964 recording, Milton Babbitt's Ensembles for Synthesizer. The synthesizer represented an enormous step forward for the composer, who could now specify all the characteristics of a sound beforehand by means of a punched paper tape, eliminating most of the time-consuming tasks associated with tape-recorder music. The development of smaller, portable synthesizers that can be played directly made live electronic performance possible. The Moog synthesizers built by Dr. Robert Moog in the mid-1960s combined this technology with that of the Theremin.
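The generator-plus-modifier architecture described above can be sketched in a few lines of Python. This is a modern illustration of the idea, not how the vacuum-tube RCA machine actually worked; the sample rate and envelope are arbitrary choices for the example:

```python
import math

SAMPLE_RATE = 8000  # samples per second (arbitrary for this sketch)

def generator(freq, n_samples):
    """Sound generator: a square-wave oscillator."""
    return [1.0 if math.sin(2 * math.pi * freq * i / SAMPLE_RATE) >= 0 else -1.0
            for i in range(n_samples)]

def modifier(samples):
    """Sound modifier: a linear fade-out (amplitude envelope)."""
    n = len(samples)
    return [s * (1 - i / n) for i, s in enumerate(samples)]

# A single control system drives both stages: generator, then modifier.
tone = modifier(generator(freq=220.0, n_samples=SAMPLE_RATE))
```

The same chaining idea, an oscillator feeding a chain of modifiers under one set of controls, is what the essay means by combining generators and modifiers "all together in one".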

After Columbia acquired the RCA synthesizer in 1959 and Milton Babbitt of Princeton had composed on it, the studio became the Columbia-Princeton Electronic Music Center. Many famous composers have worked there since, including Babbitt, Wuorinen, and Davidovsky. Electronic music studios are now common in universities and colleges across the world, though today they obviously employ a far greater arsenal of music-production and recording technologies.

The third stage of electronic music's life, a stage that continues to grow even today as new technology is developed, involves the use of the computer as a sound generator. The basic idea of computer music is that the shape of any sound wave can be drawn on a graph, and this graph can in turn be described by a series of numbers (coordinates), each representing a point on the graph. That series of numbers can be translated by a device known as a digital-to-analogue converter into an audible signal that can be recorded to tape and played back, or the numbers can be stored digitally on the computer's hard disk. Since composers obviously do not think in terms of the shapes of sound waves, computer programs were written to translate musical specifics, including pitches, durations, and dynamics, into the numbers describing the shape of the sound (its waveform). Computer sound generation is the most flexible of all these electronic media.
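The idea that any sound wave is just a series of numbers can be shown with a short Python sketch using only the standard library. It computes the points of a sine wave and writes them to a WAV file; a sound card's digital-to-analogue converter turns those numbers back into a voltage you can hear (the file name and parameters are arbitrary for the example):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # points per second of sound
FREQ = 440.0          # concert A
DURATION = 1.0        # seconds

# Each point on the waveform graph becomes one 16-bit integer.
samples = [int(32767 * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE))
           for i in range(int(SAMPLE_RATE * DURATION))]

# Store the numbers digitally; playback hardware performs the
# digital-to-analogue conversion described in the text.
with wave.open("sine.wav", "w") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack("<" + "h" * len(samples), *samples))
```

A program translating "pitch A4, one second, full volume" into this list of numbers is exactly the kind of translation the paragraph describes.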

Several creativity-enhancing features of electronic music have compelled many musicians to adopt the new medium. First, the composer can create "new sounds", either entirely computer-generated or built by sampling others' music and rearranging it into a new composition. The composer can also work directly with the sounds and produce a finished track without the help of a live performer. Serial music (music totally controlled and specified by formal procedures) works especially well on a computer, because the computer frees the composer from the limitations of traditional instruments. An entire electronic work (or track) is fixed in exactly the form the composer wrote it. Computers can also be programmed to make random selections within certain limits (for example, taking a sample and regenerating it at different pitches) in accordance with instructions provided by the programmer (e.g. telling the computer to add effects to existing samples). This technique can produce many variations of the same song; many artists today use something similar when remixing a track.
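The constrained random selection described above, regenerating a sample at a randomly chosen pitch within limits, can be sketched as follows. The nearest-neighbour resampler here is a deliberately crude illustration, not how any particular sequencer or sampler works, and the semitone limits are an arbitrary assumption:

```python
import random

def pitch_shift(sample, ratio):
    """Crude pitch shift by resampling: ratio 2.0 -> one octave up."""
    n = int(len(sample) / ratio)
    return [sample[int(i * ratio)] for i in range(n)]

def random_variation(sample, semitone_range=(-5, 7), seed=None):
    """Make a random selection *within limits set by the programmer*."""
    rng = random.Random(seed)
    semitones = rng.randint(*semitone_range)  # constrained randomness
    ratio = 2 ** (semitones / 12)             # equal-temperament ratio
    return pitch_shift(sample, ratio)

# Two runs with different seeds give two variations of the "same song",
# much like a remix:
melody = [0.0, 0.5, 1.0, 0.5] * 100
variant = random_variation(melody, seed=1)
```

Each call with a new seed yields a different pitched rendition of the same source material, which is the essence of the remixing technique the paragraph mentions.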

However, a combination of electronic sounds with live music has also been used more often in the last few decades. Live performers supply an important visual element and at the same time provide a link to more conventional music; most people would not find watching a computer make music very enjoyable. The most interesting feature of electronic music is that it has also influenced live music, challenging performers to reinvent themselves and produce kinds of sounds a traditional musician might not think of, and suggesting to composers new ways of thinking about acoustic instruments. Technology, which some people believe may some day replace live performance, has in fact re-inspired it. Electronic music at its best creates sounds and experiences that cannot be composed in any other medium, although those who write computer music remain free to compose in any musical style they like, just as an acoustic musician would.

By the late 1970s, synthesizers had been established as viable musical instruments, mostly in pop and rock (more experimental rock bands such as Pink Floyd and The Doors made heavy use of them). One of the synthesizer's greatest capabilities was that it could generate and shape electrical oscillations in a variety of ways, meaning the player could sculpt the very shape of a sound. It could emulate the sounds of many different instruments, or even create the sounds of as yet unimagined ones, which seemed the greatest marvel of all.

Performers and composers soon learned to appreciate the strengths of instruments made by different companies. The string sounds of one model of synthesizer might be extremely impressive, for example, while the filter effects of another might be more appealing. A filter effect passed the electrical signal through a series of transforming circuits within the synthesizer to add effects such as noise, distortion, or echo. Because early synthesizers did not produce sounds as rich as those of natural instruments, some musicians developed the technique of playing two or more synthesizers simultaneously to overlap, or fatten, the available sound. It became quite common for a musician to keep several types of synthesizer on hand to meet his musical requirements as effectively as possible.
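The fattening trick described above, playing two synthesizers at once, amounts to mixing two oscillators tuned a few hertz apart. A minimal sketch (the sample rate and detune amount are illustrative assumptions):

```python
import math

SAMPLE_RATE = 8000  # samples per second (arbitrary for this sketch)

def sine(freq, n):
    """One synthesizer voice: a pure sine oscillator."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def layer(freq, detune_hz=3.0, n=SAMPLE_RATE):
    """Overlap two voices a few hertz apart; the slow beating between
    them is what players heard as a 'fatter' sound."""
    a = sine(freq, n)
    b = sine(freq + detune_hz, n)
    return [(x + y) / 2 for x, y in zip(a, b)]

fat = layer(440.0)
```

Averaging the two voices keeps the mix within the same amplitude range as a single voice while preserving the beating effect.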

Manufacturers continued to introduce a variety of devices to enhance the live-performance side of electronic music. Synthesizer expanders, sequencers, and drum machines all became part of a conventional musician's collection. However, it was extremely difficult to synchronize such a complicated gathering of gadgets: to play together live, the devices had to communicate somehow. Almost every device was designed to function by itself, based on assumptions about how each band member would use it. Moreover, different manufacturers designed their instruments with differing electrical schemes, connectors, and other conventions. An electronic-acoustic musician had to use an assortment of interface boxes even to begin using his instruments together.

Some manufacturers began to make their own products compatible with each other, but a player could rarely connect devices from different manufacturers without great problems and errors. A solution was needed.

So, in 1981, conversations between Japanese and American companies at the NAMM show (the convention of the National Association of Music Merchants) led to the idea of a standard interface for electronic musical instruments. Six major companies at the leading edge of electronic music technology, Kawai, Korg, Roland, Yamaha, Oberheim, and Sequential Circuits, agreed to discuss the idea further. In 1982 the Sequential Circuits Prophet-600 was introduced as the first synthesizer to include the new standard interface, called MIDI (Musical Instrument Digital Interface). A public demo in 1983 showed a Roland synthesizer and the Prophet-600 interfaced for the first time. A standard called the "MIDI 1.0 Specification" was agreed upon in August of 1983 and then made available to all other interested manufacturers.
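What MIDI standardized was, at bottom, a simple byte protocol that any manufacturer's device could speak. A Note On message, for instance, is three bytes: a status byte carrying the channel, then the note number and velocity, as laid out in the MIDI 1.0 Specification. A small sketch of building and decoding such a message:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.
    Status byte: 0x90 | channel (channels 0-15);
    data bytes: note number and velocity, each 0-127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def parse(msg):
    """Decode a Note On message back into its fields."""
    status, note, velocity = msg
    return {"channel": status & 0x0F, "note": note, "velocity": velocity}

# Middle C (note 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 64)
```

Because every manufacturer agreed on this byte layout, a Roland keyboard could drive a Sequential Circuits synthesizer without the interface boxes described earlier.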

Within a year, MIDI was well established, widely popular, and included in dozens of new products. It remains extremely popular to this day and has extended the capabilities of many instruments and studios. MIDI-compatible computers can be used to record and play back music performed on MIDI instruments. Musicians and instrument manufacturers alike have benefited from this advance in music technology.

With MIDI technology now implemented in computer music systems, one can only guess how far the computer and electronic music industry will take aspiring artists and experimentalists alike. Now that the computer and the synthesizer can be interfaced seamlessly, a future listener may be unable to tell the difference between an authentic acoustic sound and a synthesized emulation. Until then, the vast medium of electronic sound reproduction continues to grow more rapidly than any genre of music before it.
