
Intelligence: Genetic and Environmental Factors

Human intelligence, one of the most interesting and controversial areas in behavioral genetics, is currently assumed to be subject to both genetic and environmental influences.

While this assumption is accepted by a majority of geneticists and behavioral scientists, there is great disagreement on the degree of influence each contributes. Arguments for environmental influences are compelling; at the same time, there is growing evidence that genetic influence on intelligence is significant and substantial (Eysenck, 1998; Mackintosh, 1998; Plomin, 1994; Steen, 1996). The purpose of this paper is to explore the question: “How is intelligence influenced by heredity and environment?”

What is Intelligence?

It is often difficult to remember that intelligence is a social construct, and as such is limited to operational definitions. Binet and Simon (1905, as cited in Mackintosh, 1998) defined it purely in terms of mental ability: “the ability to judge well, to comprehend well, to reason well.” Wechsler (1944, as cited in Mackintosh, 1998) added behavioral factors: “the aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with the environment.” Sternberg (1985) synthesized the previous definitions, defining intelligence as “the mental capacity of emitting contextually appropriate behavior at those regions in the experiential continuum that involve response to novelty or automatization of information processing as a function of metacomponents, performance components, and knowledge acquisition components.” Gardner (1993) took the definition to a societal level: “the ability or skill to solve problems or to fashion products which are valued within a cultural setting.”

Measurement of Intelligence: IQ Tests

Alfred Binet developed the first IQ tests to identify children who would not benefit from public school instruction. His approach rested on the idea that certain mental tasks are appropriate to certain ages: the ability to recite the names of the months, for example, is expected of a ten-year-old but would be rare in a three-year-old. Binet quantified intelligence as the Intelligence Quotient (IQ): the ratio of mental age to chronological age, multiplied by 100. Reasoning that low intelligence stemmed from improper development, Binet envisioned the test as a first step in treatment: a diagnostic instrument used to detect children with inadequate intelligence in order to treat them using “mental orthopedics.”
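Binet's ratio can be written as a one-line formula. The Python sketch below illustrates it with hypothetical ages; none of these numbers come from the studies cited here.

```python
# Ratio IQ as Binet defined it: mental age divided by chronological age,
# multiplied by 100. The ages below are invented illustrative values.

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Return the classical ratio IQ: (MA / CA) * 100."""
    return mental_age / chronological_age * 100

# A ten-year-old performing at a twelve-year-old level:
print(ratio_iq(12, 10))  # 120.0
# A ten-year-old performing at an eight-year-old level:
print(ratio_iq(8, 10))   # 80.0
```

A score of 100 thus simply means that mental age matches chronological age, which is why the ratio definition only makes sense for children and was later replaced by deviation scoring in adult tests.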

Binet argued forcefully against the idea that intelligence is fixed or innate: “We must protest and react against this brutal pessimism” (Lewontin, Rose, & Kamin, 1984). However, those who translated his test into English tended to disagree, arguing that the test measured an innate and immutable, genetically inherited characteristic. After Binet’s death in 1911, the Galtonian eugenicists assumed control, shifting the focus firmly toward genetic explanations by insisting that differences in intelligence between social classes and races were due to inherent genetic differences.

Over time, the tests were standardized to correspond to a priori conceptions of intelligence by including items that correlated well with school performance. Test items that differentiated between genders were removed; items that differentiated between social classes were left in “because it is these differences that the tests are meant to measure” (Lewontin, Rose, & Kamin, 1984).

There are many criticisms of the use of IQ tests as a measure of intelligence. IQ tests limit our definition of intelligence: they are powerful predictors only in fields where literacy and mathematical ability are of central importance. Mental aptitudes not requiring excellence in these two abilities are left out. The result is that we tend to view creative abilities such as art, music, dance, cooking, and raising children as having little connection with IQ. Other criticisms are more serious: there is a long and ugly history of using IQ tests for eugenic purposes. One of the more benign eugenic programs involves sorting people into categories for educational purposes.

In these programs (“tracking” in the US, “streaming” in England), children are sorted into “fast” and “slow” learners and placed in classes accordingly, which may seriously affect career and life choices.

Another use for IQ tests is to predict outcomes. Eysenck (1998) cites a study in which all five-year-olds on the Isle of Wight were given IQ tests and final school grades were predicted. At age sixteen, the children were tested again; IQ scores had changed very little and grades had been “very accurately” predicted. Eysenck takes exception to the idea that IQ tests do not measure anything more than the ability to take IQ tests: he emphatically states that IQ predicts achievement. While it is not difficult to see a relationship between achievement and intelligence, defining intelligence as achievement precludes the possibility that some children of lesser intelligence have greater motivation to succeed. This author can cite several personal examples of brilliant students who lack motivation, and of less than stellar students who, through determination, achieve great successes. Lewontin, Rose, and Kamin (1984) dismiss outright the idea that IQ tests alone are good predictors of future social success, preferring to attribute such success to family environment.

Numerous IQ tests exist; by 1978 there were at least 100 tests described as measuring intelligence or ability (Mackintosh, 1998). All of them are validated by how well they agree with older standards such as the Stanford-Binet (Lewontin, Rose, & Kamin, 1984). The Wechsler Adult Intelligence Scale (WAIS-R) and the Wechsler Intelligence Scale for Children (WISC-III) are the most often administered IQ tests today; they are composed of eleven subtests and two subscales: verbal (information, vocabulary, comprehension, arithmetic, similarities, and digit span) and performance (picture completion and arrangement, block design, object assembly, and digit symbol). The next most popular test, the Stanford-Binet, employs fifteen subtests with four subscales: verbal reasoning (e.g., “How are a porpoise, dolphin and whale different from a shark?”), abstract/visual reasoning (completion of matrix problems), quantitative reasoning (e.g., sequential questions such as, “What comes next: 5 10 9 18 17 34 33 __ __”), and short-term memory (repeating digits forwards and backwards). Raven’s Matrices relies entirely on nonverbal methods to measure IQ, requiring the test subject to discern the relationships between different objects. While both the Wechsler and Stanford-Binet are administered to individuals, Raven’s Matrices may be given to a large number of people at the same time. Interestingly, despite the vast differences between the two kinds of test, correlations between Raven’s and Wechsler scores in the general population fall between 0.40 and 0.75 (Burke, 1958, 1985; Court & Raven, 1995; as cited in Mackintosh, 1998).
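As a worked example of the quantitative reasoning item quoted above, one plausible reading of the sequence’s rule is “double the previous term, then subtract one,” applied alternately. The short Python sketch below encodes that reading; the function name and the completion are this author’s interpretation, not part of the original test materials.

```python
# Extend "5 10 9 18 17 34 33 __ __" under one plausible rule:
# alternately double the previous term, then subtract one.

def extend_sequence(seq, target_length):
    """Grow seq to target_length, alternating 'double' and 'minus one' steps."""
    seq = list(seq)
    while len(seq) < target_length:
        if len(seq) % 2 == 1:          # after an odd number of terms, double
            seq.append(seq[-1] * 2)
        else:                          # after an even number, subtract one
            seq.append(seq[-1] - 1)
    return seq

print(extend_sequence([5, 10, 9, 18, 17, 34, 33], 9))
# [5, 10, 9, 18, 17, 34, 33, 66, 65]
```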

Theories of Intelligence

Factor analysis has been employed to find patterns in the correlation matrices of IQ tests, with varying results (Mackintosh, 1998). In 1927, Charles Spearman developed a two-factor theory of intelligence, which posits a general intelligence (g) that may be measured with varying degrees of accuracy by tests. He proposed that all tests of ability correlate to some extent because they all measure g. Other theories began to arise: by 1941, Thurstone & Thurstone suggested that IQ tests must measure a number of independent factors, which they called “primary mental abilities,” and that researchers ought to focus on creating factorially pure tests. They found six distinct primary mental abilities: numerical, verbal comprehension, word fluency, space, reasoning, and memory. Guilford may hold the record for the largest number of factors distinguished: in 1967 he defined a factor as the completion of a specific type of operation on a specific type of product with a specific type of content. He found five operations, six products, and four kinds of content, for a total of 120 (5 x 6 x 4) factors. Unfortunately, Horn & Knapp (1973; as cited in Mackintosh, 1998) applied his factorial procedures to his test data and found they supported randomly generated factorial theories just as well.
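Spearman’s claim can be illustrated with a toy calculation: if a single g underlies all subtests, the largest eigenvalue of the subtest correlation matrix should account for most of the shared variance. The correlation matrix and the pure-Python power iteration below are invented for illustration; they are not data from any study cited here.

```python
# A toy "general factor" check: find the largest eigenvalue of a
# hypothetical 4-subtest correlation matrix by power iteration and
# report the share of total variance (the trace) it accounts for.
# All numbers are invented for illustration.

R = [
    [1.00, 0.60, 0.55, 0.50],
    [0.60, 1.00, 0.50, 0.45],
    [0.55, 0.50, 1.00, 0.40],
    [0.50, 0.45, 0.40, 1.00],
]

def largest_eigenvalue(matrix, iterations=200):
    """Approximate the dominant eigenvalue of a square matrix by power iteration."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)      # normalize so the largest entry is 1
        v = [x / norm for x in w]
    return norm                            # converges to the dominant eigenvalue

lam = largest_eigenvalue(R)
total = sum(R[i][i] for i in range(len(R)))  # trace = total standardized variance
print(f"Share of variance on the first factor: {lam / total:.2f}")
```

A single dominant factor explaining over half the variance is consistent with Spearman’s picture, but, as noted below, it cannot by itself rule out the multi-factor accounts of Thurstone or Guilford.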

Contemporary factor analysts generally cite nine different kinds of general mental ability: fluid reasoning (critical thinking ability), acculturation knowledge (breadth and depth of knowledge of the dominant culture), quantitative knowledge (mathematical ability), short-term memory (involving events in the last minute or so), long-term memory, spatial ability (measured in tasks such as comparing rotated objects for similarity), auditory processing (perception of sound patterns under distraction or distortion), processing speed (speed of response given an intellectually simple task), and correct decision speed (speed of response given an intellectually challenging task).

Factor analysis is an incomplete solution to the problem of intelligence: while it describes relationships between different IQ tests, it cannot tell us much about the structure of human abilities. The existence of a general factor describing the correlation of IQ tests does not imply that it measures a cognitive process; it is quite possible that the tests measure any number of processes that happen to overlap, thus the various theories mentioned above.

Genetic factors

Heritability of intelligence (or of any other characteristic, for that matter) is the proportion of the total variation in the characteristic in a population that can be attributed to genetic differences between members of the population. Estimates of heritability are made by attempting to separate genetic from environmental sources of variance. Evidence exists that intelligence runs in families: the correlation of IQ scores of genetically related people increases with the closeness of the relationship, and “this correlation pattern remains even when controlling for social class, education, race, gender, and the like” (Herrnstein & Murray, 1994). Lykken, Bouchard, & McGue (1993) found an average correlation of about .45 for IQ scores of biological parents and offspring and for siblings living together.

Most of our understanding of the genetics of intelligence is grounded in twin and adoption studies, which have documented significant and substantial genetic influence (Plomin, 1994; Plomin & Petrill, 1997; Steen, 1996): for example, the correlation between scores of monozygotic (MZ) twins reared together (approximately .85) is higher than the correlations of dizygotic (DZ) twins and less closely related siblings (Plomin & Petrill, 1997). There is also a high correlation in IQ scores between MZ twins who were raised apart (Joseph, 1998). Dudley (1991) found a higher correlation between the IQ scores of adopted children and their biological parents than with their adoptive parents. A major concern with both twin and adoption studies, however, is the degree of correlation between the environments of adoptive and biological families (Lewontin, Rose, & Kamin, 1984). There have also been a number of methodological problems (Steen, 1996), as well as several instances of fraud (Cyril Burt, for example). Consequently, it is difficult to use these studies to bolster arguments for the heritability of intelligence.
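Twin correlations like these are the raw material of classical heritability estimates. A minimal sketch, assuming Falconer’s formula h² = 2(r_MZ − r_DZ): the MZ correlation (.85) is the figure cited above, while the DZ value (.60) is a typical figure from the twin literature, assumed here purely for illustration.

```python
# Falconer's back-of-envelope heritability estimate from twin data:
# h2 = 2 * (r_MZ - r_DZ). The MZ correlation is the value cited in the
# text; the DZ correlation is a hypothetical illustrative value.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Twice the gap between identical- and fraternal-twin correlations."""
    return 2 * (r_mz - r_dz)

h2 = falconer_h2(0.85, 0.60)
print(f"Estimated heritability: {h2:.2f}")  # ~0.50
```

The formula rests on strong assumptions (equal environments for MZ and DZ pairs, purely additive genetic effects), which is precisely where the criticisms by Lewontin, Rose, and Kamin bite.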

Exactly how much of intelligence is attributable to genetics is unknown, and estimates vary widely. Arthur Jensen (1969, as cited in Eysenck, 1998) placed the heritability of intelligence at 80 percent, Eysenck (1998) at 70 percent, Herrnstein and Murray (1994) between 60 and 80 percent, and Plomin and Petrill (1997) at 50 to 60 percent. However, attempts to quantify heritability have serious problems in explaining some of the data. Interestingly, heritability estimates vary with age: a study of Swedish twins with an average age of 60 (some reared together and others apart) indicates that heritability increases from about 40 percent in childhood to about 60 percent in early adulthood and about 80 percent in later life (Plomin, 1994; Plomin & Petrill, 1997). Eysenck argues that this is because “we structure our environment based on genetic drives” (p. 42). He reasons that environment exerts a greater influence on children, who have little choice; as they age, the diversity and availability of choices expands, “and if these choices are at least partially determined by genetic factors, the influence of environment is thereby diminished.” Benno (1990) suggests the difficulty in determining relative contributions lies in the interdependency of genes and environment.

Lewontin, Rose, and Kamin (1984) suggest that the heritability of IQ is irrelevant and unimportant, as heritability is not synonymous with unchangeability. They attribute this confusion to a general misunderstanding about genes and development. They assert that the genotype is inherited and unchanging; the phenotype is in a constant state of flux, involving morphological, physiological, and behavioral properties. In simpler terms, the loss of a limb is irreversible but not heritable; Wilson’s disease is heritable but not irreversible. They may be mistaken about the heritability of the genotype, however: Stoltenberg and Hirsch (in press) explain that parental genotypes are not passed down intact, because they are broken up at meiosis and a new genotype is formed at conception.

One of the consequences of the Human Genome Project, tasked with sequencing the entire human complement of DNA, is a public perception that scientists are developing a molecular understanding of the human condition. Seldom a month goes by without a media article trumpeting a new “genetic link” to a behavior or disease. Everything from schizophrenia to television watching is postulated to be “linked” to genetics, yet scientists are a long way from being able to explain the ramifications of the human genome sequence. Kaye (1992) suggests that phrasing used by the media, such as “gene for alcoholism,” is misleading: Noble and Blum “had only suggested a possible genetic component contributing indirectly to the alcoholism of some individuals.” As yet, scientists have been unable to fully trace a chain of events leading from genes to behavior; however, they have recently begun to identify specific genes that influence cognitive abilities and disabilities, most of which involve rare single-gene disorders such as phenylketonuria (PKU), Fragile X mental retardation, and the early-onset familial form of Alzheimer’s disease (Plomin & Petrill, 1997; Steen, 1996).

Just how important are genes? The “Central Dogma” of molecular biology, attributed to Francis Crick, is that “DNA makes RNA, RNA makes protein, and ‘proteins (to oversimplify just a bit) are us’” (Kaye, 1992). If this is true, then DNA is simply a “blueprint” that determines our humanity, but things are definitely more complicated. For example, there is evidence that heritable variation does not arise solely from genes but may arise from cell structures as well: consider Sonneborn’s melon-striped paramecia experiment (Goodwin, 1994). He was able to remove a patch of cilia from the surface of a normal paramecium and put it back in reverse orientation, creating a “melon stripe.” All future generations of this paramecium were melon striped, with the same reversed row of cilia. Sonneborn demonstrated the same effect, cytoplasmic inheritance, in an asexually reproducing worm (Stenostomum).

In sexual reproduction, one cell from the female and one from the male unite to produce the first cell of the new organism. The male contribution consists solely of chromosomes, which contain genes that influence the development and form of the new organism. Chromosomes are composed of deoxyribonucleic acid (DNA) and are located in the nucleus of the cell. In the sex organs (testes and ovaries in humans), meiosis creates gametes with haploid chromosome sets: each gamete contains half of the chromosomal information necessary to create a new organism. Conception occurs when the male and female gametes unite, forming a zygote with a complete, diploid chromosome set. While the male contributes only his gamete, the female contributes, in addition to her gamete, cytoplasm that nourishes the developing zygote as well as specific proteins that direct cell differentiation. Along with mutation, meiosis helps maintain the diversity of a population: the homologues of each chromosome pair are split so that each gamete receives one from each pair, assuring independent assortment, and the homologues also exchange genetic material during recombination.

The genotype of an organism, uniquely formed at conception, contains its complete genetic endowment; its phenotype depends on its interaction with the environment and consists of its appearance, structure, physiology, and behavior. While the phenotype depends on the genotype, it may not be assumed that the genotype determines the phenotype: a single genotype may result in different phenotypes depending on the environment.

Quantitative genetics was developed to study traits, such as behaviors, that are continuously distributed in a population (Stoltenberg & Hirsch, in press). To assess the resemblance between relatives in terms of specific traits, the overall phenotypic variance is partitioned into genetic and environmental components. Genotypic variance is then partitioned into additive, dominance, and interactive variance components. Additive genetic variance, or “breeding value,” is not strictly additive in the mathematical sense, as it is entirely dependent on the population from which the mate is selected. The dominance deviation is the difference between the genotypic and additive values, and is caused by the effect of one allele over another in the classic Mendelian dominant/recessive sense. Interaction deviations occur when there are nonadditive relationships between loci. Broad-sense heritability is an estimate of the extent to which the genotype determines the phenotype. Narrow-sense heritability is used to predict the outcome when selecting for a specific trait in a population, and is estimated as the ratio of additive to phenotypic variance. Stoltenberg and Hirsch caution that heritability is not the same as inherited: while heritability is an estimate of the degree of relationship between genotype and phenotype, it does not give us the proportion of a trait that is genetically determined.
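The variance partition described above can be sketched numerically. All component values below are invented for illustration; the sources cited give no such numbers.

```python
# Variance partitioning in quantitative genetics: phenotypic variance is
# the sum of genetic and environmental variance, and genetic variance
# splits into additive, dominance, and interaction parts. Every value
# here is a hypothetical illustrative number.

V_additive    = 40.0  # additive ("breeding value") variance (hypothetical)
V_dominance   = 10.0  # Mendelian dominance deviations (hypothetical)
V_interaction =  5.0  # nonadditive effects between loci (hypothetical)
V_environment = 45.0  # variance not attributable to genotype (hypothetical)

V_genetic    = V_additive + V_dominance + V_interaction
V_phenotypic = V_genetic + V_environment

broad_h2  = V_genetic / V_phenotypic    # broad-sense heritability
narrow_h2 = V_additive / V_phenotypic   # narrow-sense heritability

print(f"broad-sense h2 = {broad_h2:.2f}, narrow-sense h2 = {narrow_h2:.2f}")
```

Note how the two estimates diverge whenever dominance or interaction variance is nonzero, which is one reason narrow-sense heritability, not broad-sense, is the quantity used to predict the response to selection.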