Developmental Psychology
Developmental
psychology studies change in psychological structures and processes during
the life cycle. Although traditionally focused on childhood and
adolescence, it has extended its scope to adulthood and old age as well.
Two factors stimulated the rise of developmental psychology towards the end
of the nineteenth century. First, Darwin’s claim of continuity between
humans and nature revived the discussion among philosophers such as Locke,
Kant and Rousseau regarding the origins of mind. It was hoped that the
study of childhood would unlock the secrets of the relationship between
animal and human nature. Darwin himself kept notebooks on the development
of his first child, setting a trend that was to be followed by many of the
great names in the discipline.
Given Darwin’s
role in the genesis of developmental psychology, it is hardly surprising
that biological views, in which development is regarded as the unfolding of
genetically programmed characteristics and abilities, have been strongly
represented in it. (Indeed, the etymological root of the word development
means ‘unfolding’.) Typical of such views are Stanley Hall’s theory that
development recapitulates evolution, Sigmund Freud’s account of the stages
through which sexuality develops, Arnold Gesell’s belief in a fixed
timetable for growth, John Bowlby’s notion of attachment as an instinctive
mechanism and Chomsky’s model of inborn language-processing abilities.
Since 1990 the field of ‘behavioural genetics’ (see Plomin et al. 2001) has
received a boost from the spectacular advances in molecular biology that
have made possible the identification of the approximately 30,000 genes in
human DNA. However, most of the evidence for genetic influences on
individual differences in development still comes from more traditional
quantitative studies involving twins and adopted children: there have been
few successes up to now in identifying the genes responsible for specific
traits, though this field is still in its infancy.
However, those
who place their faith in the influence of the environment have also had a
major influence on developmental psychology. Behaviourism, which attributes
all change to conditioning, may have failed to account satisfactorily for
many developmental phenomena, but its attempts to imitate the methodology
of the natural sciences have left an indelible imprint on the discipline.
Most developmental psychologists now eschew extreme nature or nurture
viewpoints and subscribe to one form or other of interactionism, according
to which development is the outcome of the interplay of external and
internal influences. The second factor that stimulated the growth of
developmental psychology was the hope of solving social problems.
Compulsory education brought about a growing realization of the inadequacy
of traditional teaching methods and a call for new ones based on scientific
understanding of the child’s mind. The failures of nineteenth-century
psychiatry prompted a search for the deeper causes of mental disturbances
and crime, widely assumed to lie in childhood. The application of
developmental psychology to these social problems led to the creation of
cognitive, emotional and social subdisciplines, with the unfortunate
side-effect that the interrelatedness of these facets of the person has
often been overlooked.
In the field
of cognitive development, the contribution of the Swiss psychologist Jean
Piaget (1896–1980) has been unparalleled, though by no means unchallenged.
Piaget developed his own form of interactionism (constructivism) in which
the child is biologically endowed with a general drive towards adaptation
to the environment or equilibrium. New cognitive structures are generated
in the course of the child’s ongoing confrontation with the external world.
Piaget claimed that the development of thought followed a progression of
discrete and universal stages, and much of his work was devoted to mapping
out the characteristics of these stages. His theory is often regarded as a
biological one – yet for Piaget the developmental sequence is not constrained
by genes, but by logic and the structure of reality itself.
Full
recognition of Piaget’s work in the USA had to await a revival of interest
in educational problems during the mid-1950s and the collapse of faith in behaviourism, known as the cognitive revolution, that followed around 1960.
Subsequently, however, developmental psychologists such as Jerome Bruner
challenged Piaget’s notion of a solitary epistemic subject and revived
ideas developed in Soviet Russia half a century before by Lev Vygotsky
(1896–1934). As a Marxist, Vygotsky had emphasized the embeddedness of all
thought and action in a social context; for him, thinking was a collective
achievement, and cognitive development largely a matter of internalizing
culture. His provocative ideas exert an increasing fascination on
developmental psychologists. Others, however, maintain Piaget’s emphasis on
the child as solitary thinker, and seek to understand the development of
thought by reference to computer analogies or neurophysiology. The study of
social and emotional development was mainly motivated by concerns over
mental health, delinquency and crime. The first serious developmental
theory in this area was that of Sigmund Freud, who was invited to the USA by Stanley Hall in 1909; his psychoanalytic notions were received enthusiastically for a time. For Freud, however, the origin of
psychopathology lay in the demands of civilization (above all the incest
taboo), and no amount of Utopian social engineering could – or should –
hope to remove these. From 1920 onwards, US psychology increasingly
abandoned Freud in favour of the more optimistic and hard-nosed
behaviourism. However, the emotional needs of children were of scant
interest to behaviourists, and it was not until the 1950s that John Bowlby
(1907–90) established a theoretical basis for research on this topic by
combining elements of psychoanalysis, animal behaviour studies and system
theory into attachment theory. Bowlby’s conception of the biological needs
of children was informed by a profoundly conservative vision of family
life, and his initial claims about the necessity of prolonged, exclusive
maternal care were vigorously challenged in the rapidly changing society of
the 1960s and 1970s. Nevertheless, his work helped to focus attention on
relationships in early childhood, which have now become a major topic in
developmental psychology. New awareness of the rich and complex social life
of young children has undermined the traditional assumption that the child
enters society (becomes socialized) only after the core of the personality
has been formed. Critics of developmental psychology point to its tendency
to naturalize middle-class, Western ideals as universal norms of
development, to its indifference to cultural and historical variations, and
to the lack of a unified theoretical approach. However, its very diversity
guarantees continued debate and controversy, and its own development shows
no signs of coming to a halt.
Fluid Intelligence
Fluid
intelligence is the set of cognitive processes that people bring to solving
novel tasks and representing, manipulating, and learning new information.
Consequently, fluid intelligence is an important construct in educational
psychology because it attempts to describe and explain aspects of the
individual that influence how, and how well, people solve unfamiliar
problems and learn previously unfamiliar material. The history, nature, and
current controversies surrounding fluid intelligence are herein reviewed.
Early research
in intelligence proposed that intelligence was composed of a single,
unitary characteristic (known as general intelligence, or g) and a
relatively large number of specific abilities. Whereas g was viewed as
broad ability having a profound effect on learning, problem solving, and
adaptation, specific abilities were viewed as narrow and largely trivial.
However, subsequent research differentiated intellectual abilities that
were based, in large part, on culturally specific, acquired knowledge
(known as crystallized abilities, or gc) and intellectual abilities that
were less dependent on prior knowledge and cultural experiences (known as
fluid abilities, or gf ). Although this work was primarily influenced by
factor analysis of relationships among cognitive tests, prediction of
future learning, experimental studies, and other forms of evidence also
supported the crystallized versus fluid distinction. More modern research
has identified other abilities in addition to crystallized and fluid
abilities (e.g., working memory, quantitative reasoning, visualization),
although scholars have not yet agreed on the exact number and nature of these abilities, or on whether they are independent faculties or subordinate to g. In contrast, there is strong consensus on the distinction
between fluid and crystallized intellectual abilities and their substantial
roles in human learning and adaptation.
Contemporary
neuroscience defines fluid intelligence as cognitive processing independent
of specific content. Fluid intelligence is characterized by the ability to
suppress irrelevant information, sustain cognitive representations, and
manage executive processes. Measures of fluid intelligence are strong
predictors of cognitively demanding tasks, including learning, education,
vocational performance, and social success, particularly when such
performance demands new learning or insight rather than reliance on
previous knowledge. Research also suggests strong biological influences on
the development of, and individual differences in, fluid intelligence. For
example, studies demonstrate that (a) fluid intelligence is more heritable than most other cognitive characteristics; (b) fluid intelligence operations are localized in the prefrontal cortex, anterior cingulate cortex, amygdala, and hippocampus; (c) life-span decreases in neurotransmitters are associated with decrements in fluid intelligence; (d) neural speed of response/conduction is moderately associated with (untimed) measures of fluid intelligence; (e) unusual exposure to language (e.g., deafness, nonstandard language background) has little effect on the development and performance of fluid intellectual abilities; and (f) fluid abilities have been rising steadily in Western countries for over a century, in contrast to relatively stable crystallized abilities (i.e., the Flynn effect). Although there is an
association between environmental advantages (e.g., parental education,
socioeconomic status) and fluid intelligence, this association may be
partly or entirely explained by gene-environment correlations. There is
little evidence to suggest that deliberate environmental interventions
(e.g., compensatory education programs) substantially influence fluid
intelligence, although such programs may have at least short-term effects on
crystallized intelligence.
Nearly all
major clinical tests of intelligence include measures of fluid and
crystallized intelligence. Most notably, tests that historically invoked
different models of intellectual processes have recently adopted a
hierarchical model in which measures of fluid and crystallized intelligence
(and sometimes other abilities) are viewed as subordinate to general
intelligence.
Mental Age
Mental age is
a central concept in the study of intelligence measurement. Jerome Sattler
defined mental age as ‘‘the degree of general mental ability possessed by
the average child of a chronological age corresponding to the MA score’’
(p. 172). As an example, a child assessed with a mental age of 9 is viewed
as having the general mental ability of an average 9-year-old child.
From the
perspective of intelligence measurement, each individual has two ages: a
chronological age that is the number of years that the individual has been
alive, and a mental age that is the chronological age of the group of persons whose average test performance matches the test performance of the individual.
The mental age
and the chronological age of an individual need not be the same. For
example, if the mental age of an individual is greater than the
chronological age of the individual, then one can infer that the individual
has above-average intelligence or higher mental ability.
The mental age
and the chronological age for an individual are used to determine the ratio
IQ (intelligence quotient) of the individual. To compute the ratio IQ, one
divides the mental age of an individual by the chronological age of the
same individual and then multiplies that ratio by 100. For example, if a
child has a mental age of 12 and a chronological age of 10, then the ratio
IQ for that 10-year-old child is 120 (i.e., 12/10×100=120).
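As a minimal sketch of this arithmetic in Python (the function name and printed example are illustrative only, not drawn from any test manual):

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    # Ratio IQ: mental age divided by chronological age, multiplied by 100.
    return mental_age / chronological_age * 100

print(ratio_iq(12, 10))  # 120.0, matching the worked example above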
The ratio IQ
was the measure of intelligence used in the 1916 and 1937 versions of the
Stanford–Binet Intelligence Scale and on other tests of mental ability. The
deviation IQ replaced the ratio IQ as the measure of intelligence used in
subsequent measures of intelligence. The deviation IQ reflects the location
of the test performance of an individual in a distribution of the test
performances of other persons with the same chronological age as the
individual, with the mean deviation IQ being typically equal to 100. For
example, if an individual has a test performance that is less than the mean
test performance for same-age peers, then the individual will have a
deviation IQ less than 100. Neither the dated measure of ratio IQ nor the
more contemporary measure of deviation IQ consistently provides concrete
information as to the reasoning skills of individuals.
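The deviation IQ can be sketched in the same way in Python. This is an illustration only: it assumes the common scaling convention of a mean of 100 and a standard deviation of 15 (some historical tests used a different standard deviation), and the norm-group scores are hypothetical; real tests derive the norm distribution from large same-age standardization samples.

from statistics import mean, stdev

def deviation_iq(raw_score: float, same_age_scores: list[float],
                 mu: float = 100.0, sigma: float = 15.0) -> float:
    # Locate the raw score in the distribution of same-age peers (z-score),
    # then rescale to the IQ metric (mean 100, SD 15 assumed here).
    z = (raw_score - mean(same_age_scores)) / stdev(same_age_scores)
    return mu + sigma * z

norms = [40, 45, 50, 55, 60]           # hypothetical raw scores of same-age peers
print(round(deviation_iq(44, norms)))  # below 100, since 44 is below the peer mean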
The mental age
score may also be termed the age-equivalent score, according to Sattler. The
mental age score for an individual provides information as to what age
group is most closely associated with the individual from the perspective
of mental ability. As an example, a 12-year-old child with a mental age
score of 14 indicates that the 12-year-old child has a mental ability more
typical of 14-year-old children than of 12-year-old children.
Sattler noted
that mental age scores have certain limitations. First, differences in
mental age do not reflect the same differences in mental ability across the
age spectrum. For example, the difference in mental ability between a
mental age score of 5 and a mental age score of 2 tends to be greater than
the difference in mental ability between a mental age score of 15 and a
mental age score of 12. Second, the same mental age may reflect different
capabilities for different individuals. For example, two children both with
the mental age score of 12 may have answered different test items
correctly.
Louis
Thurstone was highly critical of the mental age concept. Thurstone argued
that ‘‘the mental age concept is a failure in that it leads to ambiguities
and inconsistencies’’ (p. 268). To Thurstone, mental age may be defined in
two different ways. The mental age of an individual may be defined as the
chronological age for which the test performance of the individual is
average. The mental age of an individual may also be defined as the average
chronological age of people who recorded the same test performance as the
individual. To Thurstone, these two definitions do not engender the same
numerical scores. In addition, if one accepts the first definition, one
faces the problem that there may be many chronological ages for which a
test performance is average. For example, a 16-year-old adolescent who
provides a typical test performance for 16-year-old adolescents could be
viewed as having a mental age of any score from an adolescent mental age
score of 16 to an adult mental age score of 40. The average mental test
performances of older adolescents and adults tend to be very similar.
Thurstone did not support the continued use of mental age or IQ as a
measure of intelligence. However, he did support the use of percentiles for
same-age peers in designating personal mental abilities. For example, if the
test performance of a 12-year-old child receives a score that is equal to
the score of the median test performance among 12-year-olds, then that
12-year-old child may be viewed as receiving a percentile of 50 (i.e., the
test performance of the 12-year-old child is equal to or greater than 50%
of the test performances of all of the 12-year-old children who were
tested).
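Thurstone’s percentile suggestion can be illustrated with a short Python sketch; the peer scores are hypothetical, and the tie-handling convention (counting peers scoring at or below the child) simply follows the example above.

def percentile_rank(score: float, same_age_scores: list[float]) -> float:
    # Percentage of same-age peers whose scores the given score equals or exceeds.
    at_or_below = sum(1 for s in same_age_scores if s <= score)
    return 100 * at_or_below / len(same_age_scores)

peers = [38, 41, 44, 46, 48, 50, 52, 55, 58, 62]  # hypothetical scores of 12-year-olds
print(percentile_rank(49, peers))  # 50.0: the score is at or above half of the peer group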
Despite the
trenchant criticism of the mental age concept by Thurstone and the
recognized limitations of mental age scores, noted commentators on
intelligence such as Sattler and Lloyd Humphreys extolled the merits of the
mental age score as an informative measure of mental ability. Both Sattler
and Humphreys contended that the mental age score provides useful
information about the mental capabilities of an individual. The mental age
score provides information about the size and the level of maturity of the
mental capabilities of an individual. The IQ score, whether the ratio IQ or
the deviation IQ, provides no such information. Both Sattler and Humphreys
contended that mental age will likely continue to be a popular and useful
measure of mental ability. However, the suggestion by Thurstone that
percentiles among same-age peers be used to index mental abilities
continues to be worthy of further consideration. Only time will tell
whether the percentile or some other index will replace mental age as a
popular index of mental ability.
Multiple Intelligences
In 1983,
Howard Gardner introduced his Theory of Multiple Intelligences in a seminal
book, Frames of Mind. Based on his work as professor in the Harvard Graduate
School of Education, his work as a psychologist researching brain injuries,
and his long interest and involvement in the arts, he suggested that
intelligence is not a single attribute that can be measured and given a
number. He pointed out that I.Q. tests measure primarily verbal,
logical-mathematical, and some spatial intelligence. Believing that there
are many other kinds of intelligence that are important aspects of human
capabilities, he proposed that they also include visual/spatial,
bodily/kinesthetic, musical, interpersonal, and intrapersonal
intelligences. More recently he added naturalist intelligence to this list
and suggested that there may be other possibilities including spiritual and
existential.
In 1984, New
Horizons for Learning invited Dr. Gardner to present his theory to the
world of education at a conference we designed for the Tarrytown
Conference Center in New York. Subsequently, all of NHFL’s conferences were
designed around the Theory of Multiple Intelligences, and Dr. Gardner has continued
to write a number of books expanding on the topic. At the present time,
educators throughout the world are finding effective ways to implement this
theory as they seek to help students identify and develop their strengths,
and in the process
discover new,
more effective ways of learning.
Learning Styles
Learning
styles are the diverse ways in which people take in, process, and
understand information. Educational technologies increase an instructor’s
ability to design and implement teaching strategies that address a variety
of learning styles. Researchers often distinguish between visual, auditory,
and tactile-kinesthetic learners. Visual learners learn best by seeing and
respond well to illustrations and diagrams; students considered auditory
learners prefer listening and favor lectures and discussion, while
tactile-kinesthetic learners, stimulated by movement and touch, thrive in
active exploration of a topic or situation (Felder 1993). Web-based
technologies facilitate the use of multimedia; they help move learning
beyond a primarily text-based and linear arena into the cyclical world of
sights, sounds, creativity, and interactivity. Computer-mediated
communications tools such as e-mail, discussion boards, and virtual chat
provide opportunities for interaction, collaboration, and discussion both
inside and outside of the classroom.
Teachers,
freed from the constraints of time and place, use technology to develop and
deliver individualized instruction to a variety of learners (Kahn 1997). A
number of online resources are available to help students and instructors
identify their preferred learning and teaching styles; these include
inventories, assessments, and questionnaires.
A more complex
theory concerning the diverse ways people learn is Howard Gardner’s theory
of multiple intelligences (MI). Gardner distinguishes between learning
styles and MI, suggesting “an intelligence entails the ability to solve
problems or fashion products that are of consequence in a particular
cultural setting or community” (Gardner 1993, 15). He approaches MI from a
biological perspective, believing each person has a different intellectual
composition made up of the following intelligences: verbal-linguistic
(speaking, writing, and reading), mathematical-logical (reasoning skills),
musical, visual-spatial, bodily-kinesthetic, interpersonal, intrapersonal,
naturalist, and existential (Gardner 2000). Humans possess all the
intelligences in varying amounts and may utilize each one separately or
jointly depending upon the learning situation (Gardner 1993).
Gardner (1993)
recommends designing instruction and assessment that address the wide range
of intellect present in the classroom. Often traditional instruction is
geared toward verbal-linguistic and mathematical-logical intelligence, with
instructors and designers failing to take into account the presence of
other intelligences. Educational technology provides the platform upon
which numerous instructional approaches can be developed and delivered in a
timely and cost-effective manner. Table 1 identifies the attributes of each
intelligence and provides examples of appropriate online teaching
strategies to be used in an educational setting.
Examples of
evaluation that remain sensitive to individual differences include
portfolio development, journaling, and other types of reflective assessment
(Gardner 1993). Personality inventories and temperament sorters provide
another dimension to the discussion on learning styles. The most widely
used personality type indicator is the Myers-Briggs Type Indicator (MBTI),
developed during World War II. A variety of academic disciplines and
professional fields rely on results of the MBTI to provide direction on the
development of collaborative learning and group activities. The Keirsey
temperament sorter is another popular model of learning and has categories
that correspond to the four pairs of MBTI preferences (Fairhurst and
Fairhurst 1995).
Table 1. Attributes of each intelligence:
Verbal-linguistic: Preference for reading, writing, and speaking
Mathematical-logical: Aptitude for numbers and reasoning skills
Musical: Ability to produce and appreciate pitch and rhythm; learns well through song
Visual-spatial: Responds to visual and spatial stimulation; learners enjoy charts, maps, and puzzles
Bodily-kinesthetic: Good sense of balance and hand-eye coordination; handles objects skillfully
Interpersonal: Ability to detect and respond to the moods and motivations of others; tries to see things from another’s point of view
Intrapersonal: Uses self-reflection to remain aware of one’s inner feelings
Naturalist: Enjoyment of the outdoors; ability to detect subtle differences in the natural environment
Existential: Capacity to handle profound questions about existence
Empathy
In the last
two decades, empathy and related emotional reactions have received
increasing attention from social and developmental psychologists. This is
probably because of the strong theoretical link between empathy (and
related constructs such as sympathy) and both positive social behavior and
social competence. The term empathy has been defined in many ways in the
psychological literature. Although there is still disagreement regarding
its definition, many social and developmental psychologists currently
differentiate between various vicarious emotional responses to others’
emotions or state – which are generally viewed as empathy or related to
empathy – and cognitive and affective perspective taking. Perspective
taking involves the cognitive comprehension of another’s internal
psychological processes such as thoughts and feelings. Whereas perspective
taking often may result in empathy and related emotional responses (Batson,
1991), it is not the same as feeling something as a result of exposure to
another’s emotions or condition.
Many theorists
and researchers now use the term empathy to mean feeling an emotional
response consistent with the emotions or situation of another. Some also
use it to refer to related other-oriented reactions such as sympathy and
compassion (Batson, 1991). However, it is useful to differentiate among
empathy, sympathy, and personal distress. Specifically, empathy is defined
as an emotional reaction to another’s emotional state or condition that is
consistent with the other’s state or condition (e.g., feeling sad when
viewing a sad person). Sympathy, which frequently may stem from empathy
(Eisenberg & Fabes, 1990), is defined as a vicarious emotional reaction
based on the apprehension of another’s emotional state or condition, which
involves feelings of sorrow, compassion, or concern for the other (Batson,
1991, labels our definition of sympathy as empathy). Conceptually, sympathy
involves an other-orientation whereas empathy does not.
Another
vicariously induced emotional reaction that is frequently confused with
empathy and sympathy is personal distress (Batson, 1991). Personal distress
is an aversive vicariously induced emotional reaction such as anxiety or
worry which is coupled with self-oriented, egoistic concerns. Batson (1991)
has argued that experiencing personal distress leads to the motive of
alleviating one’s own distress. Much of the recent
research on empathy and sympathy has concerned a few topics: (1) gender
differences in empathy; (2) the relation of empathy and sympathy to prosocial
behavior (voluntary behavior intended to benefit another); (3) whether
empathy or sympathy is associated with altruistic motive; (4) the relation
of empathy to aggression; and (5) the development and socialization of
empathy and related vicarious emotions.
Each of these topics is now briefly reviewed.
Gender Differences in Empathy and Related Responses
In reviews of gender differences in empathy, Eisenberg and her
colleagues (Eisenberg, Fabes, Schaller, & Miller, 1989) found that
gender differences in empathy and related vicarious emotional responses
varied as a function of the method of assessing empathy. There were large
differences favoring females for self-report measures of empathy,
especially questionnaire indices. However, no gender differences were found
when the measure of empathy was either physiological or unobtrusive
observations of nonverbal behavior. Eisenberg has suggested that this
pattern of results was due to differences among measures in the degree to
which the intent of the measure was obvious and respondents could control
their responses. Gender differences were greatest when demand
characteristics were high (i.e., it was clear what was being assessed) and
respondents had conscious control over their responses (i.e., self-report
indices were used). In contrast, gender differences were virtually
nonexistent when demand characteristics were subtle and respondents were
unlikely to exercise much conscious control over their responding (i.e.,
physiological responses). When gender stereotypes are activated and people
can easily control their responses, they may try to project a socially
desirable image to others or to themselves. In recent work investigators
have attempted to differentiate between sympathy and personal distress
using physiological and facial reactions, as well as self reports. They
generally have found modest self-reported gender differences in sympathy and
personal distress in reaction to empathy-inducing stimuli (females tend to
report more), occasional differences in facial reactions (generally
favoring females), and virtually no gender differences in heart rate
findings. Findings for skin conductance are mixed. Overall the pattern of
findings suggests that females are slightly more likely than males to
evidence both sympathy and personal distress, but that the differences are
quite weak (except for questionnaire measures) and dependent on method of
measurement and context. Whether these slight gender differences are due to
biological factors or socialization (or both) is not clear, although
socialization clearly influences empathic responding (e.g., Eisenberg, Fabes, Carlo, & Karbon, 1992).
Egoism
Psychological
egoism (sometimes called descriptive egoism) claims that every individual
does, as a matter of fact, always pursue his or her own interests. In other
words, it claims that people never act altruistically for the good of
others or for an ideal. Since psychological egoism claims to state what is
the case, it is a descriptive theory and so is very different from a
normative theory such as ethical egoism, which purports to say how people
ought to act. Psychological egoism seems to rest on either confusions or
false claims. If self-interest is interpreted in a narrow or selfish sense,
then psychological egoism is simply false. There are clearly many generous
people who often sacrifice their own interests, including their money and
time, to help others. Indeed, most of us are generous on some occasions.
Some defenders of psychological egoism admit this fact but claim that it is
irrelevant because even a person who is generous is acting on his or her
own desire to be generous and, hence, is really being self-interested. The
problem with this defense of psychological egoism is that it reduces
psychological egoism to a logical necessity; the motive for any action must
be that agent’s motive—this is logically necessary, for obviously it cannot
be someone else’s motive. Other psychological egoists argue that what
appear to be generous motives are always a front for some hidden
self-interested motive. For example, Mother Teresa was really, they claim,
motivated by a desire for fame or respect or a desire to get into heaven.
The problem with this form of psychological egoism is that there is no
reason to believe it is true. It is mere speculation and must always remain
so since we can never have access to a person’s “genuine’’ motives. It may
be believed mostly by people who are generalizing from their own ungenerous
character.
A final form
of psychological egoism rests on a confusion regarding the nature of
desires and motives. Some psychological egoists argue that whatever selfish
or generous desire motivates us, what we really want is the pleasure of
satisfying our desire; thus, all human motivation is really self-interested.
This misunderstanding was laid to rest in the 18th century by Bishop Joseph
Butler and others, who pointed out that a person needs to have generous
desires in the first place in order to get any pleasure from satisfying
them. They also pointed out that supposing we have a second-order desire to
fulfill our desires is redundant and involves an infinite regress.
Abraham Maslow
Maslow was
born in Brooklyn, New York, on April 1, 1908, and died from a heart attack
in Menlo Park, California, on June 8, 1970. For much of his professional
career, he was a faculty member at Brooklyn College and Brandeis
University. At Brandeis, he served as chairman of the Department of
Psychology; moreover, he was president of the American Psychological
Association from 1967 to 1968. Maslow first published his theory of basic
needs in 1943. Other discussions of Maslow’s theory or hierarchy of basic
needs can be found in his Motivation and Personality and his Toward a
Psychology of Being.
The year after
Maslow’s death, his widow, Bertha G. Maslow, in consultation with some of
Maslow’s colleagues, published a collection of his articles and papers in
the book Farther Reaches of Human Nature. This book also contains
discussion on his hierarchy of basic needs; furthermore, it includes a
comprehensive bibliography of Maslow’s publications and important
unpublished papers. As a psychologist, Maslow’s most significant
contributions were to the fields of humanistic psychology and transpersonal
psychology, wherein many authorities recognized him as a leading pioneer,
if not a founder, of these movements or forces in psychology.
In addition,
Maslow’s hierarchy of needs has provided implications and applications for
education, business management, and religion. It is a psychological theory
with multidisciplinary implications and applications across contexts or
settings. Although Maslow is primarily known for his writings on basic
needs and self-actualization, his books, articles, lectures, and papers
encompass a number of concepts of humanistic and transpersonal psychology.
Most of these concepts are related to his theory of basic needs and
self-actualization in some way and include topics such as peak experiences,
human aggression and destructiveness, human values, growth, transcendence,
humanistic education, creativity, religion, and holistic health.
Nevertheless, from the broader perspective, his work’s theoretical focus is
in the areas of human motivation and healthy personality, and his greatest
contribution is probably to the development of positive psychology,
humanistic psychology, and transpersonal psychology, or what is generally
referred to as the third force in psychology.
Emotional Intelligence
The phrase
emotional intelligence was coined by Yale psychologist Peter Salovey and
the University of New Hampshire’s John Mayer five years ago to describe
qualities such as understanding one’s own feelings, empathy for the
feelings of others and ‘the regulation of emotion in a way that enhances
living’. Their notion is about to bound into American conversation, handily
shortened to EQ, thanks to a new book, Emotional Intelligence (Bantam) by
Daniel Goleman. This New York Times science writer, who has a PhD in
psychology from Harvard and a gift for making even the chewiest scientific
theories digestible to lay readers, has brought together a decade’s worth
of behavioral research into how the mind processes feelings. His goal, he
announces on the cover, is to redefine what it means to be smart. His
thesis: when it comes to predicting a person’s success, brain power as
measured by IQ and standardized achievement tests may actually matter less
than the qualities of mind once thought of as ‘character’, before the word
began to sound quaint in the US.
Goleman is
looking for antidotes to restore ‘civility to our streets and caring to our
communal life’. He sees practical applications everywhere in America for
how companies should decide whom to hire, how couples can increase the odds
that their marriage will last, how parents should raise their children and
how schools should teach them. When street gangs become substitutes for
families, when school-yard insults end in stabbings, when more than half of
marriages end in divorce, when the majority of the children murdered in the
U.S. are killed by parents and step-parents – many of whom say they were
trying to discipline the child for behaviour such as blocking the TV or
crying too much – it suggests a need for remedial emotional education.
While children are still young, Goleman argues, there is a ‘neurological
window of opportunity’ since the brain’s prefrontal circuitry, which
regulates how we act on what we feel, probably does not mature until
mid-adolescence.
EQ is not the
opposite of IQ. Some people are blessed with a lot of both, some with
little of either. What researchers have been trying to understand is how
they complement each other; how one’s ability to handle stress, for
instance affects the ability to concentrate and put intelligence to use.
Among the ingredients for success, researchers now generally agree that IQ
counts for only 20%; the rest depends on everything from social class to
luck to the neural pathways that have developed in the brain over millions
of years of human evolution.
Emotional life
grows out of an area of the brain called the limbic system, specifically
the amygdala, where primitive emotions such as fear, anger, disgust and
delight originate. Millions of years ago, the neocortex was added, enabling
humans to plan, learn and remember. Lust grows from the limbic system;
love, from the neocortex. Animals such as reptiles, which have no
neocortex, cannot experience anything like maternal love. This is why baby
snakes have to hide to avoid being eaten by their parents.
Humans, with
their capacity for love, will protect their offspring, allowing the brains
of the young time to develop. The more connections there are between the
limbic system and the neocortex, the more emotional responses are possible.
If emotional intelligence has a cornerstone on which most other emotional
skills depend, it is a sense of self-awareness, of being smart about what
we feel. A person whose day starts badly at home may be grouchy all day at
work without quite knowing why. Once an emotional response comes into
awareness – or, physiologically, is processed through the neocortex – the
chances of handling it appropriately improve. Scientists refer to
‘metamood’, the ability to pull back and recognize that what I’m feeling is
anger – or sorrow, or shame.
Ethical Naturalism
Ethical
naturalism is the view that ethical claims are either true or false and
that their truth or falsity is determined by reference to the external
world, either facts about human nature or facts about the physical world
beyond humans. Ethical naturalism contrasts with ethical nonnaturalism,
which is the view that ethical claims are either true or false but their
truth or falsity is not determined by facts about the natural, physical
world. There are two main versions of ethical naturalism. The first can be
called virtue-based naturalism. According to standard versions of this
view, the question of which acts are right and which are wrong for a person
to perform can be answered by appealing to claims about which acts would
promote and which would undermine that person’s living a life that is good
for human beings to live. This is a natural approach to ethics as it
purports to explain when an act is right or wrong in a fully natural way,
without referring to any nonnatural source of moral value. This
virtue-based naturalism is based on the view that there is a distinctive
way of living that human beings are best suited to pursuing and that if
they were to pursue this, they would flourish. The primary objection to
such virtue-based naturalism is that there is no such distinctively human
life, and so it is not possible to determine if an act is right or wrong in
terms of whether it is in accord with such a life or not. It is also often
charged that this approach to naturalism faces an epistemological
difficulty: that even if there was a distinctively human life that could
ground claims about the rightness or wrongness of actions in this way, we
would not know what form it would take. However, even if this last
objection is correct, that we cannot have this access to the rightness or
wrongness of actions, it does not show that this naturalistic account of
what makes an action right or wrong is incorrect. It just shows that we
cannot know when an action is right or wrong.
The second
version of ethical naturalism, which can be termed metaethical naturalism,
is the view that moral philosophy is not fundamentally distinct from the
natural sciences. This is the version of ethical naturalism that is most
often understood to be at issue in discussions of the “naturalistic”
approach to ethics. On this approach to naturalism, moral value—that is,
roughly, the rightness or wrongness of an action—should be understood as
being defined in terms of (or constituted by, or supervening on) natural
facts and properties. For example, John Stuart Mill’s utilitarian approach
to ethics was a naturalistic approach of this sort. For Mill, an action was
morally right insofar as it tended to promote happiness and wrong insofar
as it failed to do so. Since for Mill happiness was defined in terms of
pleasure and the absence of pain, which are natural properties, the
rightness or wrongness of an action can be explained in terms of natural
properties. Although not all metaethical naturalists accept Mill’s account
of what explains the rightness or wrongness of actions, they all share his
belief that moral values (such as rightness and wrongness) can be
understood in terms of natural facts about the physical world. For such
naturalists, moral claims should be understood in terms of features of the
natural world that are amenable to scientific analysis. This does not mean
that moral philosophy should become simply another branch of science.
Rather, it simply means that there are likely to be regular or lawlike
relationships between physical properties and moral properties. Moral
claims are thus claims about natural facts about the world. Metaethical
naturalism is thus a type of moral realism, the view that moral claims are
not merely expressive statements but are literally true or false. Thus,
when people say, “Price-gouging is morally wrong,” they are not merely
expressing their personal view concerning price-gouging. Rather, they are
stating that they believe that it is a fact that price-gouging is morally
wrong—and so, like other claims about facts, this moral claim (and all
others) is either right or wrong. Like virtue-based naturalism, scientific
metaethical naturalism faces some serious objections. Some object that this
version of naturalism is untenable because it is not clear how to derive
ethical claims from descriptions of reality. But, as was noted above with
respect to the epistemological objection to the virtue-based account of
naturalism, this doesn’t show that this naturalistic approach to ethics is
mistaken. It just shows that we cannot know when an act is right or wrong.
A more famous objection to metaethical naturalism was offered by G. E.
Moore. Moore claimed that naturalists were guilty of the “naturalistic
fallacy.” This fallacy was to draw normative conclusions from descriptive
premises. Thus, since naturalists infer from the fact that an action has a
certain natural property (e.g., it maximizes pleasure) that it has a
certain moral, normative property (e.g., it is right and should be
performed), they are, according to Moore, guilty of this fallacy.
Naturalists respond to this objection by noting that they do not need to
rely on only descriptive premises in their inferences from natural
properties to moral properties. They could insert into such inferences a
premise such as “Whatever act has natural property X is a right act.” With
this premise in place, the naturalists’ inferences are not fallacious.
A similar
objection to naturalism was offered by Moore in his “open question
argument.” Moore argued that any naturalistic account of a moral property
would face the difficulty of explaining how it is that a person who
understood both the naturalistic account and the moral property could still
question whether the moral property was present when the natural one was.
For example, a person who understood what it was to maximize happiness and
understood what it meant for an act to be right could still wonder whether
an act that maximized happiness was a right act. If, however, the rightness
of an act was instantiated by that act’s maximization of happiness, this
question would not be open in this way, just as the question “Is this
unmarried woman a spinster?” is not open. In response to this objection,
metaethical naturalists note that the meaning of moral terms might not be
as obvious to people who seem to understand them as Moore assumes. Thus, a
person might be able to use moral terms correctly but still be ignorant of
what criteria must be met for an act to be a right act. Such persons would
be competent users of the moral terms they deploy but would lack the
understanding that Moore assumes they have. If ethical naturalism is true,
this will have important implications for business ethics. If it is true
that ethical claims are either true or false and that their truth or
falsity is determined by reference to the external world, then there will
be objective ethical truths that are independent of the beliefs of humans.
If this is so, then it will not be true that ethical practices vary across
cultures. For example, it will not be true that bribery is ethically
acceptable in some countries, whereas it is not in others. Instead, there
will just be one set of ethical practices that applies universally.
Aura
An aura is an
energy field or life force that supposedly surrounds every living thing and
natural object, including rocks. People who believe in auras say that in
living things this energy field changes in accordance with its health, and
in human beings, it changes in accordance with emotions, feelings, and
thoughts as well. In addition, each human being is said to have a unique
aura; when two auras come into contact when two people meet, the auras
affect one another, with one taking some energy from the other and vice
versa. This phenomenon, believers say, is why one person sometimes feels
“drained” or tired after talking to another. Some people claim to be able
to see auras, usually while in a relaxed or meditative state. They report
that an aura is a colored outline or series of outlines, a colored band or
series of colored bands of varying widths, or a halo of one or more colors,
beginning at the surface of an object or being and emanating outward.
Believers in auras also sometimes say that each aura has seven layers, with
the layer at the skin much denser than each successive layer outward, and
that each layer can be associated with one of seven energy portals that
connect the mind to certain parts of the body. These portals, known as
chakras, are the reason, believers say, that the color, intensity, and/or
outline of an aura can indicate the health of various body parts.
Believers
disagree on how various health problems correspond to the colors of an
aura, but they generally contend that a vibrant aura with a distinct
outline means that a person is healthy, whereas a weak, blurry aura that
does not completely surround the subject is a sign of either mental or
physical illness. Some believers, for example, say red is a warning color,
suggesting that some part of the body is developing a serious health
problem, and that red indicates pain and/or swelling as well as anger and
aggression. Green, on the other hand, is the color of calm emotions and can
indicate that a person’s body is healing or healthy. Some believers say
that orange auras also indicate health, but others say that this depends on
the shade of orange, because brownish orange auras are a sign of a severe
illness or emotional imbalance. Most believers agree, however, that indigo
indicates a person with psychic abilities and that black auras usually
indicate that a person has a terminal illness and/or is so severely
depressed that he or she is suicidal.
Among those
who believe that auras can be indicative of a person’s health is
parapsychologist Thelma Moss, who argues that the phenomenon can be used to
diagnose specific illnesses. While working at the University of California,
Los Angeles, Neuropsychiatric Institute, she wrote the first books to
seriously examine the medical aspects of auras, The Body Electric (1979)
and The Probability of the Impossible (1983). In these works, Moss
advocates that Kirlian photography be used as a medical diagnostic tool.
With Kirlian photography, any object, whatever it is made of, is placed
against a photographic plate and subjected to a high-voltage electric field
or current; the result is a photographic image of the object, surrounded by
one or more radiant outlines of varying colors and widths. Supporters of
Kirlian photography as a diagnostic tool say that these glowing coronas of
light are auras, but skeptics say that they are a by-product of the
photographic process itself; in other words, the electrical charge, rather
than the object being photographed, is somehow producing the visual effect.
Skeptics similarly dismiss claims by people who say they can see auras with
the naked eye. Skeptics say these individuals are suffering from a
neurological or vision disorder that produces the colored rings or bands.
Indeed, physicians know that such disorders can create such false images.
Still, there is no sign that the individuals who claim to see auras suffer
from a physical or mental illness.
Sigmund Freud
(1856–1939) philosopher, psychologist
Sigmund Freud
was born in Freiberg, today in the Czech Republic, to Jewish parents, Jacob
Freud, a small-time textile merchant, and Amalia Freud. Although the family
was Jewish, Jacob Freud was not at all religious, and Sigmund Freud grew up
an avowed atheist. When he was four years old, the family moved to Vienna,
Austria, where Freud lived for most of his life. He was a brilliant
student, always at the head of his class; however, the options for Jewish
boys in Austria were limited by the government to medicine and law. As
Freud was interested in science, he entered the University of Vienna
medical school in 1873. After three years, he became deeply involved in
research, which delayed his M.D. until 1881. Independent research was not
financially feasible, however, so Freud established a private medical
practice, specializing in neurology. He became interested in the use of
hypnosis to treat hysteria and other mental illnesses, and with the help of
a grant, he went to France in 1885 to study under Jean-Martin Charcot, a
famous neurologist, known all over Europe for his studies of hysteria and
various uses of hypnosis. On his return to Vienna in 1886, Freud married
and opened a practice specializing in disorders of the nervous system and
the brain. He tried to use hypnosis to treat his patients but quickly
abandoned it, finding that he could produce better results by placing
patients in a relaxing environment and allowing them to speak freely. He
then analyzed whatever they said to identify the traumatic effects in the
past that caused their current suffering. The way his own self-analysis
contributed to the growth of his ideas during this period may be seen in
letters and drafts of papers sent to a colleague, Wilhelm Fliess.
After several
years of practice, Freud published The Interpretation of Dreams (1900), the
first major statement of his theories, in which he introduced the public to
the notion of the unconscious mind. He explains in the book that dreams, as
products of the unconscious mind, can reveal past psychological traumas
that, repressed from conscious awareness, underlie certain kinds of
neurotic disorders. In addition, he attempts to establish a provisional
matrix for interpreting and analyzing dreams in terms of their psychological
significance. In his second book, The Psychopathology of Everyday Life
(1901), Freud expands the idea of the unconscious mind by introducing the
concept of the dynamic unconscious. In this work, Freud theorizes that
everyday forgetfulness and accidental slips of the tongue (today commonly
called Freudian slips) reveal many meaningful things about the person’s
unconscious psychological state. The ideas outlined in these two works were
not taken seriously by most readers, which is not a surprise considering
that, at the time, most psychological disorders were treated as physical
illnesses, if treated at all.
Freud’s major
clinical discoveries, including his five major case histories, were
published in Three Essays on the Theory of Sexuality (1905). In this work,
he elaborates his theories about infantile sexuality, the meanings of the
id, the ego, and the superego, and the Oedipus complex (the inevitable but
tabooed incestuous attraction in families, and the associated fear of
castration and intrafamilial jealousy).
In 1902, Freud
was appointed full professor at the University of Vienna and developed a
large following. In 1906, he formed the Vienna Psychoanalytic Society, but
some political infighting resulted in division among members of the group
(Carl Jung, for instance, split from the group with bitter feelings). Freud
continued to work on his theories and, in 1909, presented them
internationally at a conference at Clark University in Massachusetts.
Freud’s name became a household word after the conference. In his later
period, in Beyond the Pleasure Principle (1920) and The Ego and the Id
(1923), he modified his structural model of the psychic apparatus. In
Inhibitions, Symptoms and Anxiety (1926), he applied psychoanalysis to
larger social problems. In 1923, Freud was diagnosed with cancer of the jaw
as a result of years of cigar smoking. In 1938, the Nazi party burned
Freud’s books. They also confiscated his passport, but the leading
intellectuals around the world voiced their protest, and he was allowed to
leave Austria. Freud died in England in 1939.
Freud made an
enormous contribution to the field of psychology: He established our basic
ideas about sexuality and the unconscious and also influenced, to some
extent, the way we read literary works by establishing premises for psychoanalytic criticism. His
own case studies are often interpreted for their literary merit.
Intelligence Quotient (IQ)
Intelligence
is a construct that has been proposed by psychologists to underlie much of
human behavior and is a significant
factor contributing to an individual’s ability to do some things more or
less well. Most would agree that some children are better at math or
language arts than others, or that some hockey players or musicians are
gifted in comparison to their peers. It might be argued that some
individuals are born that way, whereas others have the benefit of good
environments and learning opportunities that can build on their basic
abilities. The intelligence test, and resulting intelligence quotient or
IQ, is a means for assessing and measuring intelligence, with the results
often used to classify or select persons or predict such outcomes as school
achievement.
Both the
construct of intelligence and its measurement are not new, and both existed
well before the advent of psychological science. Historians have traced the
forerunner of current cognitive ability and achievement assessment to earlier than 2000 B.C. Although intelligence has been studied in a number
ways, from an early emphasis on sensory processes to the more current
attention given to brain-imaging techniques, the mainstay in the study and
assessment of intelligence has been the IQ test. Psychologists not only
assess intelligence but also study how intelligence is expressed; what
causes it; and how it contributes to understanding, explaining, predicting,
and even changing human behavior. Despite intelligence being a much studied
area of psychology, there is still considerable controversy and emotion
regarding the use of the IQ and intelligence tests and the results gleaned
from them in such contexts as schools and industry to describe both
individuals and groups. Given continued advances in the theories of
intelligence and cognitive assessment instruments, the issue appears to be
less with the constructs and the tests used to measure them, and more with
how this information is or can be used.
Theories of Intelligence
Psychology
joined the scientific community in the late 1800s, and since then, a number
of theories outlining human intelligence, accompanied by a huge body of
research, have emerged. The hallmark of science and scientific inquiry is
the creation of theories and the pursuit of empirical support for the
hypotheses that are generated by and from a particular theory. The current
theories of intelligence attempt to explain what it is, what causes it, and
what intelligence tells us about other human behaviors.
Although research has demonstrated that there is a considerable genetic component to intelligence, it is also recognized that intelligence is an acquired ability that reflects opportunity and experience, such as that provided by effective schooling and home environments. Studies showing the remarkable similarity in measured ability between twins, whether reared together or apart, provide much of the evidence for a genetic foundation to intelligence. However, intelligence appears to be polygenic rather than attributable to a single gene. Studies have also shown that animals raised in very restricted environments, in contrast to those raised in ‘rich’ environments, not only show considerable differences in, say, their capacity to solve problems but also show differences in their brain structures (e.g., number of neural connections). As well, research has shown how the effects of poverty and restricted educational opportunities can negatively influence human development, including intelligence.
Among the
environmental factors that are known to directly influence brain
functioning and thus intellectual development and expression are various
medical conditions, neurotoxins, drugs such as alcohol (certainly during
pregnancy, as observed in children diagnosed with fetal alcohol syndrome),
and chemical pollutants such as lead and mercury. Almost anything that
negatively affects the brain, such as head injury and oxygen deprivation,
will have small or large observed effects on intelligence and its
expression. Less obvious but just as important are such additional factors as motivation, self-concept, and anxiety, all of which can influence a person’s score on an IQ or intelligence test and his or her everyday functioning at work or school.
Culture also
affects the expression of intelligence. Although there is a universal
ability related to the capacity to acquire, store, retrieve, and use
information from everyday experiences as well as from direct teaching
(e.g., school), how this is expressed, the content of a person’s response
to a question, and the language used in providing an answer to a question
all reflect the interaction between the person’s genetic capacities and the
environmental opportunities for intelligence. In addition, arguments have
been made that what constitutes intelligence may vary across cultures and
that different ethnic groups may have differing, but equally intelligent,
reasoning strategies. On the other hand, the successful adaptation of contemporary
assessment instruments for use in a large number of countries suggests that
central abilities and capacities comprising intelligence may be shared
across cultures. It should also be pointed out that intelligence is a developmental construct. A 5-year-old child has a very different understanding of, say, cause–effect relationships or number concepts than does a 15-year-old in Grade 10, a 35-year-old with a university degree, or a 50-year-old working in a factory. Brain maturation strongly influences the qualitative description of intelligence. At the same time, it has been demonstrated that intelligence does change across the life span: barring dementia and other diseases underlying cognitive decline, crystallized intelligence (e.g., a person’s knowledge of words and language, and learned skills such as solving arithmetic problems) is more likely to remain stable, and may even continue to improve with age, than are abilities reflecting fluid intelligence and speed of information processing (which reflect neural efficiency).
Another debate
found in theoretical discussions and observed in models of intelligence is
centered on whether intelligence is a single characteristic or is composed of several, or even many, distinct abilities. These views can be traced back to the turn of the previous century, when psychologists such as Spearman argued that performance across specific but related tasks reflects an overarching general factor (essentially similar to the current full-scale IQ [FSIQ] score found on many tests). In contrast,
Thurstone proposed that intelligence was made up of a number of primary
mental abilities that could not be captured in a single summary score or an
FSIQ.
Today’s tests
and models continue to reflect these divergent viewpoints. For example,
psychologists such as Guilford have proposed that intelligence may have 120
or more facets, while Wechsler has argued for the relevance of the FSIQ
(but also the importance of looking at both verbal and nonverbal
performance). Other current models, such as Sternberg’s, describe intelligence along the lines of practical, analytical, and creative abilities, whereas Gardner suggests that there are likely eight or nine core
kinds of intelligence reflecting, for example, interpersonal intelligence
(required for effective social interaction and communication), kinesthetic
intelligence (observed in athletes who excel in their sport), musical
ability (found in performers and composers), and logical-mathematical
intelligence (reflecting the capacity to reason logically in mathematics
and science such as physics). Other views, drawing from the work of Piaget,
focus more on how intelligence develops (in stages) and how it can be encouraged
through direct instruction and supportive learning environments (e.g.,
instrumental enrichment). Thus, there is considerable diversity in how intelligence is defined, in what key factors are thought to affect its development and expression, and in how it is best measured. Although this may be perplexing to some, it does show how complex intelligence is and, even
more so, how very complex human behavior is. At the same time, a great deal
is known about intelligence and what it tells us about human behavior. For
example, intelligence tests, yielding a measure of general mental ability,
are one of the best predictors of student achievement and success in
elementary schools. On the other hand, and as expected, intelligence tests
have been found to be more limited in predicting achievement among
intellectually homogeneous populations. For example, university students
generally possess average or above-average levels of intelligence such that
divergent performance in this group appears to be more highly related to
specific cognitive competencies (e.g., high aptitude in math) and personal
attributes (e.g., motivation, study skills). Intelligence is additionally considered a key factor in understanding the human capacity to manage stress and to develop resiliency, psychological well-being, and even longevity.
History of Intelligence
Testing
The very
earliest tests of intelligence were not based on any particular scientific
views and in many instances simply showed the wide or narrow range of
performance on such tasks as strength of grip or pitch discrimination. More to the point, these tests told us little about the other human characteristics that, by expectation, they should have. If intelligence is an underlying
capacity that influences how well a person does in school, or a person’s
accuracy at solving arithmetic problems, or the speed at which he or she
can perform other mental tasks, then the tests should be correlated with
those behaviors and be able to predict how well a person may perform on
those tasks requiring intelligence. In contrast to the earliest tests of
intelligence, more recent intelligence tests have resulted from extensive
research efforts, while still garnering a great deal of misunderstanding
from the general public. The first successful intelligence tests were developed
by Binet and Simon at the turn of the last century and used in the schools
of Paris, France, to help identify and classify schoolchildren according to
their ability to learn and whether they would benefit from regular or
special school programs. A short time later, these tests were introduced in
the United States. Together with the Army Alpha and Beta intelligence tests used to screen military recruits during World War I, they fostered a growing opinion that intelligence tests had considerable value for purposes ranging from personnel selection to identifying children who were intellectually gifted or retarded. These early landmarks in the history of testing laid
the foundation for the advancement and proliferation of subsequent
intelligence tests. For example, the first intelligence test created by
Wechsler, in 1939, has evolved into several recently published tests for
assessing intelligence from preschool years to age 89, and these tests have
now been adapted for use in a large number of countries. The number of
tests available to psychologists for assessing cognitive abilities has
grown considerably over the past 60 to 70 years.
Current Intelligence
Tests
Today’s
intelligence tests vary from very brief measures that assess only a limited
or narrow part of the broader intelligence framework (e.g., Raven’s
Matrices) to large comprehensive batteries that tap many different aspects
of intelligence ranging from verbal comprehension and spatial reasoning
ability to memory and processing speed (e.g., Woodcock-Johnson Cognitive).
The large number of tests available also includes tests specific to various
age ranges, both group and individually administered tests, brief and
comprehensive batteries, and modified tests for use with, for example,
hearing-impaired clients, or clients who are nonverbal or who are less
proficient in English. The majority of intelligence tests must be administered by a well-trained psychologist, who presents subtests requiring the client to complete a range of tasks. Two broad types of tasks are used on intelligence
tests—verbal and nonverbal. Verbal tasks generally entail a verbally
presented prompt or question and require an oral response such as defining
words (What is a hammer?), responding to general information questions
(What is the distance between the earth and the moon?), or identifying
similarities between two words (How are convention and meeting alike?).
Nonverbal tasks usually involve visual stimuli or materials and/or require
a psychomotor response such as copying geometric patterns using blocks,
identifying important parts that are missing from both common and uncommon
objects, or identifying patterns within a visual array. Although
instructions and prompts to nonverbal tasks are sometimes given orally,
verbal requirements are minimized within some tests through the use of
gestures, modeling, or pictorial directions.
Both verbal
and nonverbal tasks can be employed to measure a wide range of cognitive
abilities and capabilities. For example, short-term memory may be assessed
through a task requiring a student to repeat a string of presented numbers
(verbal task) or to touch blocks in a previously observed order (nonverbal
task). Regardless of the types of questions used, the psychologist is
careful to ensure that administration and nonintellective factors do not
confound the information gleaned from these tests. For example, it is necessary to make accommodations for persons with visual, auditory, or motor problems, lest these interfere with their performance on tests that should be specifically tapping intelligence. The raw scores
obtained on intelligence tests are given meaning by comparing them to the
performance of large and appropriate reference groups. These group
performance indicators, called norms, are based on extensive standardization studies whereby the test is administered to large numbers of examinees, both to ensure that the test is working well and to build a comparison group that is similar on key characteristics such as age and that reflects the
composition of the larger community (ethnicity, sex, socioeconomic status,
etc.). An individual’s raw scores on the parts and the whole test are then
converted to standard scores through the use of tables; these standard
scores are often referred to as IQ scores, and in the case of, say, the
Wechsler Intelligence Scale for Children–Fourth Edition (WISC-IV), four
index scores assessing verbal comprehension, perceptual reasoning, working
memory, and processing speed, together with a full-scale IQ, are reported
with the average score set at 100. Furthermore, based on the use of normal curve proportions, scores of 130 or above would identify intellectually gifted persons; such scores are obtained by roughly 2% of the population. Scores
of 115 suggest high average ability that exceeds the scores obtained by
about 84% of the population. In contrast, scores of 85 are seen as low
average; a person with this score would have scored higher than 16% of the
population, but some 84% of the population scored higher than him or her.
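These percentages follow directly from the properties of the normal curve. The brief Python sketch below is purely illustrative (it does not come from any test manual) and assumes the conventional scaling of mean 100 and standard deviation 15 used by Wechsler-type scales:

from statistics import NormalDist

# Deviation IQ scores are conventionally scaled to a mean of 100 and a
# standard deviation of 15 (an assumption of this sketch, not a universal rule).
iq_distribution = NormalDist(mu=100, sigma=15)

for score in (85, 100, 115, 130):
    # cdf(score) is the proportion of the population scoring at or below this IQ
    percentile = iq_distribution.cdf(score) * 100
    print(f"IQ {score} equals or exceeds about {percentile:.1f}% of the population")

# Approximate output: IQ 85 -> 15.9%, IQ 100 -> 50.0%, IQ 115 -> 84.1%, IQ 130 -> 97.7%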
For an intelligence test to be truly useful, it must demonstrate sound
psychometric properties that include reliability and validity. For a test
to be reliable, it should have a minimum of measurement error, thereby
measuring with consistency and precision. Thus, an FSIQ score will have some error associated with it and should never be taken as an exact measure, but rather as one that reflects a range wherein the person’s true score is likely to lie. Validity means that the test in fact measures what it is
intended to measure. If the test is supposed to measure acquired knowledge
or crystallized intelligence, then it should do just that and not do
something else. Although it can be said that current intelligence tests are
among the very best measures used by psychologists, certain caveats still
apply. For example, no one test tells everything about a person’s full
intellectual ability, because other factors, such as depression, low
motivation, test anxiety, or cultural factors, can influence intelligence
test scores.
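As a concrete illustration of why an FSIQ is best read as a band rather than a single point, the following sketch applies the classical test theory formula for the standard error of measurement (the test’s standard deviation multiplied by the square root of 1 minus its reliability); the reliability coefficient and the observed score shown are hypothetical values chosen only for illustration:

import math

def score_band(observed_iq, reliability, sd=15.0, z=1.96):
    """Approximate 95% confidence band around an observed IQ score (classical test theory)."""
    sem = sd * math.sqrt(1.0 - reliability)  # standard error of measurement
    return observed_iq - z * sem, observed_iq + z * sem

# Hypothetical example: observed FSIQ of 108 on a scale with reliability .95
low, high = score_band(observed_iq=108, reliability=0.95)
print(f"The true score most likely lies between {low:.0f} and {high:.0f}")  # roughly 101 and 115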
Current Uses of
Intelligence Tests
The use of IQ and other intelligence tests is a complex process that requires comprehensive understanding of, and training in, such areas as test principles (reliability, validity, test construction, norm groups, types of scores); human development; and test administration and interpretation. As such, certain state and provincial restrictions exist that limit who is permitted to administer the tests and interpret the results. In general, the use of intelligence tests is limited to psychologists and other individuals who have a minimum of graduate-level training in psychology and assessment.
Although most commonly used by school or clinical psychologists within
school and clinical settings, intelligence tests may also be used by
psychologists within other specializations (e.g., counseling, industrial
organization, research) and in such additional settings as community and
state agencies, workplaces, universities, and private practices. The purpose for administering an intelligence test may vary depending on the reason for referral, who is administering it, and the setting. A school psychologist may use the results of an intelligence
test to help decide which students should be selected for a gifted program,
whereas a neuropsychologist may use the results to assist with determining
the location and extent of a brain injury. In general, intelligence tests
provide information that can inform a wide range of diagnostic and
decision-making processes. Among the most common uses of intelligence tests
are to assist with diagnostic and eligibility decisions, intervention
planning, progress monitoring, and research into cognitive functioning.
Mental Age
Mental age is
a central concept in the study of intelligence measurement. Jerome Sattler
defined mental age as ‘the degree of general mental ability possessed by the average child of a chronological age corresponding to the MA score’ (p. 172). As an example, a child assessed with a mental age of 9 is viewed
as having the general mental ability of an average 9-year-old child.
From the perspective of intelligence measurement, each individual has two ages: a chronological age, which is the number of years that the individual has been alive, and a mental age, which is the chronological age of the group whose average test performance matches the individual’s test performance.
The mental age
and the chronological age of an individual need not be the same. For
example, if the mental age of an individual is greater than the
chronological age of the individual, then one can infer that the individual
has above-average intelligence or higher mental ability.
The mental age
and the chronological age for an individual are used to determine the ratio
IQ (intelligence quotient) of the individual. To compute the ratio IQ, one
divides the mental age of an individual by the chronological age of the
same individual and then multiplies that ratio by 100. For example, if a
child has a mental age of 12 and a chronological age of 10, then the ratio
IQ for that 10-year-old child is 120 (i.e., 12/10×100=120).
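Written out as a brief, purely illustrative Python sketch (the ages are made up):

def ratio_iq(mental_age, chronological_age):
    """Ratio IQ: mental age divided by chronological age, multiplied by 100."""
    return (mental_age / chronological_age) * 100

print(ratio_iq(mental_age=12, chronological_age=10))  # 120.0, matching the example above
print(ratio_iq(mental_age=9, chronological_age=10))   # 90.0, a mental age below the chronological age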
The ratio IQ
was the measure of intelligence used in the 1916 and 1937 versions of the
Stanford–Binet Intelligence Scale and on other tests of mental ability. The
deviation IQ replaced the ratio IQ as the measure of intelligence used in
subsequent measures of intelligence. The deviation IQ reflects the location
of the test performance of an individual in a distribution of the test
performances of other persons with the same chronological age as the individual, with the mean deviation IQ typically set at 100. For example, if an
individual has a test performance that is less than the mean test
performance for same-age peers, then the individual will have a deviation
IQ less than 100. Neither the dated measure of ratio IQ nor the more
contemporary measure of deviation IQ consistently provides concrete
information as to the reasoning skills of individuals.
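To make the contrast with the ratio IQ concrete, the sketch below locates an individual’s raw score within the distribution of same-age peers and rescales it to a mean of 100; the standard deviation of 15 and the norm-group statistics are assumptions made only for this illustration:

def deviation_iq(raw_score, age_group_mean, age_group_sd, scale_sd=15.0):
    """Deviation IQ: the raw score's position among same-age peers, rescaled to mean 100."""
    z = (raw_score - age_group_mean) / age_group_sd  # standing relative to same-age peers
    return 100.0 + scale_sd * z

# Hypothetical norm-group values: a raw score one standard deviation below
# the same-age mean corresponds to a deviation IQ of 85.
print(deviation_iq(raw_score=42, age_group_mean=50, age_group_sd=8))  # 85.0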
The mental age score may also be termed the age-equivalent score, according to Sattler. The
mental age score for an individual provides information as to what age
group is most closely associated with the individual from the perspective
of mental ability. As an example, a 12-year-old child with a mental age
score of 14 indicates that the 12-year-old child has a mental ability more
typical of 14-year-old children than of 12-year-old children.
Sattler noted
that mental age scores have certain limitations. First, differences in
mental age do not reflect the same differences in mental ability across the
age spectrum. For example, the difference in mental ability between a
mental age score of 5 and a mental age score of 2 tends to be greater than
the difference in mental ability between a mental age score of 15 and a
mental age score of 12. Second, the same mental age may reflect different
capabilities for different individuals. For example, two children both with
the mental age score of 12 may have answered different test items
correctly.
Louis
Thurstone was highly critical of the mental age concept. Thurstone argued that ‘the mental age concept is a failure in that it leads to ambiguities and inconsistencies’ (p. 268). To Thurstone, mental age may be defined in
two different ways. The mental age of an individual may be defined as the
chronological age for which the test performance of the individual is
average. The mental age of an individual may also be defined as the average
chronological age of people who recorded the same test performance as the
individual. To Thurstone, these two definitions do not engender the same
numerical scores. In addition, if one accepts the first definition, one
faces the problem that there may be many chronological ages for which a
test performance is average. For example, a 16-year-old adolescent who provides a typical test performance for 16-year-old adolescents could be viewed as having a mental age of anything from 16 to an adult mental age of 40, because the average mental test performances of older adolescents and adults tend to be very similar.
Thurstone did not support the continued use of mental age or IQ as a
measure of intelligence. However, he did support the use of percentiles among same-age peers in designating personal mental abilities. For example, if a 12-year-old child’s test performance equals the median test performance among 12-year-olds, then that child may be viewed as being at the 50th percentile (i.e., the child’s test performance is equal to or greater than 50% of the test performances of all of the 12-year-old children who were tested).
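Thurstone’s alternative can likewise be sketched in a few lines of Python; the peer scores listed are hypothetical raw scores used only to show the calculation:

def percentile_rank(score, same_age_scores):
    """Percentage of same-age test performances that the score equals or exceeds."""
    at_or_below = sum(1 for s in same_age_scores if s <= score)
    return 100.0 * at_or_below / len(same_age_scores)

peers_age_12 = [31, 35, 38, 40, 40, 42, 44, 45, 47, 52]  # hypothetical raw scores of 12-year-olds
print(percentile_rank(41, peers_age_12))  # 50.0: equal to or above half of the peer group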
Despite the
trenchant criticism of the mental age concept by Thurstone and the
recognized limitations of mental age scores, noted commentators on
intelligence such as Sattler and Lloyd Humphreys extolled the merits of the
mental age score as an informative measure of mental ability. Both Sattler and Humphreys contended that the mental age score provides useful information about the extent and the level of maturity of an individual’s mental capabilities; the IQ score, whether the ratio IQ or the deviation IQ, provides no such information. Both Sattler and Humphreys
contended that mental age will likely continue to be a popular and useful
measure of mental ability. However, the suggestion by Thurstone that
percentiles among same-age peers be used to index mental abilities
continues to be worthy of further consideration. Only time will tell
whether the percentile or some other index will replace mental age as a
popular index of mental ability.