Contemporary Linguistics and Cognitive Science

Noam Chomsky

Sobel S.P. The Cognitive Sciences: An Interdisciplinary Approach. –
London; Toronto: Mayfield Publishing Company, 2001. – pp. 159–167.

The impetus that set the field of linguistics on its current path came
from the publication of Chomsky’s Syntactic Structures (1957) and
Aspects of the Theory of Syntax (1965). These works ignited a revolution
in linguistics, placing it squarely back into the domain of the mind and
determining the direction it has followed ever since.

What Chomsky contributed initially was a shift of focus to the
(vast and largely unconscious) set of rules he hypothesized must exist
in the minds of speakers and hearers in order for them to produce and
understand their native language or languages. Like Pāṇini, he was
concerned with discovering, isolating, and pinpointing these rules, to
make their formulation precise and predictive. But, as a 20th-century
researcher, he was working within the contemporary framework of science.
Scientific effort requires abandoning vagueness in favor of focusing on
the observable specifics, which alone lead to productive hypotheses. But
unlike the behaviorists, Chomsky based his hypothesis on the assumption
of a capacity in the brain that functions without the conscious
awareness of the person in whom this functioning is taking place, and
which it is indeed possible and profitable to study. The data provided
by language permit us to infer what must be taking place as language is
produced. In the process, Chomsky proposed a method of formalizing the
rules of the components of language. In view of the impact on and
pervasiveness of this approach in linguistic research in the second half
of the 20th century, a brief introduction is in order.

The first component of language Chomsky addressed was the syntactic
component—the portion of one’s linguistic competence that handles the
arrangement of words into sentences. A simple sentence serves as an
example of what formal rules must contain if they are to be capable of
generating such a sentence:

The cat chased a mouse.

This sentence contains five separate words, some of which—the cat, a
mouse—“feel” as though, when taken together, they form a somewhat
larger unit. The words in each grouping must occur in this order: *cat
the and *mouse a are not permissible English combinations. (The asterisk
preceding each such formulation is, by convention, a sign that what
follows is not grammatical in the language.) It is also true that in
English one or the other of these combinations may come first and the
verb, in this case chased, must come between them. The following
ordering would also be fine for English, though it expresses a somewhat
unusual situation:

A mouse chased the cat.

Also perfectly good sentences of English are these two:

A cat chased the mouse.

The mouse chased a cat.

A rule that would specify that these four orderings are just those that
are permitted for this set of words would have to refer to the part of
speech each word represents. These sentences demonstrate that nouns may
occur both before verbs and after verbs and that articles, when present,
must be placed before the nouns they refer to. But the rules would also
make clear that not all sentences contain nouns that are preceded by an
article:

Babies cry

is a perfectly good English sentence, yet there is no the or a before
babies. Nor, for that matter, is there a noun after the verb. So the
rules would specify that a verb need not be followed by a noun.

The rules Chomsky formulated making all of this explicit are
written, in their most basic form for the simplest of sentences, as
follows: Letting S stand for the sentence, N for nouns, V for verbs, and
Art for articles, and an arrow, →, for the way in which S can be
expanded to include its elements,

S → (Art) N V (Art) (N).

S can be rewritten or expanded as (i.e., the sentence contains) an
article followed by a noun followed by a verb followed by an article
followed by a noun—in that order. Articles and the noun following the
verb are placed in parentheses to indicate that they may or may not be
present in the sentence. The first noun, which serves as the subject of
the sentence, must be present, as must the verb.
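
To see how the optional elements work, here is a minimal sketch in
Python (my own illustration, with an invented toy lexicon; nothing of
the kind appears in the text). It reads the rule as a pattern over
part-of-speech tags, so that The cat chased a mouse and Babies cry both
fit it, while a scrambled order does not.

import re

# A minimal sketch, not from the text: the rule S -> (Art) N V (Art) (N)
# read as a pattern over part-of-speech tags; parentheses mean "optional".
RULE = re.compile(r"^(Art )?N V( Art)?( N)?$")

# A toy lexicon invented for the example.
LEXICON = {"the": "Art", "a": "Art", "cat": "N", "mouse": "N",
           "babies": "N", "chased": "V", "cry": "V"}

def tags(sentence):
    """Replace each word by its part-of-speech tag."""
    return " ".join(LEXICON[w] for w in sentence.lower().split())

for s in ["The cat chased a mouse", "Babies cry", "Cat the chased mouse a"]:
    print(s, "->", bool(RULE.match(tags(s))))
# The first two tag sequences fit the rule; the scrambled one does not.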

A slightly more complicated sentence might contain another element:

The white cat chased a frightened mouse.

A brave mouse chased the small cat.

Now we must accommodate adjectives. In English, when an adjective is
associated with a noun, it occurs before the noun. Modifying our rule to
allow for this, we can write

S → (Art) (Adj) N V (Art) (Adj) (N).

But we know that a noun may have more than one adjective associated with
it. Therefore, we need a symbol to indicate that indefinitely many
adjectives may occur before a noun. To make this clear we place an
asterisk after Adj: Adj*. (A moment’s thought will suffice to convince
you that there can be only one article preceding a noun.) Our rule now
looks like this:

S → (Art) (Adj*) N V (Art) (Adj*) (N).

The sequence (Art) (Adj*) N that precedes the verb, and may also follow
it, behaves as a unit of its own; such a unit is called a noun phrase,
abbreviated NP. Using this abbreviation, the rule can be condensed to

S → NP V (NP).

Now we must write a rule that expands NP:

NP → (Art) (Adj*) N.

A sentence can be divided up intuitively much as a noun phrase can,
into components that seem to “go together.” The white cat forms one part
of the sentence, that which is being spoken about—the subject. Chased a
frightened mouse forms the other part of the sentence, that which is
being said about the subject—the predicate. Consider now a somewhat
longer sentence:

The white cat chased a frightened mouse at high speed into the
grassy yard.

To capture the intuition that the sentence breaks into two major parts,
we can recognize the status of the second part by calling it a verb
phrase, or VP. The rule that produces, or generates, sentences can now
be stated in the condensed form

S → NP VP.

Our latest sentence now contains two additional phrases, at high
speed and into the grassy yard. Each of these, as you can see, contains
a preposition, abbreviated Prep (at, into), followed by a noun phrase.
This we can categorize as a prepositional phrase, abbreviated PP, which
we formulate as follows and add to our list of rules:

PP → Prep NP.

As the sentence indicates, there is the possibility for an indefinite
number of prepositional phrases following the verb.

Each part of our rule for generating sentences can be expanded by
writing the rules we have formulated for each element, giving us the
following set:

S → NP VP.

NP → (Art) (Adj*) N.

VP → V (NP) (PP*).

PP → Prep NP.

The NP in this last rule can of course be expanded by means of the
already stated NP rule.

It must be understood, of course, that these particular rules apply
to English sentences only. The rules for generating the sentences of
other languages would require a different formulation. German, for
example, would require the verb to occur as the last element in the verb
phrase.
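
As a rough illustration only (the text does not state such a rule), the
remark about German suggests a verb-final version of the verb-phrase
rule given above, along the lines of

VP → (NP) (PP*) V.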

So far we have formulated only four rules. Now, having formulated
the rules, we must include the words themselves in order to generate the
sentence. Each category they involve—nouns, verbs, articles,
prepositions—can be further expanded to include the words that
constitute that category. Thus:

N → [cat, mouse, speed, yard . . .]

V → [chased . . .]

Art → [a(n), the]

Adj → [white, frightened, high, small, grassy . . .]

Prep → [at, into . . .]
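
The point can be made concrete with a short program. What follows is a
minimal sketch (my own, using the toy lexicon above; it is not an
implementation proposed by Chomsky or Sobel) that applies the four
rewrite rules to generate random sentences, including or omitting the
optional constituents and repeating the starred ones.

import random

# A minimal sketch, not from the text: the four rewrite rules plus the
# toy lexicon, used to generate random English-like sentences.
LEXICON = {
    "N": ["cat", "mouse", "speed", "yard"],
    "V": ["chased"],
    "Art": ["a", "the"],
    "Adj": ["white", "frightened", "high", "small", "grassy"],
    "Prep": ["at", "into"],
}

def maybe(items, p=0.5):
    """Include an optional constituent with probability p."""
    return items if random.random() < p else []

def NP():
    # NP -> (Art) (Adj*) N
    adjs = [random.choice(LEXICON["Adj"]) for _ in range(random.randint(0, 2))]
    return maybe([random.choice(LEXICON["Art"])]) + adjs + [random.choice(LEXICON["N"])]

def PP():
    # PP -> Prep NP
    return [random.choice(LEXICON["Prep"])] + NP()

def VP():
    # VP -> V (NP) (PP*)
    pps = sum((PP() for _ in range(random.randint(0, 2))), [])
    return [random.choice(LEXICON["V"])] + maybe(NP()) + pps

def S():
    # S -> NP VP
    return " ".join(NP() + VP()).capitalize() + "."

print(S())  # e.g. "The white cat chased a mouse into the grassy yard."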

You can easily see that a very great number of English sentences
can be generated by means of this small set of rules, using the many
other words that fit into the categories N, V, Adj, and Prep. (The
category Art is different; there are only two articles in English, a(n)
and the, an being a variant of a.) However, if you consider sentences such as the
following, you will also see that there is much more that must be
accounted for in English sentences:

Whose mittens are these?

I disagree entirely.

I don’t want you to go out tonight.

Get lost!

Let’s see the crossword puzzle you have just finished.

Kenny told the girl who came to pick up the books that she couldn’t
have them because he wasn’t finished reading them yet.

This will give you some idea of the vastness of the task of formulating
the syntactic rules of a language—and we haven’t even mentioned the
rules for making the words sound right for the language or for
constructing the words or for deriving “Jeet jet?” and “No, joo?” from
“Did you eat yet?” and “No, did you?”!

Let us look briefly at an example of unconscious rules of the sort
Chomsky sought to formalize, drawn from the phonological component of
your linguistic competence, that portion that deals with the sound
system of your language. One example will suffice to indicate that the
phonological component is also vast and complex. Sounds of a language
that are identified by its speakers as being “the same,” such as the two
instances of the sound p in the English word paper, are in fact
pronounced in a somewhat different manner and thus sound somewhat
different. You can demonstrate this yourself quite easily. Hold a sheet
of paper up to your lips as you pronounce the word paper; you will find
that the first p carries with it a puff of air that blows the paper away
from your lips, whereas the second p is said without this puff of air.
Linguists refer to the puff of air as aspiration, calling the first p
“aspirated” and the second one “unaspirated.” The variation in their
articulation depends on their position in the word: The sound p becomes
aspirated when it occurs at the beginning of a syllable and is
immediately followed by a stressed vowel (as in the first syllable of
paper). When it is in any other position, it does not. Native English
speakers never make the mistake of using one version where the other
belongs. Yet they are generally unaware that they are using two versions
of the sound. Thus we can speak of an underlying notion of the sound,
which is stored in the speaker’s brain as part of the pattern of speech
sounds of his or her language. Just as water can exist as liquid
(water), solid (ice), or gas (steam), so too are many of the sounds of
language manifested differently in different environments, as the two
versions of p in paper clearly show. The variants of a given sound,
taken together, constitute a phoneme. A phoneme, then, is a kind of
abstraction.

The sounds t and k are articulated in very much the same way as
p—that is, without voice and by closing off the passageway in the mouth
that allows the sound to be uttered on the expelled breath. (P does this
at the lips, t with the tip of the tongue behind the top teeth, and k
with the back part of the tongue touching the roof of the mouth. This
can easily be demonstrated by simply saying the sounds.) All three of
these sounds behave the same way, following the same rule with respect
to aspiration. Following a convention that indicates aspiration by means
of a superscript ʰ, it is possible to write a rule that expresses this
situation:

p, t, k become pʰ, tʰ, kʰ when they occur at the beginning of a syllable
and are immediately followed by a stressed vowel.

By means of a set of symbols and terms that capture the
commonalities among the three sounds, the conditions under which the
rule is applied, and the result of its application, a very economical
formal rule can be written. (To explain all the complications involved
in arriving at such rules would require a course in linguistics. My
purpose here is merely to suggest their complexity.)
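
To suggest what such a formal statement might look like when made fully
explicit for a machine, here is a minimal sketch in Python (the
syllable notation is invented for the example and is not the linguists'
convention): a word is written as a list of syllables, with an
apostrophe placed before a stressed vowel, and the rule rewrites a
syllable-initial p, t, or k as ph, th, or kh when a stressed vowel
follows.

VOWELS = "aeiou"

def aspirate(syllables):
    """Apply the aspiration rule to a word given as a list of syllables."""
    out = []
    for syl in syllables:
        # Aspirate p, t, k at the start of a syllable when a stressed vowel follows.
        if len(syl) >= 3 and syl[0] in "ptk" and syl[1] == "'" and syl[2] in VOWELS:
            syl = syl[0] + "h" + syl[1:]
        out.append(syl)
    return out

# "paper" with stress on the first syllable: the first p is aspirated,
# the second is not.
print(aspirate(["p'ei", "per"]))  # -> ["ph'ei", "per"]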

The Relevance of the Rules to Cognitive Science

Perhaps your head is spinning from this discussion of the formalization
of rules of English. Perhaps you are wondering why it was included.
There are several reasons. First, to appreciate what was involved in the
new linguistics of the 20th century, it is necessary to have at least
some understanding of the turn it took in the direction of scientific
inquiry and method. Second, establishing a formal means of encoding the
rules of language enables important generalizations to be grasped (such
as the one that extends the effect of the aspiration rule from one sound
to all sounds made in the same manner). If we can capture in this way
the rules that characterize languages, we can compare them to see what
types of rules characterize human languages in general. From there we
can proceed to a greater understanding of what the human brain is
equipped with that enables it to “do” language.

Another reason for making the rules explicit is to increase our
understanding of the way children learn their first language. If it is
indeed by means of acquiring such rules (however they are represented in
the brain), we can more readily understand how it is possible for them
to do it in the short time it actually takes.

Still another important reason to formalize the rules of language
has to do with the capabilities we are developing, via the computer, to
model aspects of human intelligence. Computer programs require very
precise and unambiguous instructions. The formalization of the rules of
language has enabled computer scientists involved in artificial
intelligence to attempt to model human language on the computer.
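
As one small illustration of the kind of precise, unambiguous
instructions a computer model requires, here is a minimal sketch (my
own; it is not drawn from the artificial-intelligence work the author
has in mind) of a recognizer that checks whether a sequence of
part-of-speech tags conforms to the four rules formulated earlier.

def parse_NP(tags, i):
    # NP -> (Art) (Adj*) N ; return the position after the NP, or None.
    if i < len(tags) and tags[i] == "Art":
        i += 1
    while i < len(tags) and tags[i] == "Adj":
        i += 1
    return i + 1 if i < len(tags) and tags[i] == "N" else None

def parse_PP(tags, i):
    # PP -> Prep NP
    if i < len(tags) and tags[i] == "Prep":
        return parse_NP(tags, i + 1)
    return None

def parse_S(tags):
    # S -> NP VP, where VP -> V (NP) (PP*)
    i = parse_NP(tags, 0)
    if i is None or i >= len(tags) or tags[i] != "V":
        return False
    i += 1
    j = parse_NP(tags, i)          # optional NP after the verb
    if j is not None:
        i = j
    while (j := parse_PP(tags, i)) is not None:   # any number of PPs
        i = j
    return i == len(tags)

# "The white cat chased a frightened mouse into the grassy yard"
print(parse_S(["Art", "Adj", "N", "V", "Art", "Adj", "N",
               "Prep", "Art", "Adj", "N"]))       # True
print(parse_S(["V", "N"]))                        # False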

Identifying and formalizing the rules of language rests on the
assumption that they have been internalized, represented in some fashion
in the brain — that they are in fact in there, somewhere. Regarding the
issue from this perspective requires that we consider how it is possible
for an infant to begin to acquire these rules, on the assumption that it
is born not knowing them. This assumption is reasonable, because babies
will learn to speak the languages spoken in their environment. (If a
child is adopted by a family of a different culture from that of the
biological parents, where a different language is spoken, the child
learns the language of the adoptive culture.) We wonder, of course, how
it is possible for the linguistic system of a given language to be
absorbed, as it demonstrably is, early in childhood. Is the capacity to
“do” language a specialized one, residing in brain functions evolved
just for that purpose? Or is it rather, as some have proposed, one among
many abilities that arise from certain general cognitive abilities, such
as the ability to categorize experiences of our environment (this is a
chair; that is a person; this word is a potential sentence subject, but
that word can only designate an action)?

Chomsky’s hypothesis was that the inborn linguistic capacity of
humans is sensitive to just those rules that occur in human
languages—and in no other sorts of language or system. In other words,
something quite specific to language has evolved in the human brain,
something that enables the steady stream of language sounds to be
perceived by the infant as distinct in an important way from other
sounds and that also enables the infant to begin to break down the flow
of this stream into meaningful parts, arranged, as the infant somehow
recognizes, in a systematic way. Chomsky likened this capacity to the
capacity to walk, in the sense that it is a behavior in which humans
naturally engage, without the need for special instruction, as soon as
they are developmentally ready—provided the environment permits. That
is, walking proceeds if the toddler has undergone maturation sufficient
for growth of bones and muscles, and language development occurs if the
environment provides exposure to language. Both are part of our
biological endowment. It is thus no accident that there are many
similarities among languages: The many systems, or grammars, that
underlie them are ultimately generated by the human brain.

That there are as many different grammars as there are languages is
evident. But linguists cannot discover the grammar of any one of them
without careful study of the actual language as it is spoken. At the
outset one might propose any number of possible grammars to account for
the speech heard in a given language. How many sets of rules might one
be able to think up that would generate the things people say? Think for
a moment about how many explanations one might propose for how a
machine—say, a car—works, in the absence of actually looking inside to
see the components and what drives their operation. One might come up
with many explanations, ranging from “There are forty little guys in
there pedaling for all they’re worth” to very complicated schemes for
having one element move another, which would then move another. . .
something like a Rube Goldberg contraption. Someone might even hit on
the correct explanation, and we who actually know how the car works know
that there is only one correct explanation.

So it is with language. There are many possible grammars that would
serve to produce a given language. But if we adults cannot peer inside
the mechanism and learn which is the correct one, how on earth does a
small child find it on his or her own? Even if we are willing to assume,
with Chomsky, that what the child brings to bear is an innate capacity
to project the right grammar on the language spoken in the environment,
we might not wish to endow—to overburden—this child with all the
grammars to select from that would be necessary for all languages. But
if babies do not know in advance which language they will be confronted
with at birth, how else could they possibly find the right grammar?

The hypothesis proposed to answer this question is that despite
their surface differences all human languages share a fundamental
structure, and what is common to them all has come to be known as
universal grammar. Innate in all of us, according to this hypothesis, is
the ability to apply this universal grammar to whatever languages we are
faced with at birth. This explanation is not universally accepted; some
have maintained, for example, that the human capacity for language is
not richly specified in the brain but is rather a special function of
the general cognitive abilities humans possess.

One might suppose that what is innate are rules of language of the
sort I have described. But there are so many languages,
and each has its own set of rules. How is it possible for a brain to
possess innately a system adequate to the task?

The approach then taken by Chomsky and others was to attempt to
“factor out” general principles that hold for all languages, principles
that govern application of the rules of languages. Under this new
principles and parameters formulation, which crystallized around 1980,
it is these that constitute the universal grammar. Variation in
languages results from the ways in which these principles apply. There
is a finite set of ways in which the principles may apply; these are the
parameters (Chomsky, 1995). The parameters have been likened to a set of
switches, each having a fixed range of potential settings. The actual
language the learner is exposed to provides the data that trigger the
setting of the switches. Under the principles and parameters hypothesis,
learning the syntax of one’s first language is a matter of setting these
switches; acquiring language is a process of fixing the parameters in
one of the permissible ways. With no requirement that the innate
component specify all manner of language-specific rules of the sort
described previously, the hypothesized innate machinery of language can
be reduced.
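
The switch metaphor can be made concrete with a small sketch. The
particular parameters below (head direction and null subject) are
standard textbook illustrations, and the form of the input data is
invented; none of this is given in the passage itself.

# A minimal sketch, not from the text: parameters as switches, each with
# a fixed range of settings, fixed by the language data the learner hears.
PARAMETERS = {
    "head_direction": {"head-initial", "head-final"},
    "null_subject": {True, False},
}

def set_parameters(observations):
    """Fix each switch from simple observations about the input."""
    settings = {}
    # Hearing the verb before its object suggests a head-initial setting
    # (as in English); hearing it after suggests head-final (as in Japanese).
    settings["head_direction"] = ("head-initial"
                                  if observations["verb_precedes_object"]
                                  else "head-final")
    # Hearing sentences without an overt subject suggests a null-subject language.
    settings["null_subject"] = observations["subjectless_sentences_heard"]
    return settings

# A child exposed to English-like input:
print(set_parameters({"verb_precedes_object": True,
                      "subjectless_sentences_heard": False}))
# -> {'head_direction': 'head-initial', 'null_subject': False}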

Whichever way one looks at it, some of the questions we find
ourselves asking regarding the language capacity of humans are:

How is language organized in the brain?

How does it work?

If a person speaks more than one language, how are these languages
stored?

Once the rules of a given language have been internalized by the child,
how and where are they represented in the brain?

We cannot take the direct path and look inside for the answers, because
even when the brain is exposed to view for medical reasons, as in
Penfield’s work, one sees neither rules such as VP → V (NP) (PP*) nor
principles and parameters lurking there in some recognizable form. But
the recently developed sophisticated methods of scanning the brain, as
we saw in the preceding chapter, allow us to observe brain activity
during the performance of language-related tasks, among others.
Nonetheless, we see no rule hopping about the cortex clamoring to be
recognized.

There are, however, experiments that do provide us with
information. Brain-damaged patients can serve as subjects of such
experiments (Damasio & Tranel, 1993). These experiments indicate that
certain brain structures link areas of knowledge about traits, sounds,
and movements of birds, for example, with the nouns and verbs associated
with them. Subjects in the experiments could describe, using verbs, what
was taking place in pictures of activities involving birds, but did much
worse than non-brain-damaged controls in using the appropriate nouns,
such as duck, ostrich, and other bird names. Another conclusion to be
drawn, according to the experimenters, is that areas of the brain that
handle nouns are not the same as those that handle verbs.

Experiments such as these are slowly adding to our understanding of
the functioning of language in our species.
