September 14, 2010

THE DESIGNATE STRUCTURE OF THOUGHT

By: Richard J. Kosciejew


Specification of functionally related stimuli and responses posed a number of problems for behaviouristically oriented psychology, itself sometimes called 'the experimental analysis of behaviour'. Often, for example, stimuli and responses selected as functional classes cannot be usefully characterized in an apsychological (nonmental) vocabulary. Consider, for example, the temptation to classify the rat's responses as seeking food and remembering whether it was found to the left or right. Mentalistic attribution is a tough temptation to resist. In some cases - human verbal behaviour, for instance - it is impossible to resist.
In North America behaviourism reigned for decades as a remarkably resilient, influential, and in many ways laudable doctrine that resonated through a number of disciplines beyond psychology. In linguistics it helped to displace philology (the study of the histories of particular languages) with empirical studies of language use. Under the leadership of Leonard Bloomfield, linguistic behaviourism aspired to carry out a program in which linguists would collect speakers' utterances into a corpus and produce a grammar that described it. Explicitly excluded were any mentalistic assumptions, inferences, or explanations.
In philosophy, the logical positivism of Rudolf Carnap and Carl Hempel was congenial to behaviourism. Each tried to develop behaviouristic canons for the meaningfulness and empirical grounding of scientific hypotheses. Hempel himself eventually abandoned this effort: 'In order to characterize the behavioural patterns, propensities, or capacities . . . we need not only a suitable behaviouristic vocabulary, but psychological terms as well' (Hempel, 1966). Others maintained a thoroughgoing empiricism. Willard van Orman Quine imposed behaviouristic standards on the task of interpreting the speech of another person (or oneself) and argued that the only evidence available was the sensory input from the environment. He argued that from this evidence alone the meaning of a sentence would always be indeterminate, and therefore concluded that the notion of meaning was vacuous; he made an exception only for those statements most firmly rooted in sensory experience (observation statements).
Not everyone agreed with behaviourism, however, and the historical events that followed clearly represent a rebellion against it and the birth of a new approach: the cognitive science revolution that began at the end of the second world war. Its central move was to treat thought as representational - to explain behaviour in terms of concepts and ideas in the head. This enabled cognitive researchers to cast off their fears of mentalism and attempt to understand the processing of information in the head - in the mind - that underlies behaviour. By the mid-1970s the conceptual and methodological frameworks of linguistics, psychology, and philosophy were fundamentally altered in ways characteristic of what Thomas Kuhn (1962/1970) has referred to as a 'scientific revolution'. A generation of new thinkers, including Noam Chomsky, George Miller, and Hilary Putnam, had created a new paradigm, and a new generation of researchers took up the banner and a radically different set of research agendas. In addition, a brand new discipline - artificial intelligence - emerged, and such leaders as Allen Newell and Herbert Simon linked its approach to those of the other disciplines.
Of all the research fields that would come to play a major role in cognitive science, artificial intelligence, usually classified as a branch of computer science, was the newest, having to await the invention of the computer itself. The digital computer, as we know it, was another product of the second world war, though the idea of automated computing goes back much further. One key element of computing is the idea of a set of instructions that can be applied mechanically. An early version of this idea was found in an 1805 device of Joseph-Marie Jacquard, which used removable punch cards to determine the pattern a loom would weave. In the 1840s, Charles Babbage applied this idea in his design of an analytical engine, which was to have been a steam-driven computational device. Babbage never succeeded in actually building the engine, but he did engage in a fruitful collaboration with Lady Lovelace (Ada Augusta Byron), who worked out ideas for programming Babbage's machine.
A major hurdle faced by Babbage in the nineteenth century was the lack of sufficiently precise manufacturing for the components of his engine. Even so, by the start of the twentieth century, precision had improved to the point where mechanical calculators could be manufactured by companies such as the Tabulating Machine Company, which later merged into IBM. These machines were purely mechanical - without electrical components - but in the late 1930s Claude Shannon showed that electric switches could be arranged to turn one another on and off in such a way as to perform arithmetic operations. The idea of using electronic circuits to carry out calculations was put into practical use during the second world war in England by Alan Turing and his collaborators at Bletchley Park in the effort to decipher German military communications. The German cipher machine, Enigma, was a particular challenge, since it was built out of a set of rotors which permuted the letters of the alphabet: The rotors were mechanically coupled so as to constantly change the alphabetic substitutions employed in the cipher. The challenge to Turing and his colleagues was to examine all combinations of encoding assignments in the machine to find the one used in the cipher, a huge computational task. For highest-level communications, Germany employed an even more sophisticated cipher, which produced what researchers at Bletchley Park referred to as 'Fish' cipher text. To decipher these messages, Turing and his colleagues designed a special-purpose machine, Colossus, which employed thousands of electronic valves (vacuum tubes).
Another world war two era computer, the Electronic Numerical Integrator and Calculator (ENIAC), was developed by J. Presper Eckert and John Mauchly at the Moore School of the University of Pennsylvania. It was designed to calculate artillery tables, which would specify how to aim artillery on various terrains so as to hit desired targets. Despite massive effort, ENIAC remained incomplete until 1946. John von Neumann designed the basic architecture associated with this line of machines - the 'von Neumann architecture'. It was, however, only fully realized in ENIAC's successor, EDVAC (Electronic Discrete Variable Automatic Computer), and has continued to play a central role in computing to the present.
At the heart of the von Neumann architecture is a distinction between a computer's memory and its central processing unit (CPU). One of von Neumann's innovations was to recognize that the instructions comprising a program could be stored in memory in the same manner as the data being operated upon. Computer operations are carried out in cycles in the CPU: In each cycle both data and instructions are read from memory into the CPU, which carries out the instructions and returns the results to memory.
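The fetch-execute cycle just described can be sketched in miniature. The tiny instruction set below (LOAD, ADD, STORE, HALT) is invented purely for illustration and corresponds to no historical machine; the point is only that program and data occupy the same memory, and that the CPU repeatedly fetches an instruction, executes it, and writes results back:

```python
# A toy von Neumann machine: instructions and data share one memory,
# and the CPU repeatedly fetches, decodes, and executes instructions.
# The opcodes are hypothetical, chosen only to illustrate the cycle.

def run(memory):
    pc = 0          # program counter: address of the next instruction
    acc = 0         # accumulator register inside the CPU
    while True:
        op, arg = memory[pc]        # fetch an instruction from memory
        pc += 1
        if op == "LOAD":            # copy a memory cell into the accumulator
            acc = memory[arg]
        elif op == "ADD":           # add a memory cell to the accumulator
            acc += memory[arg]
        elif op == "STORE":         # write the accumulator back to memory
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program; cells 4-6 hold data (2 + 3, result in cell 6).
memory = {
    0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
    4: 2, 5: 3, 6: 0,
}
result = run(memory)
print(result[6])  # 5
```

Because the program is itself data in memory, one program could in principle read or modify another - the insight that made stored-program computing, and eventually programs that manipulate symbolic representations, possible.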
We now come closer to the role of the computer in the birth of cognitive science, but we need to make another brief digression. After the war, computers became increasingly powerful, and with such power a possibility began to be realized that had first been envisioned by Gottfried Wilhelm Leibniz, the famous seventeenth-century philosopher educated at the University of Leipzig. He had proposed that numbers could be assigned to concepts, so that reasoning could be carried out by manipulating the numbers. In 1854, the English mathematician George Boole took a major step in developing this idea in a book called 'The Laws of Thought'. Boole formulated several operations that could be performed on sets, which could also be applied to propositions. He suggested that the laws governing these operations could serve as laws of thought. The switches that Shannon had devised in the late 1930s performed these basic Boolean operations, with the resulting state of the switches (on or off) corresponding to the truth values of the propositions (true or false).
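The correspondence Shannon exploited - Boole's operations on truth values mapped onto "on"/"off" switch states - can be shown directly. A minimal sketch, with invented example propositions:

```python
# Boole's basic operations on propositions, with True/False standing in
# for the "on"/"off" states of Shannon's switching circuits.
def AND(p, q): return p and q   # conjunction: both propositions true
def OR(p, q):  return p or q    # disjunction: at least one proposition true
def NOT(p):    return not p     # negation

# A compound proposition: "it is raining and it is not warm".
raining, warm = True, False
print(AND(raining, NOT(warm)))  # True
```

Any circuit of such switches computes some compound truth function, which is why arrangements of relays could carry out the operations Boole had proposed as laws of thought.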
Boole's system was limited to operations on complete propositions (e.g., 'The woman is a lawyer') and could not deal with structures internal to the proposition (e.g., the fact that the predicate 'is a lawyer' is being predicated of 'the woman'). Gottlob Frege, though, expanded the system in 1879 to deal with such predications (permitting inferences from premises such as 'All lawyers have passed the bar exam' and 'The woman is a lawyer' to 'The woman has passed the bar exam'): The resulting system of predicate calculus provided a way of formalizing inferences that has been extremely influential. The idea of formally representing information in symbolic notation and using formal operations to transform this information provided a critical entrée to the use of computers to simulate reasoning.
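The bar-exam inference can be mechanized in a toy way. The predicate names and rule format below are illustrative inventions, not any real theorem prover's API; they show only how a formal operation can transform symbolically represented premises into a conclusion:

```python
# A toy rendering of the syllogism in the text: from
# "All lawyers have passed the bar exam" and "The woman is a lawyer",
# derive "The woman has passed the bar exam". Facts are (predicate,
# subject) pairs; a rule (P, Q) reads "everything that is P is also Q".

facts = {("lawyer", "the woman")}
rules = [("lawyer", "passed_bar_exam")]

def infer(facts, rules):
    """Apply each rule once to every matching fact; return all conclusions."""
    derived = set(facts)
    for premise, conclusion in rules:
        for predicate, subject in list(derived):
            if predicate == premise:
                derived.add((conclusion, subject))
    return derived

print(("passed_bar_exam", "the woman") in infer(facts, rules))  # True
```

The inference is purely formal - the program knows nothing about lawyers or exams - which is precisely what made predicate calculus a bridge from logic to machine reasoning.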
Turing, too, had an ingenious proposal: he offered a test - not the sole test, but a test - for thinking (Turing, 1950). His suggestion was to approach the question in terms of the behaviour of the machine: Could its behaviour pass for that of a thinking person? If yes, it thinks. In what is now known as the Turing test, one decides whether a machine is thinking by arranging for a human interrogator at a keyboard to pose questions to both a computer and a human, whose answers are displayed. If the interrogator, even after sophisticated questioning, cannot differentiate the computer from the human, then the computer's activity counts as thinking. Turing recognized that it would require a very complex machine to engage in any protracted dialogue with humans and not be detected, but he believed that a computer would eventually pass this test.
It was the British experimental psychologist Sir Frederic Bartlett (1932) who studied the role of subjective construction in memory. Memories, he claimed, are not simple recordings of experienced events, but are filled in by their subjects and embellished with details not present in the original context. For example, when asked to recall a Native American folktale, 'The War of the Ghosts', his subjects made changes in the plot of the story which tended to Westernize it. To explain this, Bartlett proposed that they employed their existing schemata to organize events in the story. As we will see, the notion of a schema as a structure for organizing information in memory has played a major role in subsequent cognitive psychology and in cognitive science generally. Bartlett also trained a number of influential British psychologists, including Donald Broadbent, who pioneered attention research using multi-channel listening techniques.
Nevertheless, this general approach can be extended to more complex situations in which there are more than two alternatives or the alternatives have unequal probabilities - for example, any message in English - and can be used to measure the amount of redundancy in such messages. Shannon (1948, 1951) presented a text one letter at a time to subjects whose task was to predict the next letter. There were 26 alternatives at each point, and they had unequal probabilities due in part to context. For example, 'u' has a low probability overall, but is highly probable following 'q'. Shannon defined redundancy as the reciprocal of the average number of guesses needed to generate the correct letter. Averaging across the entire text, subjects required an average of two guesses per letter, yielding a redundancy estimate of about 50 percent for printed English. Shannon's information theory provided the key to interpreting Miller's dissertation result that messages differed in how easily they could be understood in noisy environments. Miller and Selfridge (1950) found further application for information theory in a list-learning experiment: The closer the word lists came to resembling English sentences (i.e., the greater their redundancy), the more words a subject could remember.
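Using the definition given above (redundancy as the reciprocal of the average number of guesses), the 50 percent estimate is a few lines of arithmetic. The per-letter guess counts below are invented data chosen to average two guesses per letter, matching the figure reported in the text:

```python
# Shannon's guessing game: a subject guesses each successive letter of a
# text until correct. Redundancy is estimated here, as in the text, as
# the reciprocal of the average number of guesses per letter.
guesses_per_letter = [1, 1, 3, 2, 1, 4, 2, 1, 2, 3]   # hypothetical data

avg = sum(guesses_per_letter) / len(guesses_per_letter)
redundancy = 1 / avg
print(f"average guesses: {avg:.1f}, redundancy: {redundancy:.0%}")
# average guesses: 2.0, redundancy: 50%
```

Highly predictable stretches of text (one guess suffices) drive the average down and the redundancy estimate up, which is exactly why redundant word lists in Miller and Selfridge's experiment were easier to remember.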
In one of the most influential papers of this period, Miller (1956) addressed more extensively the question of the cognitive structure of memory. The study of human learning and memory had long moved along the path laid down by Hermann Ebbinghaus (1885/1913), who served as his own subject in a prolonged series of experiments in order to bring higher mental processes under experimental control and quantitative analysis. In his attempts to eliminate extraneous influences, Ebbinghaus arrived at the idea of using pronounceable nonsense syllables such as DAX and PAF as his stimuli rather than words. He studied lists of these nonsense syllables daily, and then tested himself to determine rates of learning and forgetting. Ebbinghaus uncovered important functional relations (e.g., repetition yields better retention, especially if distributed across several days; the amount retained is a logarithmic function of time), but the down side was his neglect of the cognitive structures and processes that meaningful stimuli so readily engage. Frederic Bartlett's (1932) previously described idea that schemata help organize memory offered a corrective to these limitations, but verbal learning researchers in North America continued to pursue updated variations on the Ebbinghaus tradition, asking, for example, which particular model of stimulus-response conditioning might best account for the accumulated data on paired-associate learning. Retention was an indicator of learning, not a clue to the nature of the memory system within.
Memory is a single word that refers to a complex and fascinating set of abilities which people and other animals possess, enabling them to learn from experience and retain what they learn. In memory, an experience affects the nervous system, leaves a residue or trace, and changes later behaviour. Types of memory are tremendously varied: So, too, are the techniques used in cognitive science to investigate them. The aim of the present chapter is to give an overall sense of types of memory as well as of techniques used in the experimental study of memory.
Biologists, philosophers, and psychologists have described and discussed dozens of types of memory. Procedural memory refers to the knowledge of how to do things, such as walking, talking, riding a bicycle, or tying one's shoelaces. Often the knowledge is difficult to verbalize, and the procedures are often acquired slowly and only after much practice. (Imagine someone trying to learn how to swim from reading about swimming, but not practising the skill.) The types of conditioning to which most species of animals are subject - classical (or Pavlovian) conditioning and instrumental (or operant) conditioning - are other examples of procedural memory.
Procedural memory is often contrasted with declarative memory, or knowing facts about the world and about one's past (Squire, 1987). A major distinction within declarative memory is that between episodic and semantic memory. Episodic memory refers to the remembering of episodes of our lives and is contextually bound: That is, the time and place of occurrence are inextricable parts of memory for episodes. This type of memory enables the mental time travel in which we engage when we think back to an earlier occasion: Because it constitutes each individual's personal history, it is sometimes called autobiographical memory. Semantic memory (or generic memory) refers to our general knowledge of the world (that NaCl is the chemical formula for salt, what a given word means, and so on). This knowledge is not tied to one episode, and we need not refer to the time or place in which we learned these facts to know that they are true.
This is not the only way to distinguish types of memory. Another important difference is between short-term and long-term memory. Short-term memory (or primary memory) refers to our ability to hold in mind a relatively small amount of information that is rapidly forgotten if we stop attending to it. A good example is remembering a telephone number for a brief period after looking it up. This ability is also referred to as working memory, because it permits us to perform the mental work of manipulating symbols and thinking. Long-term memory (or secondary memory) is a rather imprecise term used to refer to retention of various kinds over long time periods; depending on content, 'long' may mean anywhere from 10 seconds to many years (hence the fuzzy nature of the term).
The basic case of remembering is long-term episodic memory: How do we remember what we read in the paper, where we parked our car this morning, the earliest events from our childhood, and the myriad other events of our lives? We often need to recall events from the past as accurately as possible, and this process can be effortful. The process of recognition (when we are asked to judge whether something has been presented to us previously) appears easier than recall. In considering the study of memory and the critical principles of remembering, our concerns also extend to forgetting and to memory illusions - cases in which our memories are false.
Ebbinghaus advocated careful laboratory research as a sure path to knowledge, and the laboratory research tradition begun by Ebbinghaus still exists, albeit in radically different form. The development of alternative approaches has enriched today's cognitive science, however. Some researchers advocate more naturalistic methods (the everyday memory tradition). Others seek the biological underpinnings of memory in studies of animals or in the tradition of cognitive neuroscience (measuring neural activity through modern neuroimaging techniques while people are engaged in memory tasks, or studying the deficits and pathologies of memory in brain-damaged patients). Yet another approach takes inspiration from artificial intelligence and asks how much human memory resembles computer memories. Some researchers seek to simulate and to understand memory processes by creating neural network models. Each of these approaches makes a contribution, but our perspective here is on learning and memory as studied with behavioural methodologies, the primary tools of the field.
The learning/memory process can be divided into three hypothetical stages: encoding (original acquisition of information), storage (retention of information over time), and retrieval (gaining access to information when it is desired) (Melton, 1963). Any time someone accurately remembers an event, all three stages have been successfully completed. If someone forgets an event, we can ask at what stage or stages the process went wrong. However, answering this question is not as straightforward as it seems, because the three stages are interlocked, and psychology experiments cannot give a clear answer to the question of which stage in the process has failed.
A standard psychology experiment on learning and memory has two stages. In the first stage people are exposed to information to be learned, be it sets of words, numbers, pictures, sentences, a story or prose passage, or a videotape of a complex event. In the second stage, a test is given some time later in which people may be asked to recall or to recognize the material. The first stage of memory experiments corresponds to the encoding of material, but, of course, there is no way to tell whether material was actually encoded unless it is tested. The second stage corresponds to the retrieval stage, but, of course, it does not measure retrieval per se - information can only be retrieved if it was encoded and stored.

Following the work of Tulving and Pearlstone (1966), psychologists have distinguished between availability and accessibility of information in memory, where availability refers to the information about events that a person has encoded and stored and accessibility refers to the information that can be retrieved on any particular test occasion. The holy grail for psychologists interested in memory would be a test or procedure that accurately measured the contents of a person's knowledge - what the person had encoded and stored. At one time, it was argued that recognition procedures provided such a measure, but recognition procedures are subject to the same retrieval influences as recall procedures. Every test of memory is an imperfect indicator of knowledge, whether in the classroom, in standardized tests, or in the psychology laboratory. We can never measure what information is encoded and stored; we can only measure what information is accessible or retrievable under a particular set of test conditions.
Despite these problems, the division of the learning/memory process into three stages can still be useful. We can still sometimes ascribe forgetting to failures at a particular stage (say, retrieval). Imagine people studying a list of 100 words on which umbrella is the fifty-first word. If people were tested by being asked to recall the words in any order on a blank sheet of paper (a procedure called free recall), the probability of recalling umbrella would be vanishingly small. Was the word not encoded, not retained, or just not retrieved? There is no way to know from this one condition. However, if the people were then given retrieval cues to prompt memories for the words, and the cue parasol elicited recollection of umbrella, then clearly the word had been encoded and stored, and the failure on the first recall test was one of retrieval. (It would be necessary to safeguard against the possibility that people are merely guessing the words from the cues, but in practice ensuring this is relatively easy.)
Most experiments on memory can be classified as encoding experiments or retrieval experiments. Encoding experiments involve manipulations of some factor during the encoding stage (e.g., the type of material, the way the material is processed), with other factors (e.g., the type of test that is used to assess knowledge) held constant. Retrieval experiments hold constant the encoding factors but manipulate the retrieval factors, such as the type of test given or the particular instructions given before the test. One particularly useful research strategy in investigating memory combines these two types of experiments and has been called the encoding/retrieval paradigm (Tulving, 1983). For example, two different strategies for studying material might represent the encoding manipulation, and two different forms of tests might be used to assess knowledge. The encoding/retrieval paradigm is efficient, because it permits several questions to be asked at once. For example, will the outcome of the encoding manipulation generalize across more than one kind of test? Similarly, will different types of tests reveal different patterns in the knowledge acquired during encoding, and in the effectiveness of retrieval cues? These questions are studied through combined encoding and retrieval experiments.
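The logic of the paradigm can be sketched with a hypothetical 2 x 2 design: two study strategies crossed with two test types. The recall proportions below are invented to illustrate an interaction, in which the better study strategy depends on the test used - exactly the kind of pattern the combined design can detect and a pure encoding or pure retrieval experiment cannot:

```python
# A hypothetical encoding/retrieval (2 x 2) design. Condition labels and
# recall proportions are invented for illustration.
recall = {
    ("semantic study", "standard recall"):   0.70,
    ("semantic study", "rhyme-cued recall"): 0.40,
    ("rhyme study",    "standard recall"):   0.50,
    ("rhyme study",    "rhyme-cued recall"): 0.60,
}

# For each test type, which encoding strategy produced better recall?
for test in ("standard recall", "rhyme-cued recall"):
    best = max(("semantic study", "rhyme study"),
               key=lambda strategy: recall[(strategy, test)])
    print(f"{test}: best strategy is {best}")
```

With these numbers, semantic study wins on the standard test while rhyme study wins on the rhyme-cued test: the encoding manipulation does not generalize across tests, so encoding and retrieval factors interact.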
A critical aspect of the learning and memory process is the original acquisition, or encoding, of information. Many experiments have documented the importance of a general principle, namely, that the more effectively information is encoded, the better later recall is. Of course, such a statement runs the risk of being tautologous, unless we can specify a way of defining effectiveness of encoding independently of the level of recall or recognition. Frequently, that is impossible to do. However, this general principle can at least order many findings from the experimental study of remembering. In general, such research conforms to an encoding paradigm: a variable is manipulated during the study phase of an experiment, and the interest is in seeing how it affects performance on a later test.
One such finding is that more meaningful information is better remembered than less meaningful information. For example, coherent passages are remembered better than chaotic ones (created, for example, by keeping the words from the coherent passage the same but rearranging them). Similarly, new information about bridge, chess, or baseball will be better remembered by experts in those domains than by novices. The new information can be better assimilated (encoded) in terms of an expert's knowledge base.
Even very simple materials - such as words studied in a long list - can reveal this effect. Craik and Tulving (1975) reported experiments showing a levels-of-processing effect in remembering. The basic idea that Craik and Tulving were exploring is that the cognitive system processes information to different levels, or depths, and that the depth of processing determines later retention. For example, in reading the German word Gedächtnis, a reader of English (with a knowledge of the orthography of Western alphabets) could apply at least an orthographic, or graphemic, analysis and identify the graphemes of the word. A person with some knowledge of German phonology could sound the word out, even if he or she did not know its meaning. Finally, a person fluent in reading German would know the meaning of the word as well. (And a German-English bilingual could translate it as memory.) To comprehend the word, the reader must progress through graphemic (visual), phonemic (sound), and semantic (meaning) codes. The levels-of-processing approach predicts that remembering depends on the level to which information has been processed, with deeper (meaningful) processing leading to better retention.
Craik and Tulving (1975) manipulated experimentally the depth to which subjects had to process words on a list of 60 common words, such as bear, by requiring them to answer different questions about the words. Some questions directed attention to the word's appearance (Is it in upper-case letters?), others directed analysis to the word's sound (Does it rhyme with chair?), whereas others required consideration of the word's meaning (Is it an animal?). For half the questions the answer was yes; for the other half it was no. Subjects saw each word for five seconds while answering a question inducing graphemic, phonemic, or semantic processing. Keep in mind that the subjects viewed the words for five seconds in all conditions and that they could answer the questions in each case in under a second. What the results show is that, with all else held constant, retention could be dramatically affected by the split-second cognitive processing engendered by the questions that were asked. How well people remember events depends partly on what the events are, but also on how deeply they are encoded: meaningful processing of information surpasses phonemic or graphemic analyses in its effect on later retention.
Since the time of the Greeks, scholars have known that imagery can aid remembering. Instructors of rhetoric taught speakers mnemonic devices, which were critical for people who could not use written reminders. Modern experimental psychologists have confirmed the wisdom of using imagery in several types of controlled experiments. In most types of test, pictures are better remembered than words; this is true even in tests that would seem to favour verbal encoding. For example, if a long series of pictures and concrete words (words that refer to concrete, hence picturable, objects) are presented, and people are asked to recall them by writing either the words presented or the names of the pictured objects, pictures are better remembered than words. This occurs despite the fact that the verbal mode of response would seem to favour verbal over pictorial encoding.
To measure the effect of the levels-of-processing manipulation, Craik and Tulving gave subjects a recognition test in which the 60 studied items were randomly intermixed with 120 additional non-studied words; subjects were told to go through the words and pick exactly the 60 that they believed were previously studied. Chance performance on the test was 0.33 (60 out of 180 could be obtained by someone who had not studied the list at all). Clearly, the levels-of-processing manipulation had a dramatic effect on recognition: graphemic analysis produced relatively poor recognition, whereas semantic analysis produced extremely accurate retention, especially when the answer to the question was yes.
Nevertheless, this same principle extends to remembering events from our personal lives. Most of us can recall more accurately what we did on some salient occasion (New Year's Eve, our birthday) than on a day occurring a week earlier or later. A special name, flashbulb memories, is employed for memories of occasions that are emotionally very powerful, such as witnessing the birth of a child or participating in some great national tragedy (an assassination). The analogy is that our memories are so clear as regards the details surrounding the occurrence - the place, our feelings, and even fine details of the event (or our reaction to it) - that they seem to have been caught as in a photographic flash and indelibly imprinted in memory. People have great certitude about such memories, even though studies show that some of the retained information is false. There is debate about whether flashbulb memories must be explained by some special mechanism, or whether they are simply memories of particularly distinctive events working through the same general mechanism that makes a picture well remembered when placed in the context of many words (Conway, 1995).
The three factors listed - endowing events with meaning, using imagery, and making events distinctive - are all examples of how factors manipulated at encoding can powerfully affect memory. However, the fact that these manipulations occur during encoding does not mean that retrieval processes are unimportant; in most cases, it is the interaction between encoding and retrieval factors that critically determines retention.
If you reflect on experiences you have had in trying to remember events from the distant past, the importance of retrieval conditions for remembering will become obvious. Perhaps you see someone familiar but cannot remember her name, and a bit later the name comes to you. Or someone asks you who starred in a particular movie and you draw a blank: When several possibilities are mentioned, you immediately know which one is correct. In another case, you return to a place where you used to live, and the sights and sounds bring back memories of events that you had not thought of for years. All of these common experiences show that having information encoded and stored in memory is no guarantee that it will be remembered: in addition to good encoding, appropriate retrieval conditions must exist for the events to be remembered.
Psychologists have studied the critical role of retrieval processes by manipulating the conditions and the types of cues provided to people during retrieval. In one common technique, people are given long lists of words belonging to common categories (e.g., birds: pigeon, sparrow; furniture: dresser, hat rack; etc.) with instructions to remember the objects in each category. Afterwards, some people are given a free recall test, in which they receive a blank sheet of paper with instructions to recall as many words as possible from the list. In one experiment, people remembered 19 of 48 studied words under these conditions (Tulving and Pearlstone, 1966). What happened to the missing 29 words? Were they not well encoded and stored? Another group of people received a cued recall test with the category names given as retrieval cues. In this condition, subjects recalled about 36 words, or almost twice as many as in free recall. This shows that the failure to recall words under free recall conditions was due not solely to problems in encoding or storage, but also to retrieval factors. When supplied with strong retrieval cues, people can remember events that seemed forgotten under other conditions.
Many studies, using many different types of materials, have revealed the same general point. It is impossible to make absolute statements about how much or what kind of information is available (or stored) in memory; all we can ever know is what information is accessible (retrievable) under a particular set of test conditions. Change the retrieval conditions (or the nature of the test), and a different estimate of accessible information will be produced.
What determines the effectiveness of retrieval cues? The general rule, supported by considerable research, is the encoding specificity principle, which states that retrieval cues are effective to the extent that they match the way the original events were encoded (Tulving, 1983). In the experiment just described, the category names served as effective cues because they helped to re-create the encoding of the presented words, at least relative to free recall conditions. Similarly, the context in which events occurred can serve as an effective cue, which is why returning to a place from which one has long been absent can bring back memories of old experiences. The encoding specificity principle indicates that it is a mistake to consider either encoding factors or retrieval factors in isolation when discussing memory; rather, the interaction between encoding and retrieval is critical.
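The encoding specificity principle can be illustrated with a toy sketch: a memory trace is stored as the set of features present at encoding, and a cue's effectiveness is its overlap with that stored set. The function names and feature values here are purely illustrative, not drawn from any published model.

```python
# Toy illustration of the encoding specificity principle: a cue is
# effective to the extent that its features overlap the features
# present when the item was encoded. All names here are hypothetical.

def encode(word, context):
    """Store a memory trace as the set of features present at encoding."""
    return {word} | set(context)

def cue_effectiveness(trace, cue_features):
    """Proportion of the cue's features that match the stored trace."""
    cue = set(cue_features)
    return len(cue & trace) / len(cue)

# "sparrow" studied under the category "bird"
trace = encode("sparrow", context=["bird", "list-1"])

# The category name, present at encoding, is a strong cue ...
assert cue_effectiveness(trace, ["bird"]) == 1.0
# ... while a cue absent at encoding retrieves nothing.
assert cue_effectiveness(trace, ["furniture"]) == 0.0
```

On this sketch, the category names in the Tulving and Pearlstone experiment work as cues precisely because they were part of how the words were encoded.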
Remembering is thus best conceived as the successful interaction of encoding and retrieval. Consider, for example, the effects of distinctiveness on recall of the event to be remembered. If a person sees a picture in a list of 99 words, it will be well recalled, but the same picture would be poorly recalled after being embedded in a list of 99 other pictures. Although the manipulation of distinctiveness occurs during the encoding stage of the memory experiment, the reason for its effectiveness probably depends critically on retrieval. The retrieval cue 'picture in the list' identifies only one item in the list, helping to remind the person of that one distinctive item, but the same cue is essentially useless when a large number of pictures has been studied. The same argument can be made for the other encoding factors described above: understanding how each affects retention requires consideration of retrieval factors too.
As another illustration of the interaction between encoding and retrieval factors, consider the effects of drugs on memory. Most drugs that depress activity in the central nervous system harm memory. Drinking alcohol or inhaling marijuana, for example, produces poor recall of events that occur while the person is under the influence of the drug. The traditional explanation has been that these drugs harm the brain's ability to encode and store events, so retention is poor. Although this explanation in terms of encoding factors is probably partly correct, it is not the whole story, because retrieval factors (in interaction with encoding) come into play in an interesting way. This is observed in the phenomenon of state-dependent retrieval: How well an event is remembered depends on the person's pharmacological state both during encoding and during retrieval. Matching states during both phases aids retention relative to mismatching states.
In the most common type of experiment on state-dependent retrieval, four groups of people are tested, as in an experiment by Eich, Weingartner, Stillman, and Gillin (1975). Two groups studied words in a categorized list like the one described earlier while sober, whereas two other groups were given a drug prior to study. A day after studying the material, the people returned and were tested either sober or intoxicated, with all four possible combinations of study and test conditions being used (sober at study and sober at test, etc.). People were given a free recall test followed by a cued recall test; because these researchers used categorized word lists, the retrieval cues were category names. Consider first the free recall results. The first two groups showed the standard effect of the drug on memory: people who were intoxicated during encoding remembered less of the information when tested sober than did people who were sober on both occasions. The results of the third group showed that intoxication during only the retrieval phase also inhibits recall, although not as badly. The interesting case is the fourth group: people who were intoxicated during study actually recalled the information better if they were intoxicated again during the test. The advantage of the drug-drug condition over the drug-sober condition is what defines the phenomenon of state-dependent retrieval: matching the pharmacological state during study and test improves recall. (These results, which have been replicated many times, do not imply that depressant drugs aid memory; the sober-sober condition always produced the best retention.)
These same general principles also hold in research on mood and memory. People who learn information while depressed, for example, remember it better when they are depressed rather than happy (and conversely). Again, this outcome occurs in free recall but not in cued recall.
The phenomena just discussed show the powerful interaction of encoding and retrieval conditions: understanding any memory phenomenon requires considering encoding factors, retrieval factors, and their interaction. This is true even of mnemonic devices, the memory improvement techniques that have been of great interest to scholars throughout recorded history. The most common techniques have been repeatedly discovered and employed. All mnemonic techniques exploit these general principles and supply strategies for both effective encoding and effective retrieval.
Nonetheless, our memories are remarkable for being as accurate as they are. People who are rendered amnesic as a result of brain damage must be institutionalized or receive complete care at home, because our ability to remember affects everything that we do and every aspect of our being. (Imagine not being able to remember names, faces, where you put things, who told you facts, and so on.) Yet, as good as our memories are under normal circumstances, we are acutely aware that they are not perfect. We forget where we parked our car, our friend's telephone number, and important appointments. More surprising, we can systematically misremember events. That is, we do not forget that some event occurred, but we misremember the details, or even the gist, of what happened. We consider these issues under the headings of forgetting and false memories. Forgetting means the loss of information over time. The standard research design is that different groups of people learn the same materials and then are tested (using some standard test) at various times after original learning, and the forgetting curve is plotted from the various groups' performance. Forgetting in this sense does not necessarily imply that the forgotten information has vanished from the brain; testing at any interval with more powerful retrieval cues might show recovery of the forgotten information. But it remains useful to speak of forgetting as loss of information over time when tested in a particular, constant way.
The nature of the forgetting function is relatively clear, but the explanations for forgetting are more unsettled. The earliest idea was simply that memories decay over time: just as muscles atrophy without use and become weaker, memory traces were thought to have a certain strength that decayed over time if they were not used. However, this notion has been discredited as a general explanation of forgetting (McGeoch, 1932). No mechanism is postulated; further, decay is occasioned by time, but time itself is not an explanatory construct. (Suppose a child asked why her bicycle rusted when left outside in the rain for a long time. Telling her that time caused the rust would not do, whereas an explanation in terms of oxidation - the process operating over time - would be more accurate.) In addition, empirical evidence showed that forgetting could be greater or lesser over time depending on the intervening conditions. In particular, if the time between learning some event and being tested on it is filled with similar events, greater forgetting occurs. This fact turned psychologists away from decay as an explanation of forgetting and toward interference.
Interference is undeniably critical to forgetting, but there is still no complete explanation of interference effects. Two classes of interference exist: proactive and retroactive interference. Suppose you try to remember the exact spot where you parked your car when you arrived at work on Monday, two weeks ago. This represents a difficult task for most of us because of interference, since we park our car in different locations every day. All the times you parked your car before the day in question produce proactive interference for the target memory; all the places you parked your car after the day in question exert retroactive interference. The names indicate that earlier events can interfere with retention of events coming later, a proactive effect, or later events can interfere with earlier ones, a retroactive effect. These two classes of interference have been systematically examined for almost a hundred years, and both can be quite potent in causing forgetting under appropriate circumstances.
Forgetting usually refers to the omission of information: we try to remember something, and either nothing comes to mind, or what does come to mind can be rejected as the wrong information. The issue raised under the rubric of 'false memories' is whether we can vividly remember an event and its surrounding details when either the event never actually occurred, or it happened in a way very different from the way it is remembered. This issue of erroneous memories has been investigated since the turn of the century, and the research has occasionally played a large role in the wider world, such as in legal cases where the accuracy of eyewitness memories of crimes is at stake. Psychologists have now identified several factors that reliably lead to the creation of false memories.
One of the most potent factors creating false memories is retroactive interference. We considered the role of interference in forgetting, but interference does not lead simply to omissions of memories; it can also lead to false memories. People can become confused about the source of material and can incorporate information that they read or heard about after an event's occurrence into their recollection of the event. E.F. Loftus (1991) has reported many experiments documenting this phenomenon. In the basic paradigm, people witness a simulated accident or crime (say, a robbery) presented on videotape or in a series of slides. At some later point, they read a passage or answer a series of questions. In an experimental condition, the passage or questions contain some erroneous information about the original scene, such as the statement that the robber had a mustache (when in fact he did not). Subjects in a control condition read the passage without the misleading information. Later, subjects in both conditions receive a recognition or recall test in which they are asked about the crime or accident. Interest centres on memory for the misleading information that was planted later. The outcome in dozens of experiments is that people will frequently remember the erroneous information as having actually happened in the original event, although the magnitude of the misinformation effect (as it is called) depends on many factors. The misleading information not only causes forgetting of what really happened, but seems to replace the correct information with erroneous information.

One practical implication is that suggestive questioning of witnesses to a crime by police or lawyers can undermine the witnesses' accurate retention of what really transpired.
A second method of creating false memories is through presentation of related information. If people read a list of related words, or hear a prose passage, they will often mistake another related word or sentence as actually having occurred when in fact it did not. In one straightforward paradigm for creating such a memory illusion, people hear lists of words that are all associatively related to a word that is not presented. For example, they hear 'hill, valley, climb, summit, tops, peak . . .', all of which are associates of the nonpresented word 'mountain'. Subjects frequently recall the word mountain as having occurred in the list and recognize it as often as they do words that actually were presented (Roediger and McDermott, 1995). These illusory memories may be due to a failure of reality monitoring, as Johnson and Raye (1981) call it: Did I hear something, or did I only imagine it?
As the previous question indicates, a third potent source of false memories is imagination. Just as imagery can boost retention of events that actually did occur, as described earlier, so can imagination create false memories. If people imagine events, they are more likely to think the events really happened when they are tested later. In addition, imagining events can inflate one's estimate of the frequency with which the events actually occurred.
The three factors listed - interference, related information, and imagination - all reliably produce false memories. The issue is a critical one for understanding memory and will be the focus of continuing research in years to come.
Production systems typically represent declarative memory items in terms of frame-like structures called frames or schemas. Each frame is simply a list of attribute-value pairs in which attributes represent dimensions (e.g., colour, size, location, etc.) that take on the values of the entity that the memory item denotes. For example, a declarative memory item representing some visual object might have a slot for the object's colour, another slot for the object's shape, and yet another slot for the object's position. Different kinds of items can have different sets of slots. One can think of the different combinations of slots as representing different object categories, as well as relations between objects.
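A minimal sketch of such a frame can be written as a dictionary of attribute-value pairs. The slot names and example objects below are illustrative assumptions, not taken from any particular production system.

```python
# A minimal sketch of a frame: a set of attribute-value pairs in which
# the slots (attributes) represent dimensions such as colour, shape and
# position. Slot names here are illustrative only.

def make_frame(**slots):
    """Build a frame as a dictionary of attribute-value pairs."""
    return dict(slots)

# A declarative memory item for a visual object:
block = make_frame(category="block", colour="red", shape="cube",
                   position=(3, 1))

# Different kinds of items carry different sets of slots; a relation
# between objects can itself be a frame whose slot values are frames.
on_relation = make_frame(category="on", above=block,
                         below=make_frame(category="table", colour="brown"))

assert block["colour"] == "red"
assert on_relation["above"]["shape"] == "cube"
```

Nesting frames inside slot values, as in `on_relation`, is how this scheme expresses relations between objects and, by further nesting, relations between relations.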
Such frame-like memory structures provide precise, powerful representations of things in the world, including objects, relations between objects, and relations between relations. This representational power is especially important when one tries to build systems that do complex problem solving. However, this form of memory representation often has difficulty in situations where the knowledge is more continuous and less hierarchical (e.g., low-level vision).
Interestingly, the particular organization of declarative knowledge in a production system usually does not have immediate consequences for the system's performance. That is, one can get similar behaviour from very different organizations of memory items. For example, one can use a single declarative memory element with many slots representing all that one knows about some individual, or one can have a large number of declarative elements each representing an individual fact about that individual. A production system can function equally well with either representation scheme. The reason is that what matters is primarily whether information is contained somewhere in memory, not so much which information is stored together. If a different organization is selected, the productions are rewritten to accommodate the new structure. It is important to note, however, that in production systems that learn, the organization of declarative memory can have a strong influence on performance.
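The equivalence of the two organizations can be sketched as follows: the same knowledge about a (hypothetical) individual is stored once as a single many-slot element and once as several single-fact elements, and a lookup that only asks whether the information exists somewhere in memory works identically on both.

```python
# Sketch: the same knowledge about an individual stored two ways.
# Organization A: one declarative element with many slots.
memory_a = [{"name": "Ada", "occupation": "engineer", "city": "London"}]

# Organization B: many declarative elements, each a single fact.
memory_b = [{"name": "Ada", "occupation": "engineer"},
            {"name": "Ada", "city": "London"}]

def lookup(memory, name, attribute):
    """Find an attribute of an individual, whichever way facts are stored."""
    for element in memory:
        if element.get("name") == name and attribute in element:
            return element[attribute]
    return None

# A production needing Ada's city matches either organization equally well.
assert lookup(memory_a, "Ada", "city") == "London"
assert lookup(memory_b, "Ada", "city") == "London"
```

The lookup only cares that the fact is contained somewhere in memory, mirroring the point that which information is stored together usually does not matter for performance.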
Results of computation are stored in a potentially temporary declarative memory. Declarative memory does more than represent objects and features in the environment: it also represents the intermediate results of tasks that cannot be solved all in one step. For example, when mentally multiplying two two-digit numbers, you must mentally store the intermediate products. Thus, a production system for doing this would contain some declarative memory elements that represent the (external) multiplicands as well as other declarative memory elements that represent the (internal) partial products. Another way in which declarative memory serves this function is in storing goals and sub-goals.
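The multiplication example can be sketched directly: a dictionary stands in for declarative memory, holding both the external multiplicands and the internal partial products. This is an illustrative sketch of the bookkeeping, not a model of any particular production system.

```python
# Sketch of declarative memory holding intermediate results while
# multiplying two two-digit numbers: the external multiplicands and
# the internal partial products are both stored as memory elements.

def multiply_with_memory(x, y):
    memory = {"multiplicand": x, "multiplier": y}    # external inputs
    tens, ones = divmod(y, 10)
    memory["partial_ones"] = x * ones                # e.g. 23 * 5
    memory["partial_tens"] = x * tens * 10           # e.g. 23 * 40
    memory["product"] = memory["partial_ones"] + memory["partial_tens"]
    return memory

m = multiply_with_memory(23, 45)
assert m["partial_ones"] == 115
assert m["partial_tens"] == 920
assert m["product"] == 23 * 45
```

Whether elements like `partial_ones` are erased once the product is found, or linger in memory, is exactly the design question taken up next.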
This function of declarative memory raises another important and related question: Are these declarative memory elements permanent? In particular, are all the intermediate products of complex tasks erased after the task is complete, or do they leave long-lasting declarative memory elements? The basic problem is that the more information there is sitting around in declarative memory, the more likely it is that many productions will be satisfied simultaneously. This, in turn, complicates the process of conflict resolution. Moreover, this issue relates to a common psychological finding considered to be a basic feature of human cognition: the limited nature of short-term or working memory.
Production system designers have proposed a wide range of answers to these questions. At one extreme are systems in which items stay around forever once they are created. At the other extreme are systems in which items are deleted once the system moves on to the next task. The only way in which such systems can remember facts over long time spans is to have productions that re-create the facts in declarative memory when they are required. Intermediate between these two extreme approaches to dealing with the duration of declarative memory elements are those systems in which the elements vary in activation (which in turn determines how available or easily retrieved they are). The activation increases each time the represented facts or items are encountered and decays with time after each encounter. At first blush, it would seem obvious that the vast body of empirical evidence from experimental studies of human memory could be used to select among these approaches. However, it turns out that one can produce the effect of a limited working memory using any of these schemes, and the ultimate answer will require both further experimental evidence and detailed modelling of those experimental results.
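The intermediate, activation-based scheme can be sketched as follows. The boost and decay-rate values are arbitrary illustrative assumptions; actual systems tune such parameters against experimental data.

```python
import math

# Sketch of the intermediate scheme: each declarative element carries an
# activation that rises when the fact is encountered and decays with
# time. Boost and decay-rate values here are illustrative only.

class MemoryElement:
    def __init__(self, content, decay_rate=0.5):
        self.content = content
        self.activation = 0.0
        self.decay_rate = decay_rate

    def encounter(self, boost=1.0):
        """Each encounter with the represented fact raises activation."""
        self.activation += boost

    def decay(self, elapsed_time):
        """Activation decays exponentially between encounters."""
        self.activation *= math.exp(-self.decay_rate * elapsed_time)

item = MemoryElement("robin is a bird")
item.encounter()
item.encounter()                 # repeated encounters strengthen the element
peak = item.activation
item.decay(elapsed_time=4.0)
assert 0.0 < item.activation < peak   # less available after a delay
```

Under such a scheme, only the most active elements are readily retrieved, which is one way to produce the effect of a limited working memory.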
How does knowledge interact, and how does learning become generalized? Production systems provide strong answers to these fundamental questions: (1) learning occurs at the unit of the production; (2) transfer from one situation to another occurs to the extent that the same productions are applicable in both situations. This assumption about the modularity of productions allows production system designers to determine - through a detailed analysis of a task domain, a careful encoding of the verbal and behavioural protocols of human problem-solvers, or both - what the individual productions are and simply add them to the system. One does not have to decide where to put a production: its conditions define when it will be used.
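The modularity point can be sketched with a minimal match-act loop: each production is a condition-action pair, adding knowledge is just appending a rule, and the conditions alone determine when it fires. The goal labels and rules below are hypothetical.

```python
# Minimal sketch of production modularity: productions are modular
# condition-action pairs; adding one requires no placement decision,
# because its condition defines when it will be used.

productions = []

def add_production(condition, action):
    productions.append((condition, action))     # simply add it to the system

def run_cycle(working_memory):
    """One recognize-act cycle: fire every production whose condition matches."""
    fired = []
    for condition, action in productions:
        if condition(working_memory):           # match phase
            fired.append(action(working_memory))  # act phase
    return fired

add_production(lambda wm: "goal: add" in wm,
               lambda wm: wm["x"] + wm["y"])
add_production(lambda wm: "goal: compare" in wm,
               lambda wm: max(wm["x"], wm["y"]))

# Only the production whose condition matches the situation fires.
assert run_cycle({"goal: add": True, "x": 2, "y": 3}) == [5]
```

Transfer falls out of the same mechanism: a new situation engages exactly those productions whose conditions it satisfies, so shared productions mean shared skill.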
Because of their modularity, production systems scale up well to complex tasks. That is, not only do production systems function well on small, simple tasks; they also function well in more realistic environments involving many sub-tasks and thousands (or more) of bits of knowledge. For example, TacAir-Soar, a production system with tens of thousands of productions, can fly a simulated plane in a dogfight (in real time) while doing language comprehension and production, and is capable of providing a verbal summary of the mission afterwards (Tambe et al., 1995).
Nonetheless, there are a number of areas in which production system models have already done very well, and they are arguably the strongest (and occasionally the only) models in those areas. These areas are almost exclusively instances of higher-level cognition and generally require the coordination of many kinds of knowledge. They include learning mathematical skills such as algebra and geometry, learning computer programming skills, language comprehension, scientific discovery, and many other forms of high-level complex action and reasoning.
Nevertheless, a pair of minor crises concerns relations between memory for lists and memory for sentences. By postulating two autonomous systems for processing and storing lists versus sentences, current multi-store theories of memory illustrate the SPM assumption that information processing and storage take place within autonomous modules, or stages. For example, Alan Baddeley, a leading British researcher investigating the psychology of memory, postulates a memory system known as the 'phonological loop', which processes and stores word lists in raw phonological form for short periods of time and is separate and distinct from the system for processing and storing the syntax and meaning of sentences (the central executive).
Baddeley's multi-store account of memory currently faces two sorts of empirical crises. The first concerns cases where sentence variables influence list processing in ways that would not be expected if fundamentally autonomous memory systems process sentences versus lists. By way of illustration, consider a recently discovered effect whereby syntactic and semantic factors influenced immediate recall of words in rapidly presented lists (MacKay and Abram, 1936). That study compared immediate memory for identical words in chunked versus unchunked lists that were six to eight words long and rapidly presented via computer so as to preclude rehearsal. To explain the resulting effects and meet this crisis in general, multi-store theories must explain how semantic/syntactic factors influence a supposedly separate store traditionally viewed as purely phonological in nature.
1. Chunked list: phrase good faith mind night gown film (phrases italicized)
2. Unchunked list: phrases people faith mind night hose film (unrelated words).
The second crisis concerns phenomena in immediate recall of sentences that are attributable to factors that characterize lists. Introducing repeated words into spoken sentences causes a short-term memory phenomenon known as repetition deafness that is otherwise observed only within lists. To explain this result and meet the crisis in general, multi-store theories must explain how a phenomenon characteristic of lists can arise in the supposedly autonomous memory system for storing and processing sentences.
Since the mid-1950s, the various research programs within cognitive science have advanced our basic understanding of human mental function. Over the past 20 years, this basic science of mind has also contributed to the genesis of an applied science of learning and teaching that can powerfully inform educational practice and dramatically improve educational outcomes (Bruer, 1993). Classroom practice based on this applied science differs from traditional instruction in several ways. Instruction based on cognitive theory envisions learning as an active, strategic process. It assumes that learning follows developmental trajectories within subject-matter domains. It recognizes that learning is guided by learners' introspective awareness and control of their mental processes. It emphasizes that learning is facilitated by social, collaborative settings that value self-directed student dialogue.
Each of these instructional features has its roots in a specific cognitive science research program. Research on human memory has established that memory is an active, strategic process, supporting the contention that learning itself is active, strategic, and constructive. Research on problem solving within subject-matter domains has resulted in descriptions of domain-specific learning trajectories that specify in some detail the knowledge and skills required for expertise within domains and how knowledge and skills are best organized to enable expert performance. Research on metacognition has shown that awareness and control of one's own mental processes can guide understanding and learning. Research on sociological factors in cognition has provided significant new insight into the importance of language, collaboration, and social discourse in cognitive development and learning.
Research on human memory has been a central pursuit of experimental psychology since its inception a century ago. A claim fundamental to cognitive psychology, which distinguishes it from behaviourism, is that the mind is an active information processor, not a passive communication channel. Early on, cognitive psychologists argued that we overcome intrinsic limitations on our short-term and working memory capacity by actively recoding knowledge into more complex symbol structures, or chunks. This suggested that learning might involve active, strategic recoding of knowledge structures in an attempt to discover the most efficient chunks for any given task. Cognitive research elaborated on an earlier, 1932 insight of F.C. Bartlett about how long-term memory functions: stimuli that cohere with prior existing memory schemata are better recalled than stimuli that fit poorly into prior schemata. The result was the development of schema theory, which has contributed to how educational psychologists think about conceptual change. Cognitive research on memory provides empirical support for constructivist approaches to learning and teaching.
One of the most educationally significant results arising out of this research program is the encoding specificity principle. To remember a percept, we perform specific encoding operations on it which determine what is stored in memory. In turn, what is stored in memory determines what cues will be effective in helping us retrieve that memory trace (Tulving, 1983). Success on a memory retrieval task is not a function of strength of mental representation alone. There is a striking interaction between memory encoding and retrieval processes. In fact, the utility and efficacy of a particular memory process depends on, and interacts with, eventual retrieval conditions. A more general, educationally salient formulation of this same result is Morris, Bransford, and Franks's (1977) transfer-appropriate processing. The value of particular types of acquisition activities can be assessed only in relation to the type of activities that subjects will be expected to perform at the time of retrieval or test. According to this principle, it is not possible to determine the value of learning activities in themselves. The value of a learning activity can be determined only relative to what one expects students to do with the material they are expected to learn.
Research on human memory tells educators two things. First, encoding interacts with retrieval, so acquisition conditions interact with recall performance. Thus, the nature of the learning activity itself is central to determining one's subsequent ability to transfer that learning to new situations. Second, the interaction between encoding and retrieval is mediated by, and develops out of, learners' prior understanding, their pre-existing knowledge, and their pre-instructional schemata. If memory is an active, constructive process, then one's prior knowledge structures, current learning conditions, and future application conditions are inextricably intertwined. Cognitively sound instruction should build on this architectural feature of human memory.
Recognizing that one's prior knowledge structures influence current learning has had a substantial impact on science instruction. Cognitive and educational researchers have documented numerous misconceptions that all of us have about how the physical world operates. They have found that these misconceptions are largely impervious to traditional science instruction. In physics, for example, misconceptions persist even after extensive formal instruction. Traditional instruction does not correct one's prior misconceptions, because it ignores them. Ignoring one's pre-instructional understanding allows one to interpret and encode traditional science instruction using these pre-existing naive memory schemata. The result is that one can encode, or learn, schemata that are very different from those which teachers are attempting to impart.
Instructional approaches that attempt to assess one's pre-instructional knowledge and beliefs about scientific principles are significantly more successful than traditional science instruction in correcting misconceptions and imparting more expert-like understanding of science. Jim Minstrell and Earl Hunt, for example, developed a cognitive approach to high school physics instruction, a curriculum they called 'physics for understanding' (McGilly, 1995). Each instructional unit begins with a diagnostic test that allows the instructor to identify students' prior understandings and observe how they reason with them. Minstrell and Hunt call the pieces of science knowledge that students use in their reasoning knowledge facets. Among the knowledge facets which a student brings to a specific problem, some are incorrect, but others are correct. Correct facets can be used as anchors for instruction, to help students construct more expert-like schemata. Incorrect facets become targets for instructional change. Evaluations that have compared Minstrell and Hunt's approach to traditional instruction and other experimental physics curricula show that students in the Minstrell-Hunt curriculum acquire significantly superior understandings of physics and scientific reasoning. Applying our understanding of memory in the design of science instruction can result in curricula which help students correct their naive understandings and misconceptions. Such instruction is significantly more effective than traditional approaches.
However, what 'representation' means varies from discipline to discipline and from theory to theory. Aside from complicating matters of interdisciplinary discourse in a discipline that is inherently interdisciplinary, representational pluralism in cognitive science ensures that much of what gets posited as internal representations are representations just in virtue of the description placed on cognitive processing. Cognitive scientists use 'representation' to refer to a wide range of phenomena (e.g., processes, mappings, rules, theories, information-bearing states, causally co-varying structures, and so forth). As such, it is not obvious that everything that gets called a representation warrants the label; some notions of representation are so trivial and uninteresting that cognitive scientists are guaranteed to find things satisfying them, and that is not good science. Although it is almost universally assumed that all cognitive processes are computational processes, and all computational processes require internal representations as the medium of computation, an anti-representationalist challenge has arisen from discussions of several computation-related issues. Are intelligent systems computational systems? Is a symbolic computational framework a plausible framework for explaining biological cognitive processing? Do computational simulations explain how the mind/brain works? Thus, for a variety of reasons, some cognitive scientists contend that the status of internal representations may be as problematic as that of phlogiston.
A fragmented future will not satisfy many committed to cognitive science. But those committed to an integrated cognitive science may discover that the potential for fracture is not as serious as it seems. At present the dynamicists' challenge is not fully formed. Central to the challenge are the notions of information processing and representation, but these notions are currently vague and must be theoretically regimented. It may well be that a mature dynamical account will posit genuine information processing and representation, although the representations employed will not be syntactically structured. The idea of sentence-like representations (Fodor's language of thought) is, in any case, under severe attack from a number of quarters in contemporary cognitive science. Other models of representation have come to the fore - graphs, maps, holograms, house plans, and other nonsentential schemes - and many investigators are exploring the idea that the brain may process information using one or more of these other kinds of representation.
Nonetheless, to explain how a given system does what it does, positing internal representations would be required just in case the system trafficked in entities whose content-bearing status does not depend on our descriptions or interpretations. Adopting the less than ideal vocabulary found in the literature on intentionality, an intrinsic representation bears content even if no one were ever to see it: it will bear content for as long as it exists. Such is the case because its ontological status depends on its being a content-bearer. Photos have this feature because, unlike, say, rocks, photos are produced to be content-bearers. Not everything has this feature. The contrast class is extrinsic representations - content-bearing entities whose status as representations does depend on our descriptions or interpretations. Since anything can be described as if it bore content, anything can be an extrinsic representation; but not everything is an intrinsic representation.
Do brains produce intrinsic (internal) representations? In all probability they do. Whereas photos are produced by a mechanical process designed to produce entities that are ontologically dependent on being content-bearers, a plausible evolutionary analog would be the products of mental imagery. Surely mental images of one's past experiences are intrinsic representations if anything is. Linguistic tokens are another candidate: once tokened at such-and-such a time, tokens of that type will always bear content. So, if either mental images or linguistic utterances are intrinsic representations, some intrinsic representations are products of biological cognitive processing. What is at issue is this: do internal representations mediate the processes underlying the production of such representations? As these processes are supposed to be computational processes, the answer is supposed to be yes.
Given the right sort of interpretation, analog quantities or distributed patterns of activation, like anything else, can be representational. But since it is the interpretive process alone that makes them representations, they are, at best, extrinsic ones. While such constructs are descriptively useful, trying to pass them off as internal representations trivializes whatever gain representation-talk is supposed to contribute to our understanding of nonsymbolic analog processing (Stufflebeam, 1995). It also immunizes representationalism from being falsified. So much the worse for representation.
Representation-talk is fraught with controversy. This is so, in part, because cognitive scientists posit representations while remaining ambivalent, at the very least, about the ontological problems associated with the practice. Also, it is far from obvious that everything that gets called a representation merits the name, much less that they are internal representations. Resolving these related tensions requires much in the way of reexamination, and includes asking such questions as:
1. Why should representation-laden computational descriptions qualify as mechanistic explanations?
2. To what extent are internal representations artifacts of the interpretation we put on cognitive processing?
3. To what extent do our commonsense intuitions about vision predispose us to find representations in perceptual processing, even though representation-talk seems appropriate only when the system needs to keep track of external objects that are not immediately present?
4. If any internal pattern of activation counts as a symbol (or an internal representation), what possible empirical evidence would count against the notion that all intelligent systems operate over symbols (or internal representations)?
5. Is there any level of complexity at which one would not posit internal representations to explain how a system works? If there is, why are mechanistic explanations of the simplest biological processes representation-laden?
6. How much computational labour do biological intelligent systems off-load to their environment, thus minimizing the need for internal representations?
Aside from sensitizing ourselves to the unconsidered use of representation-talk, another result is that we can be full-blooded computationalists without committing ourselves to the view that the brain processes information in the same way as do our representation-laden computer simulations. Where the ontology of biological intelligent systems is concerned, representation-related conservatism is a small price to pay for a commitment to naturalism, hallowed be its name.
What cognition requires is the flexible coupling of perception and action. Whether direct or complex, this coupling depends on representing information and operating upon it. Thus, representation and its partner, processing, are the most fundamental ideas in cognitive science. Representations are the bundles of information on which processes operate. Cognitive processes such as perception and attention encode information from our perception of the world, thus creating or changing our representations. Processes of reasoning and decision making operate on representations to form new beliefs and to specify particular actions. Process refers to the dynamic use of information; representation refers to the information available for use. Loosely speaking, representations include the ideas, sights, images, and beliefs that fill our thoughts, and also the sensations and dispositions which may fall outside our awareness. Because representation is such a central concept in cognitive science, the term is used in a number of related senses, and more specialized uses arise as the need does.
We have many intuitions about the information that is part of our own thinking or that is needed for the operation of an artificial system, and these intuitions are often a valuable starting point and source of hypotheses about representation. However, it is also frequently the case that our intuitions are incorrect or lacking altogether. This leaves a large set of problems regarding representation open for study, and cognitive scientists investigate many of them. What are the representational components of visual perception? What representations does an infant have to aid initial language learning? What representations will allow a computer system to diagnose blood diseases or a robot to navigate in unfamiliar territory? Different research goals emphasized by different disciplines within the cognitive sciences motivate different types of questions about representation: what people use, what a computer application needs, or what the nature of logic, language, or imagery might be.
Sometimes it is useful to separate questions about representation from those about processing. Consider a psychological example. An air traffic controller might err because of an incorrect representation of critical information about loss of altitude, or because of a processing slip due to attentional overload at the critical moment; identifying which was the case might be important, both theoretically and practically. The difference between representation and processing is often a useful contrast.
The most fundamental contrast in understanding representation, however, is the contrast between the representation and the thing represented. All representation systems involve a relation between a represented world and a representing world (Palmer, 1978). The represented world provides the content that the representations are about, and the representing world carries information about the represented world. Intentionality is therefore an important characteristic of cognition. It is useful to think of cognitive states as involving relations to intentional objects, even though the notion of an intentional object raises deep questions in philosophical logic. It is unclear whether all mental life involves intentionality, or whether there are raw feels. Certainly, many kinds of feelings involve intentionality: emotions, for example, and bodily feelings. Knowledge and perception have intentional content, and appreciation of this fact undermines the standard sense-datum argument and helps to avoid mistakes in studying imagery. Understanding the intentionality of language, pictures, and other symbols and representations requires a distinction between using symbols to communicate ideas and using symbols to calculate or think with. The intentionality of symbols used in communication may be derivative of the original intentionality of symbols used in thought and calculation. However, it is controversial whether the mere use of symbols in the right way is enough to give them original intentionality.
Our mental representation of some event does not contain the same information as the event itself. This difference shows up when two people recall the same conversation and discover that their memories are very different; of course, if each mental representation had the same information as the event itself, then two mental representations of a given event would be the same. Even the simplest percept is not the same as the stimulus which triggered it. Our perception selects, organizes, and sometimes distorts information from the perceived world. The perception of one individual differs from that of another, and differences across species are greater still.
Mental representations, then, are the internal systems of information used in perception, language, reasoning, problem solving, and other cognitive activities. Mental representations cannot be observed directly; their nature is inferred from observing the information to which a person is sensitive and the distinctions a person uses. As with external representations, there may be different kinds of mental representation systems, such as kinesthetic, linguistic, and visual. What is the represented world when the representing world is mental representation? Most simply, mental representations represent information about the external world - the perception of a face or the memory of a conversation. Further, some of these external things are themselves representations: photos, textbooks, menus, and so forth. In addition, mental representations can be about internally generated information, such as remembering a past thought or considering a newly generated idea or goal. Something is a mental representation because of its role in a person's (or animal's) cognitive system, not because it is about one thing versus another. (Some researchers, perhaps following Piaget, restrict the term mental representation to re-presentations of information from long-term memory, unavailable from perception, but this restricted use is not the dominant one.)
Theoretical representations are part of a theory about something, arrived at by reasoning from evidence. They provide an abstract model of the target domain, be it the movement of beach sand, economic growth, or human cognition. My theory of perception might claim that people represent rectangles in terms of size and shape; my theory about stereotyping might claim that non-group members represent the social group African-Americans with an average of media presentations; my theory about decision making might claim that people represent choices in terms of worst envisionable outcomes. Representations in a theory of cognition often have two layers of correspondence. First, the representations in the theory are taken to correspond to the mental representations in people's minds: that is, to the represented world of the theory. If the theory is a good one, it will represent the distinctions that are actually important to human cognition and will not introduce distinctions which do not matter. Second, these theoretical representations of mental representations indirectly correspond to things in the world, such as an actual rectangular structure.

It goes without saying that by the early 1980's, certain kinds of difficulties were arising quite persistently and quite systematically within classicism. Examination of these difficulties makes it seem likely that they are not mere temporary setbacks but difficulties in principle, stemming from fundamental assumptions of the classical framework. The difficulties centred largely around what has come to be called the frame problem. In its original form, the frame problem was concerned with the task of updating one's system of beliefs in light of newly acquired information. If you learn that Mary has left the room, you will stop believing that Mary is in the room, and also stop believing, for example, that someone is sitting on the sofa and that there are four people in the room. You will also make some obvious inferences from the new information: for example, that the clothes Mary was wearing and the package she was carrying are no longer in the room. But most of your beliefs will not be affected by the new information. Human beings adjust their beliefs in response to new information so naturally that it is surprising to find that it is a problem. But it has proved quite difficult for classical cognitive science.
For a belief system of any size, obviously, it is not possible to examine each of the system's beliefs to see if it needs to be changed. Thus, Jerry Fodor (1983) construes the frame problem as 'the problem of putting a frame around the set of beliefs that may need to be revised in light of specific newly available information'. Seen this way, the problem is fundamentally one of relevance: to provide an effective, general procedure that will determine the beliefs to which any particular new belief is at all relevant. Those are the beliefs that get framed. Which of these relevant old beliefs actually need to be revised in a given case is then a further question.
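The shape of the difficulty can be made concrete with a toy sketch. In the following Python fragment (every belief, name, and the relevance test are hypothetical stand-ins, not a model anyone has proposed), a 'frame solution' must scan the entire belief set on each new item of information, and the crude relevance rule still misses a belief that is in fact relevant:

```python
# Toy sketch of a 'frame solution' (all beliefs and the relevance
# test are hypothetical stand-ins, not anyone's proposed model).

def revise(beliefs, new_info, is_relevant, update):
    """Scan the whole belief set, frame the 'relevant' subset,
    then revise only the framed beliefs."""
    framed = [b for b in beliefs if is_relevant(b, new_info)]   # scans everything
    untouched = [b for b in beliefs if b not in framed]
    return untouched + [update(b, new_info) for b in framed]

beliefs = ["Mary is in the room",
           "Someone is sitting on the sofa",
           "There are four people in the room",
           "The price of tea in India is high"]

def is_relevant(belief, info):
    # A crude fixed rule keyed on a shared word. Note that it misses
    # the sofa belief, which is in fact relevant.
    return "room" in belief

def update(belief, info):
    return f"(revised after '{info}') {belief}"

new_beliefs = revise(beliefs, "Mary has left the room", is_relevant, update)
```

That the keyword rule wrongly leaves the sofa belief unframed is the point: any fixed rule of relevance will under- or over-shoot in some context.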
There are several other cognitive activities that pose similar problems of relevance: belief fixation (arriving at a new belief on the basis of diverse and perhaps conflicting evidence), retrieving from memory information that is relevant to solving a current problem or carrying out a current task, and forward-looking tasks such as deciding what to do next, deciding what is morally permissible or obligatory, and making plans.
Apparently, the classical approach in all these areas must be, as Fodor suggests, to attempt to put a frame around what is relevant: that is, to try to introduce rules which determine, for any given item of information, what is relevant to it and what is not. Call such solutions to problems of relevance 'frame solutions'.
Frame solutions appear to be doomed to failure. Human cognitive systems are open-ended. There is no limit to the things a human being can represent, and anything one can represent is potentially relevant to anything else one can represent. Relevance depends upon the question, topic, or problem at hand - in a word, upon context. For virtually any pair of items of information you pick, there will be some context in which one is relevant to the other. (It has been suggested that the price of tea in India is not relevant to the question of whether Fred has had breakfast by 8:30 am. The obvious reply is that it is relevant if Fred happens to be heavily invested in Indian tea and the market has just fallen savagely (Copeland, 1993).)
Our suggestion, then, is that there are no such relevance frames in human cognition, but what other kind of solution is possible within the classical framework? Cognitive science lacks the slightest clue as to how representation-level rules could update memory appropriately or find relevant information efficiently for open-ended belief systems of the kind possessed by humans. Indeed, it seems entirely likely that it can’t be done by systems of rules at all. As Fodor (one of the staunchest defenders of classicism) has written:
The problem . . . is to get the structure of the entire belief system to bear on individual occasions of belief fixation. We have, to put it bluntly, no computational formalism that shows us how to do this, and we have no idea how such a formalism might be developed . . . In this respect, cognitive science hasn't even started: We are literally no further advanced than we were in the darkest days of behaviourism (Fodor, 1983).
The reemergence of connectionism in the 1980's was in large part a response to the problems in classical cognitive science. As problems persisted, many researchers looked elsewhere for a better prospect of positive results, and the only other game in town was parallel distributed processing - connectionism. But this raises a fundamental question that has received surprisingly little discussion: does connectionism have features (fundamentally different from those of classicism) that suggest that it can make progress, not just on other problems, but on the very problems that slowed progress in classical cognitive science?
Classical systems, by their very nature, invoke both representation-level rule execution and representations with language-like syntactic structure. Thus, syntactic structure and cognitive-level rules are two places to look for fundamental differences between connectionism and classicism.
Certain kinds of rules are very prominent in connectionist theory, but they are not representation-level rules. Activation updating within individual nodes and local activation passing from one node to another occur in accordance with rules. (In current connectionist modelling, these are programmable rules; this is why connectionist networks can be simulated with standard computers, as they are in virtually all connectionist modelling. But it is not part of connectionist theory that node-level rules must be programmable.) However, the processing that takes place locally between nodes and within individual nodes is not in general representational. Not every local node activation in a network model represents atomic content, and in some models the activation of a single node never has representational content - all representations, even the most basic or atomic, consist of activation patterns over a whole set of nodes. Thus, the fact that individual nodes are rule-governed leaves open the question of whether the processes that representations undergo in connectionist models must conform to rules.
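What such node-level rules look like can be sketched in a few lines of Python (the sigmoid update is a common connectionist convention, and all the numbers here are made up for illustration):

```python
import math

def update_node(incoming, weights, bias=0.0):
    """A node-level rule: a weighted sum of incoming activations
    passed through a sigmoid. The rule governs one node; it says
    nothing about what, if anything, the node's activation represents."""
    net = sum(a * w for a, w in zip(incoming, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# On the distributed view, a representation is the pattern of
# activation over a whole set of nodes (all numbers are made up).
incoming = [0.9, 0.1, 0.4]
weights_per_node = [[1.0, -0.5, 0.3],
                    [0.2, 0.8, -1.0],
                    [-0.7, 0.6, 0.5]]
pattern = [update_node(incoming, w) for w in weights_per_node]
```

The rule governs each node individually; only the resulting pattern over the whole set of nodes is a candidate for representational content.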
There is an important sense in which even node-governing rules are absent from connectionist systems: networks do not contain explicitly represented rules of any kind. It is sometimes thought that the absence of explicit rules constitutes a watershed difference between connectionism and classicism. But this is a mistake. The rules posited by classicism can be hard-wired into a computational system rather than being encoded as representations. (Indeed, at least some rules executed by a classical computational system must be hard-wired. The node-governing activation-update rules of a connectionist network are analogous to the hard-wired rules of classical systems.)
It is more common to focus on lack of syntactic structure as an alleged difference between connectionism and classicism (Churchland, 1989, 1995; and, as a deficiency, Fodor and Pylyshyn, 1988). Such authors claim that the activation vectors that constitute representations in connectionist systems lack syntactic structure. (A vector is essentially an ordered n-tuple of items; an activation vector is an ordered n-tuple of activation values of specific nodes in a neural network.) This means that the processing of representations in connectionist systems is fundamentally different from the largely syntax-driven processing of representations in classical systems. These writers do not raise the question of whether connectionist processing conforms to programmable rules; implicitly, at least, they evidently suppose that it does. They would describe processing as effecting vector-to-vector transformations, and suggest that such transformations conform to rules that are sensitive to the vectorial structure of the representations. This approach, which we call nonsentential computationalism, repudiates a fundamental assumption of classicism.
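The notion of a vector-to-vector transformation can be sketched as follows (the activation vector and weight matrix are invented for illustration):

```python
def transform(vector, matrix):
    """A vector-to-vector transformation: each output activation
    depends on the whole input vector through the weights, not on
    syntactic constituents of a sentence-like representation."""
    return [sum(w * a for w, a in zip(row, vector)) for row in matrix]

v = [0.2, 0.9, 0.5]        # hypothetical activation vector (an ordered 3-tuple)
W = [[0.5, -0.2, 0.1],     # hypothetical 2x3 weight matrix
     [0.3, 0.3, 0.3]]
out = transform(v, W)      # a 2-element activation vector
```

Processing of this kind is sensitive to the ordering and values of the activation vector, not to anything like the syntactic constituents of a sentence.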
Nonsentential computationalism is not obviously a correct interpretation of all extant connectionist models. On the contrary, there are certain models that are naturally interpreted as involving both representations that have syntactic structure and processing that is sensitive to that structure (Jordan Pollack (1990) and Paul Smolensky (1990), discussed in Horgan and Tienson, 1996). Nor is nonsentential computationalism obviously the most natural or most attractive foundational framework for connectionist cognitive science. One serious reason for doubt is that nonsentential computationalism in effect offers just a seriously limited variant of classicism. It is a variant because it continues to hold that cognition is implemented by processes that conform to programmable rules (so it can be no more powerful than classical cognitive science). It is limited because it eschews an extremely powerful way of introducing semantic coherence into the computational manipulation of representations: the syntactic encoding of propositional information.
One favouring nonsentential computationalism might be expected to reply that connectionist models get by without any explicitly stored memories, with lots of information implicit in the weights, and that networks are trained rather than programmed. However, to the extent that connectionist processing conforms to representation-level rules, we could get these same features in a classical system in which all the rules are hard-wired rather than explicitly represented, and lots of information is implicitly accommodated in the (hard-wired) rules rather than being explicitly stored in memory.
But is it possible for a connectionist system that employs representations to fail to conform to rules that refer to those representations? Indeed it is. In the first place, it is not necessary for the temporal evolution of a connectionist network to be tractably computable. The natural mathematical framework for describing networks is the theory of dynamical systems (Horgan and Tienson, 1996), and if the temporal evolution of a network is not tractably computable, there is no reason to believe that the cognitive evolution of the cognitive system which the network realizes will be tractably computable through representation-level rules.
But in the second place, it is important to understand that a connectionist model may fail to conform to representation-manipulation rules even if it does conform to sub-representational programmable rules that govern individual nodes and local inter-node transactions, as most current connectionist models do: the networks are simulated on standard computers. As a prelude to explaining why this is so, we begin with a preliminary point that is important and not widely recognized. It is possible for a connectionist system to be nondeterministic at the representational level while deterministic at the sub-representational level of node activation updating and local inter-node activation passing. This is because the same connectionist representation can be realized by many different sub-representational states of the system, and the representation-level outcome of processing can depend upon the specific way that a representational state is realized sub-representationally.
One source of multiple realizability of representations is differing degrees of activation of nodes. The realization of a particular cognitive state, say 'A', might consist in each of a given set of nodes being active to at least a certain degree, say 0.8. Then some realizations of this cognitive state will have node 'N' more highly activated than node 'M'; others will have node 'M' more highly activated. It can then happen that from some activation states that realize 'A' the system goes into activation states that realize cognitive state 'B', while from others it goes into activation states that realize a different cognitive state 'C'. So there will be no way of knowing the cognitive-level outcome just from knowing the system's initial total cognitive state. Being nondeterministic at the cognitive level can be a valuable asset in many kinds of competitive activities, such as playing poker or fleeing for one's life. (Note that no randomizing dice-throw rules are involved at any level of description, either representational or sub-representational, as would be required to make a classical system nondeterministic.)
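This point can be illustrated with a deliberately simple sketch (the two-node patterns, the 0.8 threshold from the text, and the update rule are all illustrative):

```python
THRESHOLD = 0.8   # a pattern realizes cognitive state 'A' when every
                  # node in the set is active at least to this degree

def realizes_A(pattern):
    return all(activation >= THRESHOLD for activation in pattern)

def step(pattern):
    """A deterministic sub-representational rule (made up for the
    sketch): the successor depends on which node is more active."""
    n, m = pattern
    return (1.0, 0.0) if n > m else (0.0, 1.0)   # realizes 'B' vs 'C'

r1 = (0.95, 0.85)   # one realization of 'A' (node N more active)
r2 = (0.85, 0.95)   # another realization of 'A' (node M more active)
assert realizes_A(r1) and realizes_A(r2)

b = step(r1)   # the pattern realizing cognitive state 'B'
c = step(r2)   # the pattern realizing cognitive state 'C'
```

The node-level rule is perfectly deterministic, yet knowing only that the system is in cognitive state 'A' leaves the cognitive-level outcome open.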
This preliminary point establishes an important moral: namely, that key features of a connectionist system at the sub-representational level of description need not transmit upward to higher levels of description, because inter-level realization relations can work in ways that block such transmission. That is to say, tractable computability of state transitions can also fail to transmit upward in connectionist systems, so that a system can fail to conform to programmable representation-level rules even though it conforms to programmable sub-representational rules.
Given that the transitions of the underlying network are tractably computable, one might think that the cognitive transitions realized in the network could be computed like this. Starting from a cognitive state, (1) select an activation state that realizes this cognitive state, (2) compute the network's transitions from this activation state through subsequent activation states, and (3) for each subsequent activation state, compute the cognitive state (if any) realized by that state.
Although the assumption that the transitions of the network are tractably computable guarantees step (2), there is no guarantee that step (3) - or even step (1) - will be possible. The function from activation states to cognitive states need not be tractably computable. It is possible, for example, that the simplest, most compact way to specify that function might be an enormous (possibly infinite) list that pairs specific total activation states with specific total cognitive states - a list far too long to be written using all the matter in the universe, let alone to constitute a set of programmable rules.
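The three-step procedure, and where it can break down, can be sketched as follows (all states and tables here are wholly hypothetical):

```python
# The three steps, with wholly hypothetical states and tables.
# Step (2) is a tractable lookup; the realization mapping needed for
# steps (1) and (3) is given only as a brute pairing of activation
# states with cognitive states, which in the worst case cannot be
# compressed into programmable rules.

network_transition = {("a1",): ("a2",), ("a2",): ("a3",)}
realization = {("a1",): "A", ("a2",): "B", ("a3",): "C"}

def cognitive_successor(cog_state):
    # (1) select an activation state that realizes the cognitive state
    act = next(a for a, c in realization.items() if c == cog_state)
    # (2) compute the network's transition from that activation state
    nxt = network_transition[act]
    # (3) read off the cognitive state (if any) realized by the result
    return realization.get(nxt)

succ = cognitive_successor("A")
```

Here the realization mapping happens to be tiny; the point of the passage is that for a real network its most compact specification may be an intractably long list of exactly this brute form.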
If the cognitive transitions realized by a network are not computable in the way just suggested, they need not be tractably computable in any other way either. Thus, one should not infer from the fact that a network's activation-state transitions are tractably computable that it implements a cognitive transition function that is tractably computable. Nor should one suppose that there is an algorithm for computing its cognitive transitions.
The possibility that the realizing function may not be tractably computable is not a mere abstract possibility. Certain connectionist learning algorithms allow models to select their own representations (Pollack, 1990; Berg, 1992; discussed in Horgan and Tienson, 1996). Representations are moulded along with weights as learning progresses; this allows for more efficient schemes of representation, with weights and representations ending up made for each other. It is easy to suppose that complex cognitive systems that worked in this way (as natural cognitive systems apparently do) would have very complex, rich, subtle realizing relations that are not tractably computable.
Given that it is possible for a connectionist cognitive system to fail to conform to programmable representation-level rules, several questions arise. First, if cognitive transitions are not effected by executing such rules, how are they brought about? Second, are there reasons to think it desirable for a system not to be rule-describable? Third, if a system does not conform to rules at the cognitive level, can it be coherent enough and systematic enough to be called a cognitive system at all?
A very natural way to think about cognitive transitions in connectionist systems is in terms of content-appropriate cognitive forces. Beliefs and desires work together to generate forces that tend to push the cognitive system toward output states that would result in particular actions. But those forces can be overcome by stronger forces pushing in different, incompatible directions. A single clue in a mystery might point to the guilt of some suspects and at the same time tend to clear certain other suspects, to varying degrees. Thinking of the clue produces forces that tend to activate some possible beliefs about whodunit and inhibit others. The interaction of cognitive forces in a cognitive system can be very complex. Forces can compete, in that they tend toward incompatible cognitive states, or they can cooperate, tending toward the same or similar outcomes. There can be a large number of competing and cooperating forces at work in a system at once. Connectionist models that perform multiple simultaneous soft constraint satisfaction provide suggestive simple models of the interaction of cognitive forces.
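Such models can be sketched in miniature (the units, weights, and update scheme below are invented for illustration, in the spirit of the mystery example):

```python
# Minimal sketch of multiple simultaneous soft constraint satisfaction.
# A clue supports one suspect's guilt, tends to clear the other, and
# the two rival hypotheses inhibit each other. All values are made up.

links = [("clue", "suspect1", 0.8),      # cooperating force
         ("clue", "suspect2", -0.6),     # competing force
         ("suspect1", "suspect2", -0.9)] # rivals inhibit each other

def settle(activations, clamped=("clue",), steps=50, rate=0.2):
    a = dict(activations)
    for _ in range(steps):
        for u in a:
            if u in clamped:
                continue
            # net force on u: sum of weighted activations across its links
            net = sum(w * a[x if y == u else y]
                      for x, y, w in links if u in (x, y))
            a[u] = min(1.0, max(0.0, a[u] + rate * net))  # clamp to [0, 1]
    return a

final = settle({"clue": 1.0, "suspect1": 0.5, "suspect2": 0.5})
```

The network settles with one hypothesis strongly activated and its rival inhibited; no representation-level rule anticipates which beliefs interact with which.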
In a connectionist network the interaction of cognitive forces is physically implemented by spreading activation. But when a representation is realized by the activation of a large number of nodes, the cognitive forces generated by the overall representation are distinct from the local physical forces produced by the individual nodes implementing the representation. (The individual nodes need not be similar to one another in the kinds of weighted connections they have to other nodes, so they might have quite different causal roles from one another.)
The possible value of such a picture for dealing with relevance phenomena - phenomena associated with the frame problem in classicism - should be evident. Any two cognitive states that put out forces tending to activate other cognitive states will be capable of interacting causally when co-present in a cognitive system. And any two or more states that are relevant to the same problem will interact with respect to that problem - at least to the extent of tending to move the system in the direction of conflicting or compatible solutions. Thus, certain kinds of content-relevant interaction are automatic for systems that have states with content-relevant cognitive forces. Potential interactions do not have to be anticipated in advance in terms of form or content - a key difference from classical systems, in which the operative representation-level rules must determine all such outcomes. Furthermore, forces interact with one another in a manner appropriate not only to the contents of all the cognitive states currently activated in the system, but also to much non-activated information that is implicit in the system's structure - in the weights, as connectionists like to say.
In natural cognizers there are many systematic patterns by which cognitive forces are generated (many of which correspond to the generalizations of commonsense psychology). Appropriately related beliefs and desires conspire to produce forces that tend toward certain choices. (It is arguable that this pattern depends upon syntactic or syntax-like structure in the belief and desire states.) Repeated observation of a pattern of events results in cognitive forces that tend to produce expectations of similar patterns. In such cases there is a causal tendency to make such choices, have such expectations, and so forth. But these are defeasible causal tendencies; that is, it is always possible that a tendency will be overridden by a stronger force or combination of forces. Thus, although there are generalizations about cognitive transitions that correspond to these patterns of cognitive forces, there are no programmable rules corresponding to these generalizations, because they have exceptions.
Furthermore, these generalizations cannot be refined into programmable rules by specifying the possible exceptions. Because of the potential relevance of anything to anything, it is not possible to spell out all of the exceptions in a machine-determinable way (Horgan and Tienson, 1996). The defeasibility of causal tendencies poses a deep problem for classical cognitive science, since all potential exceptions need to be specified in just such a way: They need to be explicitly covered, for instance, by unless-clauses within representation-level rules. In the cognitive forces picture, nothing has to be done to deal with exceptions: They arise naturally as a feature of the architecture.
Although cognitive state transitions do not conform to representation-level rules, according to the connectionist-inspired conception of cognition that we are suggesting, systematic patterns among cognitive processes (such as those mentioned) do conform to psychological laws of a certain kind. Soft psychological laws have ineliminable ceteris paribus (all else equal) clauses, allowing for exceptions that are not specified in the laws themselves. It is important that the exceptions allowed by such laws include a virtually endless range of cases resulting from factors like physical breakdown (e.g., having a stroke) or external physical interference; such exceptions are not mistakes or errors, but the result of the proper functioning of the cognitive system. We believe that soft laws characterize the kind of consistency and systematicity that natural cognizers actually have. They support the kinds of explanation and prediction found in cognitive psychology (Horgan and Tienson, 1996).
It remains to be seen whether this nonclassical view of the mind will gain empirical support from ongoing work in cognitive science. Meanwhile, however, it is well to keep in mind that connectionist modelling does not presuppose or imply that human cognition conforms to programmable representation-level rules, and that there are serious reasons to believe that human cognitive capacities essentially outstrip the capacities of systems that execute representation-level rules.
What brain mechanisms might underlie the dynamics of perceptual processing? The way piecemeal object structures get coordinated resembles a process of mutual constraint satisfaction. The process will be nonlinear, allowing components to be corrected once they are given a role within the configuration: dynamically evolving substructures can be revised as object structure emerges. This would imply a role for the primary visual cortex as a sketch pad of perception.
Hologenetic development, however, appears not to be limited to the perceptual time scale. Not only is hologenesis found in microdevelopment, but it is also observed in the formation of perceptual categories and in perceptual pattern learning. Similar phenomena occur at the scale of perceptual development and in the learning of syntactic structure (as in the work on language acquisition by Elissa Newport and Jeff Elman). The growth of object structure through a process of self-organization among the components could therefore be proposed as a process for perceptual dynamics across a variety of time scales.
The brain principle we are looking for, therefore, must encompass both short-term and long-term processing loops. Christoph von der Malsburg, Wolf Singer, and other theorists have proposed the synchronization of oscillatory activity as a mechanism for selective component binding, since our brains process visual data in segregated, specialized cortical areas. As is commonly remarked, the brain processes the 'what' and the 'where' of its environment in separate, distinct locations. The same holds within the 'what' information that the brain computes: It responds to edges, colours, and movements using different neuronal pathways. Moreover, so far as we can tell, there are no true association areas in our cortices, no convergence zones where information is pooled and united: There are no central neural areas dedicated to information exchange. Still, the visual features that we extract separately have to come together in some way, since our experiences are of these features united into a single whole. The binding problem is explaining how our brains do that, given the distributed nature of our visual processing. How do our minds know to join the perception of a shape with the perception of its colour to give us the single, unified experience of a coloured object?
This problem has a venerable history in philosophy, first appearing in its modern guise in David Hume, as he, following John Locke, speculated on the rules that our minds must follow in uniting simple impressions into more complex ideas. He recognized that the rules of association alone could not be enough: Incoming stimuli are always changing, yet we manage to experience ideas as constant across time. Somehow our faculties of imagination step in and fill the gap between stimulus impressions and later memories and ideas. Immanuel Kant, too, recognized that mere spatial contiguity and temporal conjunction would not unite certain incoming stimuli into bound impressions to the exclusion of others. Both Hume and Kant concluded that our minds must add something to our perceptions so that our experiences are of a three-dimensional, object-filled world.
The history - and its solution - recapitulates itself in contemporary cognitive science. Like Hume and Kant, cognitive scientists recognize that the story of visual perception told thus far is incomplete. The brain must rely on something besides physical connectedness among cortical areas to generate united percepts. But what? Association, even in the head, is not enough. What would be?
It may therefore seem that a systems approach to perception could provide a better explanation for perceptual phenomena. But the systems approach is not without its own problems. From a systems point of view, it may appear something of a miracle that perception functions so well in situations where the conditions require us to go beyond the information given, like limited-vision conditions or conditions where the goal of the action is beyond the horizon of visual stimulation. The constructivist approach explains this from the overall tendency of perception to make sense of a situation. Pictures and films exploit this tendency of perception, including that of being misled by expectation, as in seeing a bank robbery where, in fact, there is only a film set of a bank robbery.
The intuitive direction in the early period of cognitive science tended to limit its focus to events presumed to be taking place within the mind or brain. While all researchers would acknowledge that minds exist within bodies and that these bodies have to deal with the external world (both physical and social), most researchers assumed that they could disregard these considerations when studying cognition. Cognition focussed on the processing of information inside the head of the person. In order for this to happen, information had to be represented mentally: Cognitive processes could then operate on representations. Subsequently, represented information had to be translated into commands to the motor system, but this took place after cognitive processing as such was finished. Jerry Fodor (1980) articulated a theoretical justification for ignoring both the external world and the body in cognitive science, labelling the resulting framework 'methodological solipsism', but opposition was already gathering in a number of quarters.
One of the major inspirations for challenging methodological solipsism was the work of J.J. Gibson, a psychologist working at Cornell contemporaneously with the early period of cognitive science, but whose impact fell elsewhere. Gibson studied visual perception, but instead of concentrating on the information processing going on within individuals as they see, he examined the information that was available to the organism from its environment. His major contention was that there was much more information available in the light than psychologists recognized, and that organisms had only to pick up this information (Gibson, 1966). They did not need to construct the visual world through a process of inference or hypothesis formation. He argued, for example, that people do not need to construct a three-dimensional representation of the world; rather, there is information specifying the three-dimensional nature of the visual scene in the gradient of texture density, in changes in the occlusion of objects as the perceiver moves about in the environment, and so forth. Another of Gibson's major contentions was that the perceiver must be understood as an active agent using its own motion to sample information about the environment. Gibson also stressed that not all organisms pick up the same information from the environment, but rather resonate with information that is coordinated with their potential for action. Accordingly, he introduced the notion of an affordance: Different objects afford different actions to different agents (e.g., a baseball affords throwing to us, but not to frogs), and it is these affordances which organisms are attuned to pick up.
Nevertheless, the immediacy of these experiences makes it easy to take perception for granted. Yet, perception requires the flexible coordination of complex neuro-anatomical resources. The eye, the optic nerve, and also a significant portion of the brain are involved in vision. We may further consider the eye muscles that are used for focussing and targeting of the gaze to be part of the visual system, as well as the muscles of the neck and shoulders with which postural adjustments are made.
Self-organizing processes coordinate all these resources so that the system as a whole performs its function in the relevant circumstances, and these processes permit rapid switches in response to minimal changes in circumstance, as long as those changes are important enough. Thus, perception may appear immediate, but it is achieved through a variety of adaptation, learning, developmental, and evolutionary processes, and these form an essential part of the description of the system. The quest for such a description constitutes the systems approach to perception.
Perception starts from a pattern of external physical stimulation (e.g., the photons that reach the eye) and is completed when this pattern is matched to an internally kept set of beliefs or representations of the world. A conceptual distinction is therefore needed between sensory processing and an inferential reasoning stage, which could be called perceptual in a more narrow sense of the word.
Sensory processes are involved in measuring the physical stimulation. Employing linear, semi-linear, or threshold functions, they faithfully represent certain relevant aspects of physical signals such as light intensity and hue or sound intensity and pitch. Physical stimulation will arrive in a particular spatiotemporal pattern. The sensory process, however, is indifferent to this pattern. For instance, suppose a detector measures the light intensity in a certain area on the retina. This patch of light will be registered as the same sensory feature regardless of whether it is part of a triangle, a square, or just a random configuration. Further sensory processing will combine the output of earlier detectors into higher-order ones in order to identify features of increasing complexity. Thus, there will be detectors for features such as contours, line elements, and curvature. Nevertheless, in sensory processing the identification of each of these features will still not be influenced by the overall pattern of which it is a component.
In the constructivist account, sensory processes provide only the lines and angles of intersection: Perception tells you what object you are looking at. Perceptual processes operate on the sensory features to construct a perceptual representation. Unlike sensory features, perceptual representations do not depend faithfully on stimulation: Ambiguous patterns such as the Necker cube have two rival interpretations (alternative views that are mutually exclusive). The existence of alternative responses to the same pattern of sensory stimulation requires two alternative perceptual representations for that pattern.
Different patterns of sensory stimulation may also elicit the same perceptual response. In particular, it is important that the perceiver recognize an object as the same under different orientations. An elephant is an elephant whether one is looking at the front, back, or side. For this reason, perceptual representations are often assumed to have a viewpoint-independent frame of reference. Even in non-stable circumstances, such representations will provide a stable basis for further evaluation against the background of what we know about the world.
The major problem from the constructivist point of view is how to get from objects and events in the world to perception of them. The fact that sensory processes, being indifferent to object structure and meaning, mediate between the world and experience imposes severe restrictions on perceptual models. By contrast, the need for mediation is denied by a systems account; on this view, perceptual systems operate and have evolved in close interaction with the world, so the perceptual system fits, like lock and key, with the patterns of the environment. A crucial distinction between systems and constructivist approaches to perception concerns the construal of sensory processes.
The notion of sensory processes has its historical root in the concept of sensation. A sensation is the phenomenal awareness of a primary quality (the brightness and hue of a colour, the loudness and pitch of a tone). Phenomenal awareness means that the perceiver experiences what it is like to sense the colour or the tone; primary refers to the fact that these qualities are the operands presupposed in the notion of constructive operations. The concept of sensation has found its justification in classical conceptions of the perceptual process, which may be based on false assumptions. The first question that should be answered is, therefore: Do sensations exist?
The study of sensation has evolved as a separate domain with its own research methods. Classical psychophysics, which started in nineteenth-century Leipzig with Gustav Theodor Fechner, tries to establish lawful connections between how perceivers judge their experience, on the one hand, and physical quantities, on the other (brightness as a function of intensity). Logarithmic functions (Fechner-Weber) or power functions (S.S. Stevens) of the signal have been proposed to describe sensory quantity. Fechner's proposal results from his assumption that just noticeable differences, proportional to physical intensity, are the units of sensation. Measuring them involves subjects detecting a weak signal (a light flash, a sound) or discriminating between two signals. How much are sensations a by-product of judgmental factors? Signal Detection Theory (Green and Swets, 1966) has provided a technique for distinguishing sensory sensitivity from judgmental bias.
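The contrast between the two proposals can be made concrete in a short sketch. This is an illustration only: the scaling constant k, the threshold intensity i0, and the exponent n are hypothetical placeholders, not values from the psychophysical literature.

```python
import math

def fechner(intensity, i0=1.0, k=1.0):
    """Fechner's logarithmic law: sensation grows with the logarithm
    of intensity relative to the absolute threshold i0."""
    return k * math.log(intensity / i0)

def stevens(intensity, k=1.0, n=0.33):
    """Stevens' power law: sensation is a power function of intensity.
    The exponent n varies with the modality (here a made-up value)."""
    return k * intensity ** n

# Doubling the physical intensity adds a constant increment under
# Fechner's law but multiplies sensation by 2**n under Stevens' law.
for i in (1.0, 2.0, 4.0, 8.0):
    print(f"I={i:4.1f}  Fechner={fechner(i):5.2f}  Stevens={stevens(i):5.2f}")
```

The two functions diverge most clearly at high intensities, which is one reason Stevens argued that direct magnitude estimation, rather than accumulated just noticeable differences, should be the basis of the sensory scale.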
The neurosciences have provided a classical description of the visual system which is in good agreement with this notion of sensory processing, and it is therefore frequently discussed as the view of the neurosciences. On this view the visual system is a feed-forward processing hierarchy which exhibits convergence. Rods and cones, the receptor cells that register light intensity at neighbouring positions on the retina, combine their signals to generate on-off patterns in ganglion cells. These are projected through relay stations in the thalamus called the lateral geniculate nuclei onto cells of the visual cortex. These cells respond most strongly to contours or line segments in a specific orientation, a selectivity resulting from the combined projection of overlapping on-off lateral geniculate cells. Sensory processing, therefore, seems to combine physical signals into features of increasing complexity (Hubel and Wiesel, 1962), but it is not entirely without global information. The global pattern is not represented in the individual cells of the cortex but is available for further processing, because each retina projects to the visual cortex in a systematic manner that respects the topographical organization of the retina.
The classical view is an oversimplification from the perspective of more recent developments in the neurosciences (Zeki and Shipp, 1988). Besides convergence, divergence occurs in the visual pathways. From the earliest stage, the retina, a division into specialized pathways can be observed. For instance, two routes to the lateral geniculate nuclei with different cortical projections can be distinguished. One operates in a slow and sustained manner and has a high spatial resolution but restricted detection sensitivity.
Modern neuroscience in general suggests a division of labour in the brain into different sensory modules, each specialized for a certain modality (colour, contrast, odour, temperature, pitch). Many important attributes of perception, however, are amodal (duration, rhythm, shape, intensity, and spatial extent) or multi-modal (such as being a brush fire, which involves the heat and the smell and the glow). So the notion of sensory modularity increases the need for perceptual integration.
This apparently is still in agreement with the principles of constructivism, which maintains that integration is achieved by processes of a post-sensory, inferential nature. Unimodal perception will, therefore, precede integration across the modalities in development. According to a systems point of view, it is the other way round. Amodal and multi-modal aspects of perception are primary properties, precisely because of the importance of these structures in the environment. The child will therefore start by responding to multi-modal structure, and development is aimed at differentiation.
David Lewkowicz and his colleagues have, over several years, collected ample evidence that young infants (4 months old) perceive inputs in different modalities as equivalent if the overall amount of stimulation is the same. These infants, due to the immaturity of their nervous system, appear to react to the lowest common denominator of stimulation, which is quantity. Quantity is, therefore, modality-unspecific; that is, not associated with a specific sensory quality or process. Lewkowicz proposes that these early equivalences may form the basis for later, more sophisticated equivalency judgment processes. For the attributes of time, for instance, infants differentiate according to synchrony first, and this differentiation forms the basis for the subsequent differentiation of responsiveness to duration, rate, and rhythm.
Research in sensory development suggests that perceptual integration is not achieved according to the constructivist picture of sensory processing as feed-forward signal propagation. Rather, the significance of amodal and cross-modal information early in processing suggests that integration between the sensory modalities occurs early in processing. Such a notion of inter-sensory processing is in accordance with a systems account of perception, which emphasizes the role of coordination between the components of the system, rather than their isolated contributions to perception.
The neurosciences support the notion of inter-sensory perception at all possible levels of description. At the smallest scale, this is realized through interneurons, which provide individual cells within an individual pathway with lateral, mostly inhibitory connections. Lateral inhibition is useful, for instance, to selectively enhance boundaries in the pattern of sensory stimulation, because identically stimulated neighbours will cancel each other's activity. This example illustrates that integration of sensory stimulation into a coherent pattern does not wait until sensory processing is completed but begins in the earliest stage. Lateral connections also occur between different sensory modules and may serve to flexibly enhance or reduce the contributions of a sensory module to the process.
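The boundary-enhancing effect of lateral inhibition can be sketched with a toy one-dimensional example; the inhibition strength and the signal values below are made up for illustration, not taken from any physiological model.

```python
def lateral_inhibition(signal, strength=0.5):
    """Each unit's response is its own input minus a fraction of its
    neighbours' inputs. Identically stimulated neighbours partially
    cancel one another, so uniform regions are suppressed and
    boundaries stand out (a Mach-band-like effect)."""
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - strength * 0.5 * (left + right))
    return out

# A step edge: a uniform dark region followed by a uniform bright one.
step = [1.0] * 5 + [3.0] * 5
print(lateral_inhibition(step))
```

Running this on the step signal, the response dips below the dark plateau just before the edge and overshoots the bright plateau just after it, so the boundary is selectively enhanced while the uniform interiors are flattened.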
In addition to feed-forward and lateral connections, there are also backward connections, which are likely to play an important role in perception - for instance, from the higher visual areas back to the primary visual cortex and from there back to the thalamus. This is in accordance with the downstream operation of semantic information: Pattern code could be mapped downward in the sensory detection system to correct its output. One might think sensation is independent of such background knowledge, but the effects mentioned of categorization (tomato versus apple) on the shade of the colour patch perceived, and of word meaning on perceived pitch, suggest otherwise. The interactive, inter-sensory character of early processing is in accordance with the notions of self-organization favoured by the systems approach.
The central problem of constructivism - how to get from isolated sensory features to the representation of integral structure - appears to be a misconceptualization. Isolated sensory features do not seem to exist. The close interactions observed, both within and between the sensory modules, appear more in accordance with the view that the sensory system communicates with the world on the level of patterns than with the view that communication is on the level of isolated signals. On the other hand, perceptual object structure does not appear to have the abstract characteristics that constructivism attributed to it. It may therefore seem that a systems approach to perception could provide a better explanation for perceptual phenomena. But the systems approach is not without its own problems. From a systems point of view, it may appear something of a miracle that perception functions so well in situations where the conditions require us to go beyond the information given, like limited-vision conditions or conditions where the goal of the action is beyond the horizon of visual stimulation. The constructivist approach explains this from the overall tendency of perception to make sense of a situation.
In setting up a systems approach to perception, brain processes cannot be neglected. The problem is to find a general characterization of these processes in accordance with the systems approach; the dynamics of perceptual organization in the brain could be approached from the perspective of self-organization. The idea that the brain is an instrument for stepwise creative synthesis forms the basis for the constructivist approach, which requires that inference processes be posited to explain how the perceiver makes sense of a situation. Alternatively, the principle of hologenesis illustrates that a systems account of these phenomena is possible.
Nonetheless, before scientists could make claims about the functional organization of the brain, they needed to learn something about its general architecture. At the end of the nineteenth century major advances were made at both the micro and the macro level in understanding the brain. At the micro level the crucial breakthrough was the discovery that nerve tissue is made up of discrete cells - neurons - and that there are tiny gaps between the axons that carry impulses away from one neuron and the dendrites of other neurons that pick up these impulses. In the 1880s Camillo Golgi introduced silver nitrate to stain brain slices for microscopic examination. Silver nitrate had the unusual and useful feature of staining only certain cells in the specimen, thereby making it possible to see individual cells, with their associated axons and dendrites, clearly. Santiago Ramón y Cajal argued that the nervous system was composed of distinct cells (a view that Golgi, however, never accepted). Sir Charles Scott Sherrington then characterized the points of communication at the gap between neurons as synapses and proposed that this communication was ultimately chemical in nature.
Processes at the micro level of the neuronal substrate would figure prominently in understanding cognitive processes such as learning (which is widely thought to involve changes at synapses that alter the ability of one neuron to excite or inhibit another), and they became the inspiration for computational modelling using neural networks (an approach which, through the mediation of Donald Hebb, took over the term connectionism from earlier, associationist approaches to conceptualizing the brain such as Wernicke's). A key figure in this development was Warren McCulloch, a neurophysiologist who began his career at the University of Chicago. He collaborated with Walter Pitts, then an 18-year-old logician, in a widely cited 1943 paper that analysed networks of neuron-like units. McCulloch and Pitts showed that these networks could evaluate any compound logical function and claimed that, if supplemented with a tape and means for altering symbols on the tape, they were equivalent in computing power to a universal Turing machine. The units of the network were intended as simplified model neurons and have been referred to ever since as McCulloch-Pitts neurons. Each unit is a binary device (i.e., it can be in one of two states, on or off) that receives excitatory and inhibitory inputs from other units or from outside the network; the state of a network of these units emerges over a number of cycles. On a given cycle, if a unit receives any inhibitory input, it is blocked from firing. If it receives no inhibitory input, it fires if the sum of its equally weighted excitatory inputs exceeds a specific threshold. A unit with this design is appropriate not only as a model of a simplified neuron but also as a model of an electrical relay - a basic component of a computer - and hence McCulloch-Pitts neurons helped inspire the designs of others, including John von Neumann and Marvin Minsky.
McCulloch and Pitts also made a link to logic: The neurons could be associated with propositions, and because of the binary nature of these units, their activation states could be associated with truth values.
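A minimal sketch of a McCulloch-Pitts unit, following the description above: a binary device in which any active inhibitory input vetoes firing, and which otherwise fires when the sum of its equally weighted excitatory inputs reaches a threshold. (Whether the threshold must be met or strictly exceeded is a matter of convention; this sketch uses meets-or-exceeds. The logic-gate encodings are standard textbook illustrations, not taken verbatim from the 1943 paper.)

```python
def mp_neuron(excitatory, inhibitory, threshold):
    """A McCulloch-Pitts unit. All inputs and outputs are binary (0 or 1).
    Any active inhibitory input blocks firing outright; otherwise the
    unit fires if the sum of its equally weighted excitatory inputs
    reaches the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates as single units, with activation states as truth values:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a: mp_neuron([1], [a], threshold=1)  # constant excitation, vetoed by a

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
assert [NOT(a) for a in (0, 1)] == [1, 0]
```

Since such units realize the basic Boolean connectives, feed-forward layers of them can evaluate any compound logical function, which is the core of McCulloch and Pitts's result.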
As attractive as some theorists found the comparison of the brain to a computer at the architectural level, many others moved beyond the logic-gate level of focus and began trying to analyse how nervous systems carried out more complex psychological tasks, such as those of perception. These ambitious researchers included Pitts and McCulloch themselves, who, in a 1947 paper, tackled two problems: how someone can recognize an object as the same when it appears in different parts of the visual field, and how the superior colliculus is able to transform spatial maps of sensory inputs into motor maps that direct such activities as eye movements. Here they abandoned the earlier paper's focus on propositional logic in favour of spatial representations and analog computations. A further departure from the earlier paper is an emphasis on networks that rely on statistical order and operate appropriately despite small perturbations. Moreover, as part of their evidence for specific computational models, they compared diagrams of these models with diagrams of specific neural structures.
The focus on perception continued in the central parts of Donald Hebb's 1949 book, The Organization of Behaviour. The subtitle - 'Stimulus and response', and what occurs in the brain in the interval between them - points to one of the main emphases of Hebb's analysis: the development of internal structures that mediate stimulus and response. Hebb sought to overcome the opposition between localizationist approaches and the more holistic approaches of the Gestalt theorists and his own mentor, Lashley. The key to his alternative was the notion of cell assemblies: interconnected, and hence self-reinforcing, sets of neurons which represent and transform information in the brain:
Any frequently repeated particular stimulation will lead to the slow development of a 'cell-assembly', a diffuse structure comprising cells in the cortex and diencephalon (and also, perhaps, the basal ganglia of the cerebrum), capable of acting briefly as a closed system, delivering facilitation to other such systems and usually having a specific motor facilitation. Each assembly action may be aroused by a preceding assembly, by a sensory event, or - normally - by both. The central facilitation from one of these activities on the next is the prototype of 'attention'. (Hebb, 1949)
Hebb proposed that these assemblies were created by an interaction between cells, whereby every time one cell figured in the firing of another, the connection between them was strengthened: 'When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.'
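Hebb's postulate is often summarized as a simple weight-update rule: the connection strengthens only when the two cells are active together. The following is a toy rendering of that idea, with binary activations and a hypothetical learning rate, not a model from Hebb's own text.

```python
def hebb_update(weight, pre, post, rate=0.1):
    """Hebbian learning: when the presynaptic cell takes part in firing
    the postsynaptic cell, the connection's efficiency increases. With
    binary activations, the weight grows only when both are active."""
    return weight + rate * pre * post

w = 0.0
# A and B fire together on three of five trials; each coincidence
# strengthens the connection, while lone firings leave it unchanged.
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
    w = hebb_update(w, pre, post)
print(w)  # three coincident firings at rate 0.1
```

Note that this bare form of the rule only ever increases weights; modern variants add decay or normalization terms precisely because unbounded growth is unstable.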
Even so, another kind of advance involved linking different macro-level brain areas with specific cognitive functions. This required overcoming the view, widely shared in the eighteenth century, that the brain, especially the cerebral cortex, operated holistically, without any localized differentiation of function.
However, one problem that researchers faced in attempting to localize mental functions in the brain was the lack of any standardized way of designating parts of the brain. The folding of the cortex creates gyri (ridges) and sulci (valleys). Anatomists have named some of them and used the most prominent sulci to divide the brain into different lobes: the frontal lobe, parietal lobe, occipital lobe, and temporal lobe. But each lobe itself contains a number of anatomically distinct regions. Using such criteria as responses to various stains and the distribution of cells between cortical layers, a number of researchers at the end of the nineteenth century produced more detailed atlases of the brain. The atlas by Korbinian Brodmann (1909) became the most widely adopted, and his numbering of brain regions is still widely employed today.
It comes about that a proper understanding of intentionality is crucial to the study of a number of topics in cognitive science, including perception, imagery, and consciousness. The term itself, intentionality, can be misleading, in suggesting intentional action - doing something intentionally, with a certain aim or purpose. In cognitive science, the term is used in a different, more technical sense. Intentionality involves reference or aboutness or some similar relation to something having what the scholastics of the Middle Ages called intentional inexistence (Brentano, 1874).
When Mary thinks of George Miller as a cognitive scientist, the intentional object of her thought is George Miller as a cognitive scientist. She has a mental representation of him as a cognitive scientist. What Mary thinks about has intentional inexistence in the sense that her thoughts may be wrong and she can have thoughts about things that do not even exist. She may think incorrectly that George Miller is a computer scientist or even that Santa Claus is a computer scientist.
If you treat intentionality as a relation to an intentional object, you must remember that it is not a real relation in the way that kissing or touching is. A real relation holds between two existing things independently of how they are conceived. When a woman kisses a man and the man she kisses is bald, then the woman kisses a bald man. But Mary can think of a bald man while representing him as hairy. Similarly, Mary can think of someone who does not exist but cannot kiss or touch someone who does not exist.

Looking for something is an example of an intentional activity in this technical sense of intentional, as well as in the more ordinary sense having to do with what you are aiming at. You sometimes look for something that turns out not to exist. Ponce de Leon searched in Florida for the fountain of youth, but there was no such thing to be found.
There can be intentionality without representation. For example, needing something is an intentional phenomenon. The grass in my lawn can need water even though it is not going to get any and even if there is no water to give it. But the grass does not represent the water it needs.
Other examples of intentional phenomena include spoken and written language, gestures, representational paintings, photographs, film, road maps, and traffic lights. It is controversial how these last instances of intentionality are related to the intentionality of thoughts and other cognitive states.
Nonexistent intentional objects like Santa Claus and the fountain of youth raise difficult logical puzzles if taken seriously as objects. What properties do they have? What sorts of properties does Santa Claus have, as he is conceived by a certain child? Perhaps he is fat, lives at the North Pole, dresses in red, drives a sleigh, brings presents to children at Christmas time, and has at least eight reindeer. But intentional objects cannot always have all the properties which they are envisioned as having, because, as in the case of the child’s conception of Santa Claus, a nonexistent intentional object may be envisioned as existent, and it is inconsistent to suppose that something could be both existent and nonexistent (Parsons, 1980).
You must resist the temptation to try to resolve such problems by identifying intentional objects with mental objects such as ideas or mental representations. That identification does not work. The child does indeed have an idea of Santa Claus, and Ponce de Leon had an idea of the fountain of youth. But the child does not believe that his idea of Santa Claus lives at the North Pole. Nor was Ponce de Leon looking for a mental representation of the fountain of youth. He already had a mental representation: He was looking for the (intentional) object of that representation.
Is it enough to say that a nonexistent intentional object is a merely possible object - an object that exists in some possible world or other, but not in the actual world? That is not a completely general account, because some intentional objects are not even possible. Someone may try to find the greatest prime number without realizing that there is no such thing. The intentional object of the attempt - the greatest prime number - is not a possible object. There is no possible world in which it exists.
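The impossibility invoked here is mathematical, and it may help to make it explicit. Euclid’s classical argument, sketched below, shows why a greatest prime cannot exist in any possible world:

```latex
% Sketch of Euclid's argument that there is no greatest prime.
% Suppose, for contradiction, that $p$ were the greatest prime, and let
\[
  N \;=\; (2 \cdot 3 \cdot 5 \cdots p) + 1 ,
\]
% the product of all primes up to $p$, plus one. No prime $q \le p$
% divides $N$, since each leaves remainder $1$. But $N > 1$, so $N$ has
% some prime factor, and that factor must exceed $p$, contradicting the
% assumption that $p$ is the greatest prime.
```

Since the supposition of a greatest prime leads to contradiction, no consistent world contains one, which is just what the text claims.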
One controversy concerning intentionality concerns how to provide a logically adequate account of talk of intentional objects. That is a controversy in philosophical logic (Parsons, 1980), and may not be especially important to the rest of cognitive science.
The moral is that, on the one hand, you have to take talk of nonexistent intentional objects with a grain of salt, without being too serious about the notion that there really are such things. On the other hand, you have to be careful not to conclude that the child pondering Santa Claus isn’t really thinking about anything or that Ponce de Leon wasn’t really looking for anything as he wandered through Florida.
To what extent does cognition involve intentionality? In one view, everything cognitive is intentional: Intentional inexistence is the mark of the mental, according to Franz Brentano. Another view allows for nonintentional aspects of cognitive states, raw feels.
Clearly, many feelings recognized in folk psychology have intentionality and are not simply raw feels. A child hopes that Santa Claus will bring a big red truck and fears that Santa Claus will bring a lump of coal instead. The child is happy that Christmas is tomorrow and unhappy that he hasn’t been a good little boy for the last few weeks. A child’s hopes, fears, happiness, and unhappiness have intentional objects and intentional content. But can there be feelings without such content, so that you are not anxious about anything or depressed about anything, but just anxious or depressed? Or do such states have a very general, nonspecific content, so that you are anxious about things in general or depressed about things in general, just not anxious or depressed about something specific? It is hard to say what turns on the answer to this question.
Perceptual experience has intentionality inasmuch as it presents or represents a certain environment, and how perceptual experience presents or represents things may be accurate or inaccurate. Things may or may not be as they seem to be. Sometimes what you see or seem to see doesn’t really exist, as when Macbeth hallucinated a bloody dagger.
The intentional content of perceptual experience is perspectival, representing how things are from here, or even representing how things are as perceived from here. The content of the experience may even be in part about the experience itself. What is perceived is perhaps seen as causing that very experience.
The dagger is an intentional object of Macbeth’s perceptual experience. That’s what he is or seems to be aware of. You may be tempted to think that Macbeth must be aware of a mental image of a dagger, but that is like thinking that Ponce de Leon must have been trying to find an idea of the fountain of youth.
Any attempt to explain intentional content in terms of use or conceptual role faces the following difficulty: understanding the intentional content of a concept (i.e., understanding the concept) and understanding the conceptual role of the concept (i.e., understanding what the conceptual role of the concept is) are very different things. You can have a detailed understanding of the conceptual role or use of a concept without understanding the concept, and you can understand a concept perfectly without being able to specify exactly how the concept is used. For example, you might know exactly how a particular symbol is used in relation to other symbols and the environment without realizing that the symbol means plus. Similarly, you can fully understand addition and the concept of plus without being able to describe exactly how that concept is used in relation to other concepts and the environment.
To have a concept is automatically to understand the concept, whether or not you know how the concept is used. Furthermore, to understand another person’s thoughts, it is not enough (and not required) that you understand how the concepts involved in those thoughts function. You need an understanding of the other person’s thoughts from the inside. You need to know what it is like to have such thoughts. You need to relate the other person’s thoughts to equivalent thoughts of your own that you understand.
Some theorists put the point like this: You need a first-person understanding of intentionality, an understanding from the point of view of the thinker. It is not enough to have a third-person understanding from the point of view of an observer of the thinker (Nagel, 1974).
This need not mean that a conceptual role or use theory is incorrect. Perhaps intentionality is a matter of use or conceptual role. But you have to distinguish two sorts of understandings of intentionality: the internal first-person understanding you have by virtue of being the person who uses representations in a certain way, and the external third-person understanding an observer might have. The distinction is like that between being able to swim and being able to describe what is done when someone swims.
Intentionality is an important characteristic of cognition. It is useful to think of cognitive states as involving relations to intentional objects, even though the notion of an intentional object raises deep questions in philosophical logic. It is unclear whether all mental life involves intentionality, whether there are raw feels. Certainly, many kinds of feelings involve intentionality - emotions, for example, and bodily feelings. Knowledge and perception have intentional content: Appreciation of this fact undermines the standard sense-datum argument and helps to avoid mistakes in studying imagery. Understanding the intentionality of language, pictures, and other symbols and representations requires a distinction between using symbols to communicate ideas and using symbols to calculate or think with. The intentionality of symbols used in communication may be derivative of the original intentionality of symbols used in thought and calculation. However, it is controversial whether the mere use of symbols in the right way is enough to give them original intentionality.
That being said, in the course of even a simple encounter with another person, one engages in a wide variety of cognitive activities, among them problem solving, face recognition, speech production and perception, memory, and motor control. How does the mind - an apparently unitary entity - accomplish such a diversity of tasks? Is the mind partitioned into diverse mechanisms, each responsible for a different job? Or are more uniform, general-purpose mechanisms deployed for different cognitive purposes? Which tasks even count as the same, and which as different? Is visual recognition a single task, or are the mechanisms that recognize objects fundamentally distinct from those that recognize faces? Is speech produced and perceived by similar processes or by different ones? More generally, how, and how much, do such different processes interact?
It is to these and related questions that the debate over the modularity of mind is addressed. Because the issue is not the character of cognitive capacities per se, but the organization and distribution of the systems that underlie these capacities, the issue of modularity is often described as concerning the architecture, or design principles, of the mind.
Some controversies in cognitive science, such as arguments about whether classical or distributed connectionist architectures best model the human cognitive system, reenact long-standing debates in the philosophy of science. For millennia, philosophers have pondered whether mentality can submit to scientific explanation generally, and to physical explanation particularly. Recently, positive answers have gained popularity. The question remains, though, as to the analytical level at which mentality is best explained. Is there a level of analysis that is peculiarly appropriate to the explanation of either consciousness or mental contents? Are human consciousness, cognition, and conduct best understood in terms of talk about neurons and networks, or schemas and scripts, or intentions and inferences? If our best accounts make no appeal to our hopes or beliefs or desires, how do we square those views with our conception of ourselves as rational beings? Moreover, can models of physical processes explain our mental lives? Is mentality best explained in terms of overall brain functioning, or neuronal or molecular or even quantum activities - or any of a dozen levels of physical explanation in between? Also, regardless of how they compare with explanations cast at physical levels, what is the status of psychological explanations that appeal fundamentally to mental contents?
Cognitive architecture permits cognitive scientists to explain human cognition by appealing to the concepts and principles of machine computation. Still, beyond a commitment to the notion that cognition involves computations over representations, the precise directions in which this relation should lead us remain controversial. The emergence of distributed connectionist models over the past decade or so has stimulated debates about the character of both the representations and the computations involved in cognitive processing.
The behaviour of a computational system is not just a function of architectural constraints. Programs also play a decisive role. Without extensive knowledge of the design, it is difficult to distinguish those aspects of behaviour that arise from the architecture from those that arise from the program - all the more so when the system in question is organic and the designer is natural selection. When cognitive systems consist of neurons rather than computer chips, and the designer is evolution instead of engineers, it is fairly safe to bet that at least sometimes the architecture realizes cognitive functions differently from the way digital computers do.
Classicism holds that a model of our cognitive architecture provides only a functional characterization of the underlying mechanism. A vast array of physical arrangements can implement the configuration of functional relations which these abstract models describe. On any computational view, distinguishing a cognitive level from the neuroscientific level of explanation depends precisely on the fact that models of cognitive architecture involve abstraction from many of the brain’s physical details. Computationalists of both the classical and the connectionist varieties assume that the neural level will not prove the best level for characterizing the cognitive architecture. From this, many connectionists (e.g., Smolensky, 1988) demur - arguably providing more fine-grained analyses of these issues in the process.
For the purpose of theorizing, proponents of classical models insist on a principled subdivision of the cognitive level into a semantic (or knowledge) level and a symbol (or syntactic) level. As with commonsense psychology, considerations of meaning and rationality order semantic materials. The pivotal assumptions in classical proposals, however, concern the symbol level.
1. Mental symbols are context-independent representational primitives that possess their representational contents by virtue of their forms.
2. A finite set of such symbols can represent distinct semantic contents uniquely, because these symbols are the fundamental constituents of a quasi-linguistic system that possesses a combinatorial syntax and semantics (that comprehensively parallel one another).
3. The formal syntactic features of these symbols correspond precisely to neural properties that are pivotal in the etiology of behaviour.
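The three claims above can be made concrete with a small sketch. The following is a hypothetical toy example, not drawn from any classical model in the literature; the symbol names (JOHN, MARY, LOVES), the functions, and the toy "world" are illustrative assumptions only:

```python
# A minimal, hypothetical sketch of a classical "symbol level": mental symbols
# are context-independent tokens, and a combinatorial syntax is paralleled by
# a compositional semantics. All names here are illustrative, not canonical.

# Atomic symbols carry their content by virtue of their form (the token itself).
JOHN, MARY, LOVES = "JOHN", "MARY", "LOVES"

# Syntax: complex representations are built combinatorially from constituents.
def predicate(relation, agent, patient):
    return (relation, agent, patient)          # e.g., (LOVES, JOHN, MARY)

# Semantics: interpretation is defined over the same constituent structure,
# so syntax and semantics run in parallel, as the classical picture requires.
WORLD = {(LOVES, JOHN, MARY)}                  # the facts that happen to hold

def true_in_world(expr, world=WORLD):
    return expr in world

sentence = predicate(LOVES, JOHN, MARY)
print(true_in_world(sentence))                     # True
print(true_in_world(predicate(LOVES, MARY, JOHN))) # False: syntax marks roles
```

The point of the sketch is that reversing the constituents yields a syntactically distinct representation with a distinct truth value, even though the same finite stock of symbols is used.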
Proponents of modularity argue that the mind comprises separate subsystems carrying out relatively specific functions, relatively automatically and autonomously. Theories differ as to how isolated, automatic, and specific these modules are claimed to be, and as to which cognitive processes are thought to be modular. Theories of modularity may be distinguished, in other words, in terms of their answers to the conceptual question, ‘What makes something a module?’, and the empirical question, ‘Which cognitive processes are modular, so described?’
Although largely unpopular earlier in this century, some form of the modularity thesis is now a prominent, even dominant view. One reason for this change in the intellectual tide concerns the role of empirical evidence in this debate. Current defenders of modularity theory are distinguished by the fact that experimental data are marshalled in support of the view.
The appeal to empirical evidence does not easily resolve the debate, however, because there is wide disagreement over how this evidence should be interpreted, questions remain as to how and how much interaction there is: Both among modules and between modules and nonmodular systems. There are also questions about the internal structure of modules themselves. Are they further decomposable into sub-modules, and if so, how, and how much, do sub-modules interact with each other and with their parents? Do the properties associated with modules constitute necessary and sufficient criteria for being a module, or are they merely generally characteristic properties? Are some properties more essential than others? If so, which ones?
In addition to the conceptual question (What makes something a module?) and the empirical one (Which specific processes are in fact modular?), a third, more methodological dimension cuts across the debate: the claim that the modularity thesis is not just a descriptive claim about the internal organization of the mind, but a normative claim about how the mind ought to be studied.
Jerry Fodor’s book The Modularity of Mind (1983) has become a central reference point for debates about modularity. At the time of its publication, however, a modular approach had already been defended in a number of domains. Such an approach is to be found, for example, in David Marr’s principle of modular design, in Kenneth Forster’s autonomous model of lexical access, in Noam Chomsky’s notion of a language organ, in Michael Posner’s distinction between automatic and strategic processing, and in Herbert Simon’s concept of a nearly decomposable system. Fodor’s contribution was thus less to initiate discussion about modularity than to systematize and promote it.
We can understand Fodor’s central claims about modularity in terms of the three dimensions enumerated above: conceptual, empirical, and methodological. At the conceptual level, Fodor claims that modular systems are distinguished by their characteristic properties and functions. Fundamentally, he distinguishes three kinds of mechanisms: (1) transducers, (2) modules, and (3) central systems. The function of transducers is to receive energy impinging at the organism’s surface and translate it into a representational form accessible by other psychological systems. The function of central systems is that of inference and belief fixation. The function of modules is to mediate between transducers and central systems. Although this mediation may operate in either direction, Fodor discusses almost exclusively modules which take transduced representations and infer hypotheses about their distal sources, which then become available for use by central systems. More generally, Fodor (1983) says, the function of such modules is ‘to present the world to thought’.
Modules are intermediate between transducers and central systems not only in terms of the order of processing but in terms of the complexity of processing as well. Like central cognitive mechanisms, modular mechanisms are supposed to be inferential and computational; but, like transducers, they are assumed to be reflexive and automatic.
In ‘The Modularity of Mind’ Fodor identified nine properties that are claimed to be responsible for the automatic, autonomous nature of modular processing. Modular systems, Fodor says, (1) are domain-specific, (2) operate in a mandatory fashion, (3) allow only limited central access to the computations of the module, (4) are fast, (5) are informationally encapsulated, (6) have shallow outputs, (7) are associated with fixed neural architecture, (8) exhibit characteristic and specific breakdown patterns, and (9) exhibit a characteristic pace and sequencing in their development.
In later essays, however, Fodor emphasizes informational encapsulation to the exclusion of the others as the single defining characteristic of a module. An informationally encapsulated system operates largely in isolation from the background information at the organism’s disposal. Informational encapsulation constrains a priori the amount and type of data available for consideration in projecting hypotheses about the distal layout. Moreover, this constraint on information is achieved architecturally rather than substantively. That is, in solving a particular computational task, the modular mechanism can make use only of information within the module: It has no capacity to bring even relevant information to bear if it happens to lie beyond the module’s boundaries.
It is important to distinguish informational encapsulation from domain specificity, which some other writers take to be the defining feature of a module. To say that modules are domain-specific is to say that they operate on distinct classes of stimuli: Only a specific stimulus domain will trigger the operation of any given module. Fodor (1983) describes the difference between informational encapsulation and domain specificity as follows: ‘Roughly, domain specificity has to do with the range of questions for which a device provides answers (the range of inputs for which it computes analyses): Whereas, encapsulation has to do with the range of information that the device consults in deciding what to provide.’
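The contrast can be made concrete with a toy sketch. The following is an illustrative assumption, not Fodor’s own formalism; the class name, the ‘proprietary data’, and the Muller-Lyer flavour of the example are hypothetical:

```python
# Toy model of Fodor's distinction. Domain specificity restricts which INPUTS
# trigger the module; informational encapsulation restricts which INFORMATION
# it may consult while computing. All names here are illustrative.

CENTRAL_KNOWLEDGE = {"the two lines are equal"}  # available to central systems only

class LengthComparatorModule:
    """A toy 'visual' module: domain-specific and informationally encapsulated."""
    DOMAIN = "visual"                                   # only visual stimuli trigger it
    PROPRIETARY_DATA = {"arrows-out looks longer"}      # all it is allowed to consult

    def process(self, stimulus_domain, apparent_lengths):
        if stimulus_domain != self.DOMAIN:              # domain specificity
            return None
        # Encapsulation: the module computes only from its proprietary data.
        # It cannot consult CENTRAL_KNOWLEDGE, so an illusion would persist
        # even when the organism believes the lines are equal.
        a, b = apparent_lengths
        return "different" if a != b else "same"

module = LengthComparatorModule()
print(module.process("auditory", (10, 10)))  # None: outside the module's domain
print(module.process("visual", (10, 12)))    # 'different', despite central belief
```

In the sketch, the range of inputs the module answers (visual stimuli only) models domain specificity, while the fixed, module-internal data base it consults models encapsulation; the two restrictions are independent of one another.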
Central systems - those responsible for inference and belief fixation - are, according to Fodor, nonmodular and hence unencapsulated. Such systems are characterized by the absence of antecedently established constraints on the information which they can recruit in the course of their operation. More positively, in an analogy to the process of confirmation in science, Fodor describes central systems as isotropic and Quinean. Isotropic processes are those in which information from arbitrary knowledge domains may be relevant to the confirmation of a given hypothesis. ‘Everything the scientist knows’, Fodor explains (1983), ‘is, in principle, relevant to determining what else he or she ought to believe.’ By a Quinean system, Fodor means one in which the degree of confirmation of a hypothesis depends not only on its intrinsic features but also on its relation to all the other beliefs of the system.
At the empirical level, Fodor’s principal claim is that perception is modular but higher-order cognition is not. Perceptual, but not cognitive, processing is accomplished by encapsulated mechanisms which operate independently of the rest of the organism’s knowledge. In Fodor’s usage, therefore, the phrase ‘modularity of mind’ implies only that some processes (the perceptual ones) are accomplished by encapsulated mechanisms, not that the mind in general is modular.
The example that Fodor most often invokes to illustrate this view is the Müller-Lyer visual illusion, in which two parallel lines are flanked by arrows, pointing inward in one case and outward in the other. Although the two lines are objectively of the same length, they continue to look as if they are of different lengths. It is this persistence of the illusion, and the discrepancy between how the lines look and what is believed about them, that Fodor cites to support the claim that (visual) perception is modular. Even when the organism knows that the two lines are of the same length, it cannot use this knowledge to affect its perception, suggesting that the visual processes are encapsulated from such (module-external) information.
A second empirical claim that Fodor makes is that language is like perception in being modular, rather than central, like cognition. Because perception and language are not usually classified as being of a common type, Fodor coins the term input system for what he claims is the (natural) kind of mental system comprising perception and language (though strictly this kind includes both input and output systems).
Note in passing that the term cognitive is commonly used in two different senses: as a general, neutral term for all mental capacities, including perception, in which case it contrasts roughly with bodily; and in a narrower, more restricted sense, in which it contrasts with perceptual. It is this latter usage that Fodor has in mind when identifying as cognitive such nonmodular central systems as attention, memory, inductive reasoning, problem solving, and general knowledge.
Finally, at the methodological level, Fodor argues that the distinction between modular and nonmodular psychological systems is coextensive with the distinction between those psychological systems that can be fruitfully studied scientifically and those that cannot. Modular systems are good candidates for scientific investigation; central or unencapsulated systems are subject to unconstrained data search. This at once makes such systems rational - they can take into account anything the organism knows or believes - but it also makes them susceptible to what is known as the frame problem: The difficulty of finding a nonarbitrary strategy for restricting the evidence that should be searched and the hypotheses that should be contemplated in the course of rational belief fixation (Fodor, 1987).
The frame problem is inherently faced by any unencapsulated, rational system. On the one hand, the lack of constraint on potentially relevant evidence implies that there is no natural end to deliberation. On the other hand, evidence must be constrained if a system is to function at all, and it must be constrained nonarbitrarily if it is to function rationally. (Modular processing is not rational processing precisely because its data base of information is constrained arbitrarily - i.e., architecturally.) Because the identity and degree of relevant considerations change from situation to situation, Fodor believes that relevance cannot be formalized in a theory, and therefore that central systems cannot be the object of fruitful scientific investigation.
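One way to make the worry vivid is with a toy calculation - an illustration of the combinatorics, not Fodor’s own argument. If any subset of an agent’s beliefs may be relevant to a hypothesis, the space of candidate evidence sets doubles with each belief:

```python
# Toy illustration of why unconstrained relevance is intractable: with n
# beliefs and no prior constraint on relevance, there are 2**n candidate
# sets of potentially relevant evidence. The numbers are illustrative only.

def candidate_evidence_sets(n_beliefs):
    return 2 ** n_beliefs

for n in (10, 20, 40):
    print(n, candidate_evidence_sets(n))
# 10 beliefs -> 1024 candidate sets; 40 beliefs -> over a trillion.
# An encapsulated module avoids the explosion by fiat: its evidence base
# is fixed architecturally, however arbitrarily.
```

The exponential growth is the point: any exhaustive search over relevance is hopeless long before belief systems reach realistic size, which is why a nonarbitrary restriction strategy is needed.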
Fodor’s view implies rather dire consequences for the future of cognitive science. Although cognitive science has been concerned to explain the processes of perception (especially vision), the centre-piece of the project has been the dream of explaining more general cognitive abilities such as thought, memory, and problem solving. Fodor’s claim is that these processes, being quintessentially unencapsulated, are ones that we have little hope of understanding, and hence are ones that we should, as a matter of research strategy, abandon. Bold by intellectual temperament, Fodor dubs this methodological point ‘Fodor’s First Law of the Nonexistence of Cognitive Science’ (1983).
Fodor makes three principal claims about modularity: The empirical claim that perception, but not cognition, is modular; the conceptual claim that modules, but not central systems, are informationally encapsulated; and the methodological claim that encapsulated processes, but not unencapsulated ones, are amenable to scientific study. Taken together, these three claims form an argument against the possibility of doing cognitive (as opposed to perceptual) science.
The modularity thesis has been investigated in most detail in the domain of language. In the dominant tradition of generative grammar, a tradition initiated by Chomsky in the 1950s, a core assumption has been that the processes responsible for language production and perception are largely innate and modular. To emphasize the functional independence of linguistic processes from other cognitive processes, Chomsky has described the language module as an independent ‘mental organ’.
Nevertheless, because generative linguistics concentrates on explaining linguistic competence (the tacit knowledge that is said to underlie our ability to use language) rather than linguistic performance (the actual use of language in concrete circumstances), debates about modularity, which concern performance issues of how language is processed, have most often taken place in psychology and psycholinguistics, rather than in linguistics proper.
Even so, generative grammar is a theoretical approach that seeks to describe and explain natural language in terms of its mathematical form, using formal languages, such as propositional logic, and the formal distinction between semantics and syntax. The semantics of a linguistic proposition are the objective conditions under which it may truthfully be stated, and the syntax of that proposition is the mathematical structure of its linguistic elements and relations, irrespective of their semantics.
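This formal distinction can be illustrated with a minimal sketch of propositional logic, in which syntax is a set of construction rules over formulas and semantics is truth under an assignment of truth values. The function names and the tuple encoding below are illustrative assumptions only:

```python
# Minimal sketch of the syntax/semantics distinction for propositional logic.
# Syntax: formulas are nested tuples built by formal rules, with no regard to
# meaning. Semantics: truth under an assignment of truth values to atoms.

def Not(p):    return ("not", p)
def And(p, q): return ("and", p, q)
def Or(p, q):  return ("or", p, q)

def evaluate(formula, assignment):
    """Semantics: the conditions under which a formula is true."""
    if isinstance(formula, str):                 # atomic proposition, e.g. 'p'
        return assignment[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)

# The same syntactic object takes different truth values under different
# assignments: its form is fixed, its truth conditions are relational.
f = Or(And("p", "q"), Not("p"))
print(evaluate(f, {"p": True, "q": False}))   # False
print(evaluate(f, {"p": False, "q": False}))  # True
```

The builders capture syntax (what counts as a well-formed formula), while `evaluate` captures semantics (when a formula is true); the two are defined separately, just as the generative-grammar picture requires.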
Recently, however, a new class of linguistic theories has emerged. These theories seek to analyse natural languages not in terms of their mathematical form, but rather in terms of their psychological functions. The focus is therefore on the cognitive and social processes of which natural languages are constituted, including symbols, categories, schemas, perspective, discourse context, social interaction, and communicative goals. The broadest term covering all these theories is functional linguistics.
The functional approach to language holds that the forms of natural languages are created, governed, constrained, acquired, and used in the service of communicative function. No one would deny the importance of functions in human language, as we constantly use language to communicate intentions from one person to the next. For example, we can use language to tell another person how to drive a car, where to look for edible mushrooms, and how to avoid falling into crevasses when walking on glaciers. We can also use language to foster social solidarity by greeting and acknowledging other people with salutations and standardized phrases. Yet another use of language is to represent our thoughts and goals internally. Both inner speech and written expression allow us to talk to ourselves in ways that help foster creativity, invention, and memory. Additional artistic functions of language include drama, poetry, and song.
Given the importance of these various functions of human language, it may be surprising to learn that there is a major debate in linguistic and psycholinguistic circles about whether functions determine the shape of language. To the outsider, it would seem almost obvious that the shapes and forms of things are determined by the functions being served. We use nouns to refer to things and verbs to refer to actions. By choosing one word order over another, we can distinguish who did what to whom. In this way, the most basic forms of human language are functionally determined. But exactly how does function have its impact on form? Is the impact direct and immediate, or only indirect and delayed? Is there only one basic way in which functions determine forms, or are there various types of form-function relations? Is it even possible that the system of forms could become freed from its linkage to function and take on some type of autonomous existence?
The antithesis to functionalism is formalism. The formalist position holds that although language may serve a variety of useful functions, the actual shape of linguistic form is determined by abstract categories that have nothing to do with particular functions or meanings. On this view, language is a special gift to the human species, whose formal contours reflect the abstract, reflective nature of the human mind. Categories such as ‘verb’ or ‘subject’ are abstract objects that are processed and represented in a separate mental module devoted to grammar. The objects of this module are universal and derive not from functional pressures or ongoing conceptualization of the world but from an innate language-making capacity. The language module is informationally encapsulated, meaning that it relies only on its own abstract categories and information to process and represent language: It does not depend upon information from other aspects of cognition. According to this view, the liberation of linguistic form from any tight linkage to function is itself a consequence of the modular architecture of the mind, and language takes its shape from that architecture rather than from the pressures of communication. A further challenge comes from anti-representationalists, some of whom contend that the status of internal representations - which computational accounts of cognition presuppose - may be as problematic as that of phlogiston.
The core issue on which functionalism and formalism disagree is that of autonomy, or modularity. Formalism claims that the shape of language is minimally constrained by functional pressures, since language basically follows its own rules in a separate, informationally encapsulated, autonomous cognitive module. Functionalists claim that language is continually subject to the pressures of the needs it expresses - conceptual and social messages - and that these pressures govern the processes of language change, language learning, and language processing.
Nonetheless, at the most fundamental level of analysis, functional linguistics rejects the generative grammar analogy between natural and formal languages, along with its concomitant distinction between semantics and syntax. In functional linguistics, natural languages, like biological organisms, are composed most fundamentally of structures with functions. Linguistic structures vary from relatively simple entities such as words and grammatical morphemes to more complex entities such as phrases and linguistic constructions. All linguistic structures have functions, and in all cases these functions concern communication, including such things as reporting an event, identifying the roles played by participants in an event, asking a question, establishing a topic of discourse, and taking a particular perspective on a scene. For functional linguistics, therefore, the most fundamental distinction in natural languages is not between meaningful linguistic elements and their algorithmic combination irrespective of meaning (i.e., mathematical semantics and syntax), but rather between structure and function, symbol and meaning, signifier and signified.
Within functional linguistics, cognitive linguistics refers to the set of theories that are primarily concerned with the cognitive dimensions of linguistic communication. Although there were important precursors in the work of linguists such as Charles Fillmore and Leonard Talmy, cognitive linguistics had its clear origins as a scientific paradigm in 1987 with the publication of George Lakoff’s ‘Women, Fire, and Dangerous Things: What Categories Reveal about the Mind’ and the first volume of Ronald Langacker’s ‘Foundations of Cognitive Grammar’ - followed immediately by the founding of the International Cognitive Linguistics Association and its official journal Cognitive Linguistics. The fundamental stance of cognitive linguistics may best be summarized in terms of two key issues: The nature of linguistic meaning and the nature of grammar. In the view of some cognitive scientists, the cognitive linguistics approach to these two issues constitutes a revolution in our understanding of how human language and cognition operate.
In the process of linguistic communication the speakers of a language employ particular conventions/symbols to induce their listeners to conceptualize particular events and situations in particular ways. It is therefore misleading to say that language depends on cognition, as if they were two separate entities. Rather, the more accurate characterization is that natural languages are nothing more or less than ways of symbolizing cognition for purposes of communication. This cognitive linguistic view of language as one particular manifestation of human cognition is best illustrated by three phenomena: (1) the dependence of word meaning on surrounding cognitive frames, (2) the myriad ways in which a single referential situation may be linguistically construed, and (3) the ever-changing meanings for which particular linguistic symbols are used historically, including metaphorical meanings. Each of these will be treated in turn.
First, in many linguistic theories the semantics of a language is viewed in the manner of a dictionary. That is, speakers are seen to possess distinct mental lexicons, within which there is a list of linguistic items, each of which has a meaning that may be described independently with something like a list of semantic features. The problem with this view is that many linguistic items take their meaning from the role they play in larger forms of life, and thus require a description more encyclopaedic in nature. For example, the word ‘bachelor’ - which is formalized in some semantic theories as something like ‘adult + male + unmarried’ - does not apply easily to such unmarried adult males as Tarzan, the Pope, and others much the same. These individuals meet the formal criteria for ‘bachelor’, but they are not good exemplars, because they do not participate in the cultural setting from which the word takes its meaning. Other words whose significance is embedded in larger cultural frames include ‘trump’ (which requires the game of bridge) and ‘pedestrian’ (which requires a world of motorized traffic). Although the point is clearest with such highly culturally bound words, the same basic principle applies to many other words that initially seem more context-independent: for example, a leaf can only be understood in the context of a tree, and a knuckle can only be understood in the context of a finger (which in turn requires a hand, and so on) (Langacker, 1987). In general, the meaning of many, perhaps most, linguistic expressions can be adequately characterized only with respect to some larger conceptual domain that is not, strictly speaking, a part of the meaning itself, but provides a frame for it.
Second, many linguists and cognitive scientists have implicitly operated with an objectivist view of linguistic semantics. On this view, a linguistic entity stands for things and situations in the world, so that the entity’s semantics comprises those things and situations for which it stands. But this view of linguistic meaning basically ignores semantic differences that depend on the different perspectives that may be taken on one and the same objective situation. Clear examples of this more subjectivist view of linguistic meaning are provided by alternative descriptions of single situations.
The roof slopes upward./The roof slopes downward.
John kissed Mary./Mary was kissed by John.
The glass is half empty./The glass is half full.
He has a few friends in high places./He has few friends in high places.
In each case one and the same situation is described differently, depending on the point of view the speaker wishes to communicate (Langacker, 1987). People may also use different formulations to describe a single situation at different levels of detail. For example:
This is a triangle./This is a three-sided polygon.
This vehicle is in my way./This blue van is blocking my way into the driveway.
Susan managed to open the door with Jim’s key./Jim’s key opened the door.
Bill flew to New York./Bill bought a ticket, drove to the airport, boarded an aeroplane, and so forth.
One and the same referential situation may also be described in different words depending on the background frame of the communicative situation. Thus, the exact same piece of real estate might be described thus:
Hiker on a hilltop: ‘There’s the coast’.
Sailor at sea: ‘There’s the shore’.
Skydiver from the air: ‘There’s the ground’.
Child on vacation: ‘There’s the beach’.
The main point of all these examples is that human languages provide their speakers with a whole battery of symbolic resources with which they may induce other people to construe a particular situation or event in particular ways. The ways in which a situation or event may be construed linguistically are myriad, depending ‘inter alia’ on the communicative intentions of the speaker, the canonical background frame of the expression, and the knowledge the listener may be assumed to possess in the communicative interaction.
Finally, there is the fact that the meanings of particular linguistic symbols in particular languages are constantly changing as their speakers put them to new uses, including metaphorical ones. These changes of meaning are not rare events, and the use of metaphor is not a specialized, atypical use of language. Lakoff and Johnson (1980) argue and present evidence that most everyday language includes the use of linguistic items originally conventionalized for other semantic purposes. These range from fairly subtle extensions, such as running for political office and being in an organization, to more obviously metaphorical extensions, such as being out of one’s mind or being a lost soul. Moreover, what Lakoff and Johnson discovered was that in human linguistic communication people do not just use isolated semantic extensions and metaphors in sporadic, unsystematic ways; rather, they often structure whole experiential domains metaphorically. For example, following the metaphor that ‘Time is money,’ people say such things as:
I spend too much time watching TV.
That detour cost me 2 hours.
The delaying tactics bought them more time.
But time may also be seen in terms of space:
I don’t know what lies ahead for me.
His youth is behind him now.
I’ll be there at 5:00 am on the 11th of July.
An especially powerful discovery about the metaphorical dimension of language is that people often use more concrete domains of knowledge to structure and comprehend more abstract ones. This is manifest in people’s frequent use of terms for very basic aspects of experience, such as bodily actions and simple perceptual transformations of objects, to structure more abstract domains. For example, we understand the English expressions ‘in’ and ‘out’ most fundamentally in connection with such things as putting objects into containers and taking them out again: But we also put arguments in, and take arguments out of, our speeches. We use ‘off’ and ‘on’ most basically for putting clothes on and taking them off our bodies, or putting objects on and taking them off tables, but we also say that a tennis player is on her game or off her game. Lakoff and Johnson’s claim is that there are certain fundamental domains of human experience - constituted by what they call image schemas - that serve as prototypes of some very general referential situations, and thus as especially powerful source domains for metaphorical construal (Johnson, 1987). Overall, it may be said that semantic extension and metaphorical construal pervade human language use, and their existence demonstrates that linguistic meaning is part and parcel of a process in which people continually adapt their existing means of linguistic expression for particular communicative goals.
The most general point to be made from all three considerations is that it is basically impossible to isolate linguistic meaning from cognition in general in the manner of a mental lexicon divorced from other aspects of human cognition and communication. Cognitive linguistics therefore adopts an encyclopaedic, subjectivist approach to linguistic meaning, in which human beings create and use linguistic conventions in order to symbolize their shared experience in various ways for specific communicative purposes. These different experiences and purposes are always changing, so they can never be captured by an itemized, objectivist description of linguistic elements and their associated truth conditions. For an adequate description of linguistic semantics from the cognitive linguistics point of view, what is needed is a psychology of language in terms of such things as cognitive structure, the manipulation of attention, alternative construals of situations, and changing communicative goals.
In the cognitive linguistics view, the grammar of a language is best characterized as ‘a structured inventory of symbolic units’, each with its own structure and function (Langacker, 1987). These units may vary in both their complexity and generality, with words being only one type of symbolic unit. At the simplest level of analysis, all the structures of a language are composed of some combination of four types of symbolic elements: words, markers on words (e.g., the English plural -s), word order, and intonation (Bates and MacWhinney, 1989). Each of the several thousand languages of the world uses these four elements, but in different ways. In English, for example, word order is most typically used for the basic syntactic function of indicating who did what to whom, intonation is used mainly to highlight or background certain information in the utterance, and markers on words serve to indicate such things as tense and plurality. In Russian, on the other hand, who did what to whom is indicated by case markers on words, and word order is used mostly for highlighting and backgrounding information. In still other languages (e.g., Masai), who did what to whom is indicated in yet other ways, and word order may serve virtually any semantic or pragmatic function. Moreover, these structure-function relationships may change over time within a language, as in the English shift from case marking to word order for indicating who did what to whom several hundred years ago.
These four types of symbolic elements do not occur in isolation; in each language they occur in constructions composed of unique configurations of these elements (Goldberg, 1995). Linguistic constructions are basically cognitive schemas of the same type that exist in other domains of cognition. These schemas/constructions may vary from specific to general. For example, the one-word utterance ‘Fore’ is a very simple, concrete construction used for a specific function in the game of golf. ‘Thank you’ and ‘Don’t mention it’ are multi-word constructions used for relatively specific social functions. Other constructions contain open slots into which whole classes of items may fit: ‘Down with _ ‘ and ‘Hooray for _’. Two other constructions of the type that have more general application are:

The way construction: She made her way through the crowd.
I paid my way through college.
He smiled his way into the meeting.

The let alone construction: I wouldn’t go to New York, let alone Boston.
I’m too tired to get up, let alone go running around with you. I wouldn’t read an article about, let alone a book written by, that swine.

Each of these constructions is defined by its use of certain specific words (way, let alone), and each thus conveys a relatively specific relational meaning, but each is also general in admitting different specific content (Fillmore et al., 1989).
There are also constructions that are extremely general in the sense that they are not defined by any words in particular, but rather by categories of words and their relations. Thus, the ditransitive construction in English prototypically indicates transfer of possession and is represented by utterances such as ‘He gave the doctor money’. No particular words are a part of this construction; it is characterized totally schematically by means of certain categories of words in a particular order: noun-phrase + verb + noun-phrase + noun-phrase. No construction is fully general, however, so in the ditransitive construction the verb must involve, at the least, some form of motion (as in ‘He threw Susan money’, but not ‘He stayed Susan money’). Other examples of very general English constructions are the various resultative constructions (e.g., ‘She knocked him silly’, ‘He cleaned the table off’), constituted by a particular ordering of particular categories of words, and the various passive constructions (e.g., ‘She is loved by Harry’, ‘She got kissed’), which provide a unique perspective on scenes and are constituted by a particular ordering of word categories as well as some specific words (e.g., by) and markers (e.g., -ed). All these more general constructions are defined by general categories of words and their interrelations, so each may be applied quite widely to many referential situations of a certain type. These abstract linguistic constructions may be thought of as cognitive schemas of the same type found in other cognitive skills, that is, as relatively automatized procedures that operate on a categorical level.
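The idea of a construction as a schema over ordered word categories can be sketched in a few lines of code. This is only an illustrative toy, not a linguistic parser; the tiny lexicon and its category tags are invented for the demonstration.

```python
# Toy sketch: a construction modeled as a schema over word categories.
# Lexicon entries and category tags are invented for illustration only.

LEXICON = {
    "he": "NP", "susan": "NP", "money": "NP",
    "the doctor": "NP", "the football": "NP",
    "gave": "V", "threw": "V", "kicked": "V",
}

# The ditransitive construction: noun-phrase + verb + noun-phrase +
# noun-phrase, prototypically conveying transfer of possession.
DITRANSITIVE = ["NP", "V", "NP", "NP"]

def instantiates(schema, words):
    """True if the words' categories match the schema's slots in order."""
    return [LEXICON[w] for w in words] == schema

print(instantiates(DITRANSITIVE, ["he", "gave", "the doctor", "money"]))  # True
print(instantiates(DITRANSITIVE, ["he", "gave", "money"]))                # False
```

The point mirrored here is the one made above: no particular words belong to the schema, only the ordered categories do.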
An important point is that each of these abstract linguistic schemas has a meaning of its own, in relative independence of the lexical items involved (Goldberg, 1995). Much of the creativity of language comes from fitting specific words into linguistic constructions that are nonprototypical for the word. For example, the verb ‘kick’ is not typically used for transfer of possession, and so it is not prototypically used with the ditransitive construction. But it may be construed in that way in utterances such as ‘Mary kicked John the football’, because kicking can be seen as imparting directed motion to an object with another person as terminus. This process may extend even further to such things as ‘Mary sneezed John the football’, which requires an imaginative interpretation in which the verb ‘sneeze’ is not used in its more typical intransitive sense (as in ‘Mary sneezed’), but rather as a verb in which the sneezing causes directed motion in the football. If the process is extended too far, it begins to break down, as in ‘Mary smiled John the football’. The important point is that in all these examples the transfer-of-possession meaning comes from the construction itself, not from the specific words of which it is constituted. Linguistic constructions are thus an important part of the inventory of symbolic resources that language users control, and they create an important top-down component of the process of linguistic communication - in keeping with the role of abstract schemas in many other domains of human cognition.
All constructions, whether composed of one word or many categories of words in specific orders with specific markers and intonations, derive from recurrent events, or types of events, with respect to which the people of a culture have recurrent communicative goals. This means that a major function of all linguistic constructions is attentional - for instance, to take one or another point of view on a situation, or to ask a question about it. For example, the same event may be depicted as:
Fred broke the window with a rock.
Fred broke the window.
The rock broke the window.
It was Fred who broke the window.
It was the window that Fred broke.
What Fred did was break the window.
In each of these construals of the event, the perspective is slightly different, and Fred’s and the rock’s roles in the process are made attentionally salient to different degrees (Croft, 1991), with each construal being felicitously used for a particular communicative purpose in a particular discourse context; the different constructions appear to have been created for precisely these types of attentional functions.
Different languages are constituted by different specific symbols and constructions, of course. In some cases these differences have become relatively conventionalized across linguistic structures within a language, so that we may speak of different types of languages with regard to how they symbolize certain recurrent events or states of affairs. An important area of research in cognitive linguistics, therefore, concerns the different resources that different languages provide for symbolizing certain universal events and situations (van Valin and LaPolla, 1997). For example, almost all people speaking almost all languages have general constructions for talking about someone causing something to happen, someone experiencing something, someone giving someone something, an object moving along a path, and an object changing state. Especially well studied are motion events, as analysed by Talmy (1988):
English: The bottle floated into the cave.
Spanish: La botella entró la cueva flotando (‘The bottle entered the cave floating’).
In English the path of the bottle is expressed by the preposition ‘into’, and the manner of motion is expressed by the verb ‘float’; whereas in Spanish, the path is expressed by the verb ‘entró’, and the manner of motion is expressed by the modifier ‘flotando’. Because this difference is pervasive and consistent in the two languages, we may say that Spanish is a verb-framed language, because it typically depicts the path of motion in the verb, whereas English is a satellite-framed language, because the path of motion is typically expressed by a satellite such as a preposition or particle. There are other typological differences among languages as well.
The cognitive bases of linguistic constructions have been most thoroughly investigated by Langacker (1987, 1991). Most importantly, Langacker has provided an account of the different cognitive operations that characterize the two categories of words that form the heart of the most general constructions in most of the world’s languages: verbs and nouns. Verbs form the relational backbone of linguistic expressions and have to do with processes that unfold over time, or else states that remain stable over some period of time. Thus, to be able to say that something has moved or changed, there must have been at least two moments of attention: one in which an entity was in one location or state, and another in which it was in another location or state. For example, we cannot make the judgment that ‘She crossed the river’ on the basis of a single snapshot of a woman at any location in or near a river; rather, we must have something like a first snapshot in which she is at a location on one bank of the river, a temporally subsequent snapshot in which she is in the river, and another in which she is on the opposite bank. We can also say ‘She is across the river’ for this same situation (woman standing on one bank), but in this case there is no implication that a process of crossing ever occurred. Note that the description of states, as in ‘She remains across the river’, also requires at least two moments of attention in which the woman stays in the same location on the other side of the river (a single snapshot could not rule out that she was merely initiating an activity). Interestingly, most languages allow their speakers to use some nouns as verbs in certain situations, in which case some kind of process interpretation is required, as in ‘brush with a brush’, ‘hammer with a hammer’, ‘dock the boat’, and ‘table the motion’ (typically an action closely associated with the object).
Nouns are words used to indicate the participants in events or situations. Most prototypically these are spatially bounded entities such as people or trees or bicycles, but nouns may also be used to designate temporally bounded entities such as Tuesday, or abstract entities such as corporations or virtues. For Langacker, the key cognitive operation involved is the bounding of a portion of experience so as to create a thing distinct from the surrounding flow of experience, as illustrated by the fact that nouns may be used to talk about what are clearly events in nature (e.g., the parade, the party). Indeed, in most languages there are processes by means of which a verb like ‘to swim’ may be turned into a noun like ‘swimming’ if the activity is thought of as a participant in an event or state of affairs, as in ‘This swimming strengthens my leg muscles’. The bounding process that creates nouns thus reflects not the independent structure of the world, but rather the fact that an important function in linguistic communication is the identification of things to be talked about.
This view of linguistic communication and the cognitive processes on which it depends is obviously very different from generative grammar and other formalist approaches. But cognitive linguistics can nevertheless account for all the major phenomena of generative grammar. For example, on the generative grammar view, natural language structures may be used creatively because speakers possess a syntax divorced from semantics. On the cognitive linguistics view, on the other hand, linguistic creativity results quite simply from the fact that speakers have formed highly general linguistic constructions composed of word categories and abstract schemas that operate on the categorical level. That linguistic categories and schemas are formed in the same basic way as other categories and schemas is evidenced by the fact that they show the same kinds of prototypicality effects and metaphorical extensions as other categories and schemas (Lakoff, 1987; Taylor, 1996). Also, generative grammar analyses depend crucially on hierarchically organized tree structures that are seen as unique to language.
A major objective of cognitive science is to understand the nature of the abstract representations and computational processes responsible for our ability to reason, speak, perceive, and interact with the world.
Not everyone agreed with behaviourism, however, and a series of historical events clearly represents a rebellion against it and the birth of a new approach: the cognitive science revolution, whose gestation began at the end of the second world war. This revolution enabled cognitive researchers to cast off their fears of mentalism and attempt to understand the processing of information in the head - in the mind - that underlies behaviour. By the mid-1970's the conceptual and methodological frameworks of linguistics, psychology, and philosophy were fundamentally altered in ways characteristic of what Thomas Kuhn (1962/1970) has referred to as a ‘scientific revolution’. A generation of new thinkers, including Chomsky, George Miller, and Hilary Putnam, had created a new paradigm, and a new generation of researchers took up the banner and pursued a radically different set of research agendas. In addition, a brand new discipline - artificial intelligence - emerged, and such leaders as Allen Newell and Herbert Simon linked its approach to those of the other disciplines.
Of all the research fields that would come to play a major role in cognitive science, artificial intelligence, usually classified as a branch of computer science, was the newest, having had to await the invention of the computer itself. The digital computer, as we know it, was another product of the second world war, though the idea of automated computing goes back much further. One key element of computing is the idea of a set of instructions that can be applied mechanically. An early version of this idea was found in an 1805 device of Joseph-Marie Jacquard, which used removable punch cards to determine the pattern that a loom would weave. In the 1840's, Charles Babbage made use of this idea in his design of an analytical engine, which was to have been a steam-driven computational device. Babbage never succeeded in actually building the engine, but he did engage in a fruitful collaboration with Lady Lovelace (Ada Augusta Byron), who worked out ideas for programming Babbage’s machine.
A major hurdle faced by Babbage in the nineteenth century was the lack of sufficiently precise manufacturing for the components of his engine. Even so, by the start of the twentieth century, precision had improved to the point where mechanical calculators could be manufactured by companies such as the Tabulating Machine Company, which later merged into IBM. These machines were purely mechanical - without electrical components - but in the late 1930's Claude Shannon showed that electric switches could be arranged to turn one another on and off in such a way as to perform arithmetic operations. The idea of using electronic circuits to carry out calculations was put into practical use during the second world war in England by Alan Turing and his collaborators at Bletchley Park, in the effort to decipher German military communications. The German cipher machine Enigma was a particular challenge, since it was built out of a set of rotors that permuted the letters of the alphabet: The rotors were mechanically coupled so as to constantly change the alphabetic substitutions employed in the cipher. The challenge to Turing and his colleagues was to examine all combinations of encoding assignments in the machine to find the one used in the cipher - a huge computational task. For its highest-level communications, Germany employed an even more sophisticated cipher, which produced what researchers at Bletchley Park referred to as ‘Fish’ cipher text. To decipher these messages, Bletchley Park engineers led by Tommy Flowers designed a special-purpose machine, Colossus, which employed thousands of electronic valves (vacuum tubes).
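Shannon's observation that on/off switches can perform arithmetic can be illustrated with a short sketch, using Boolean values in place of physical relays. The circuit shown - a chain of one-bit adders built from AND, OR, and XOR gates - is a standard textbook construction, not Shannon's own example.

```python
# Arithmetic from switches: each gate is a Boolean operation on 0/1 values.

def half_adder(a, b):
    # XOR gives the sum bit, AND gives the carry bit
    return a ^ b, a & b

def full_adder(a, b, carry):
    # Two half adders plus an OR combine a column of three bits
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry)
    return s2, c1 | c2

def add_bits(x_bits, y_bits):
    """Add two little-endian bit lists of equal length."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 3 ([1,1,0] little-endian) + 5 ([1,0,1]) = 8 ([0,0,0,1])
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1]
```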
Another world war two era computer, the Electronic Numerical Integrator and Calculator (ENIAC), was developed by J. Presper Eckert and John Mauchly at the Moore School of the University of Pennsylvania. It was designed to calculate artillery tables, which would specify how to aim artillery on various terrains so as to hit desired targets. Despite massive effort, ENIAC remained incomplete until 1946. John von Neumann designed the basic architecture now named for him - the ‘von Neumann architecture’. It was, however, only fully realized in ENIAC's successor, EDVAC (Electronic Discrete Variable Automatic Computer), and has continued to play a central role in computing to the present.
At the heart of the von Neumann architecture is a distinction between a computer's memory and its central processing unit (CPU). One of von Neumann's innovations was to recognize that the instructions comprising a program could be stored in memory in the same manner as the data being operated upon. Computer operations are carried out in cycles in the CPU: in each cycle both data and instructions are read from memory into the CPU, which carries out the instructions and returns the results to memory.
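The fetch-execute cycle just described can be sketched as a miniature simulator. The four-operation instruction set here is invented for illustration; the essential point is that instructions and data occupy the same memory.

```python
# A minimal sketch of the von Neumann cycle: fetch, decode, execute.
# The LOAD/ADD/STORE/HALT instruction set is invented for illustration.

def run(memory):
    pc, acc = 0, 0                    # program counter, accumulator
    while True:
        op, addr = memory[pc]         # fetch an instruction from memory
        pc += 1
        if op == "LOAD":              # decode and execute
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            return memory

# Instructions (cells 0-3) and data (cells 4-6) share the same memory.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(memory)[6])  # 5
```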
We now come closer to the role of the computer in the birth of cognitive science, but we need to make another brief digression. After the war, computers became increasingly powerful, and with such power a possibility began to be realized that had first been envisaged by Gottfried Wilhelm Leibniz, the famous seventeenth-century philosopher. He had proposed that numbers could be assigned to concepts in such a way that reasoning could be carried out by manipulating the numbers. In 1854, the English mathematician George Boole took a major step in developing this idea in a book called ‘The Laws of Thought’. Boole formulated several operations that could be performed on sets, which could also be applied to propositions. He suggested that the laws governing these operations could serve as laws of thought. The switches that Shannon had devised in the late 1930's performed these basic Boolean operations, with the resulting state of the switches (on or off) corresponding to the truth values of propositions (true or false).
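Boole's operations on truth values are directly mirrored by the Boolean operators of any modern programming language. The example propositions below are invented for illustration.

```python
# Boolean operations on the truth values of propositions, in Boole's spirit.
p = True    # "The woman is a lawyer"
q = False   # "The woman is a judge"

print(p and q)       # False: conjunction
print(p or q)        # True:  disjunction
print(not q)         # True:  negation
print((not p) or q)  # False: the material conditional "if p then q"
```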
Boole's system was limited to operations on complete propositions (e.g., 'The woman is a lawyer') and could not deal with structure internal to the proposition (e.g., the fact that the predicate 'is a lawyer' is being predicated of 'the woman'). Gottlob Frege, though, expanded the system in 1879 to deal with such predications (permitting inferences from premises such as 'All lawyers have passed the bar exam' and 'The woman is a lawyer' to the conclusion 'The woman has passed the bar exam'): The resulting system of predicate calculus provided a way of formalizing inferences that has been extremely influential. The idea of formally representing information in symbolic notation and using formal operations to transform this information provided a critical entrée to the use of computers to simulate reasoning.
Turing, too, had an ingenious proposal: he offered a test - not a definition, but a test - for thinking (Turing, 1950). His suggestion was to approach the question in terms of the behaviour of the machine: Could its behaviour pass for that of a thinking person? If yes, it thinks. In what is now known as the Turing test, one decides whether a machine is thinking by arranging for a human interrogator at a keyboard to pose questions to both a computer and a person, whose answers are displayed. If the interrogator, even after sophisticated questioning, cannot differentiate the computer from the human, then the computer's activity counts as thinking. Turing recognized that it would require a very complex machine to engage in any protracted dialogue with humans and not be detected, but he believed that a computer would eventually pass this test.
The British experimental psychologist Sir Frederic Bartlett (1932) studied the role of subjective construction in memory. Memories, he claimed, are not simple recordings of experienced events, but are filled in by their subjects and embellished with details not present in the original context. For example, when asked to recall a Native American folktale, 'The War of the Ghosts', his subjects made changes in the plot of the story which tended to Westernize it. To explain this, Bartlett proposed that they employed their existing schemata to organize events in the story. As we will see, the notion of a schema as a structure for organizing information in memory has played a major role in subsequent cognitive psychology and in cognitive science generally. Bartlett also trained a number of influential British psychologists, including Donald Broadbent, who pioneered attention research using multi-channel listening techniques.
Nevertheless, this general approach can be extended to more complex situations in which there are more than two alternatives or the alternatives have unequal probabilities - for example, any message in English - and can be used to measure the amount of redundancy in such messages. Shannon (1948, 1951) presented a text one letter at a time to subjects whose task was to predict the next letter. There were 26 alternatives at each point, and they had unequal probabilities due in part to context. For example, 'u' has a fairly low probability overall, but is highly probable following 'q'. Shannon defined redundancy as the reciprocal of the average number of guesses needed to generate the correct letter. Averaging across the entire text, subjects required an average of two guesses per letter, yielding a redundancy estimate of about 50 percent for printed English. Shannon's information theory provided the key to interpreting Miller's dissertation result that messages differed in how easily they could be understood in noisy environments. Miller and Selfridge (1950) found further application for information theory in a list-learning experiment: The closer the word lists came to resembling English sentences (i.e., the greater their redundancy), the more words a subject could remember.
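Shannon's guessing-game measure can be sketched in a few lines of code. The guess counts below are hypothetical, and redundancy is computed, as in the passage above, as the reciprocal of the average number of guesses per letter.

```python
# A sketch of Shannon's guessing-game redundancy estimate. The data are
# hypothetical: guesses[i] is how many tries a subject needed before
# correctly predicting the i-th letter of a text.

def redundancy_estimate(guesses):
    """Redundancy as the reciprocal of the average number of guesses,
    following the definition used in the passage above."""
    avg_guesses = sum(guesses) / len(guesses)
    return 1.0 / avg_guesses

# Hypothetical guess counts: predictable letters take one guess, harder
# letters take several. These average out to 2.0 guesses per letter.
guesses = [1, 1, 3, 2, 1, 4, 2, 1, 2, 3]
print(redundancy_estimate(guesses))  # 0.5, i.e., about 50% redundancy
```

With an average of two guesses per letter, the estimate comes out to 0.5, matching the roughly 50 percent redundancy Shannon reported for printed English.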
In one of the most influential papers of this period, Miller (1956) addressed more extensively the question of the cognitive structure of memory. The study of human learning and memory had long moved along the path laid down by Hermann Ebbinghaus (1885/1913), who served as his own subject in a prolonged series of experiments in order to bring higher mental processes under experimental control and quantitative analysis. In his attempts to eliminate extraneous influences, Ebbinghaus arrived at the idea of using pronounceable nonsense syllables such as DAX and PAF as his stimuli rather than words. He studied lists of these nonsense syllables daily, and then tested himself to determine rates of learning and forgetting. Ebbinghaus uncovered important functional relations (e.g., repetition yields better retention, especially if distributed across several days; the amount retained is a logarithmic function of time), but the down side was his neglect of the cognitive structures and processes that meaningful stimuli so readily engage. Frederic Bartlett's (1932) previously described idea that schemata help organize memory offered a corrective, but the verbal-learning tradition in North America continued to pursue updated variations on the Ebbinghaus tradition, asking, for example, which particular model of stimulus-response conditioning might best account for the accumulated data on paired-associate learning. Retention was an indicator of learning, not a clue to the nature of the memory system within.
Memory is a single word that refers to a complex and fascinating set of abilities that people and other animals possess, enabling them to learn from experience and retain what they learn. In memory, an experience affects the nervous system, leaves a residue or trace, and changes later behaviour. Types of memory are tremendously varied: So, too, are the techniques used in cognitive science to investigate them. The aim of the present chapter is to give an overall sense of types of memory as well as of techniques used in the experimental study of memory.
Biologists, philosophers, and psychologists have described and discussed dozens of types of memory. One is procedural memory, which refers to knowledge of how to do things such as walking, talking, riding a bicycle, or tying one's shoelaces. Often this knowledge is difficult to verbalize, and the procedures are often acquired slowly and only after much practice. (Imagine someone trying to learn how to swim by reading about swimming, but not practising the skill.) The types of conditioning to which most species of animals are subject - classical (or Pavlovian) conditioning and instrumental (or operant) conditioning - are other examples of procedural memory.
Procedural memory is often contrasted with declarative memory, or knowing facts about the world and about one's past (Squire, 1987). A major distinction within declarative memory is that between episodic and semantic memory. Episodic memory refers to the remembering of episodes of our lives and is contextually bound: That is, the time and place of occurrence are inextricable parts of memory for episodes. This type of memory enables the mental time travel in which we engage when we think back to an earlier occasion: Because it constitutes an individual's personal history, it is sometimes called autobiographical memory. Semantic memory (or generic memory) refers to our general knowledge of the world (that NaCl is the formula for table salt, what a given word means, and so on). This knowledge is not tied to one episode, and we need not refer to the time or place in which we learned these facts to know that they are true.
This is not the only way to distinguish types of memory. Another important difference is between short-term and long-term memory. Short-term memory (or primary memory) refers to our ability to hold in mind a relatively small amount of information that is rapidly forgotten if we stop attending to it. A good example is remembering a telephone number for a brief period after looking it up. This ability is also referred to as working memory, because it permits us to perform the mental work of manipulating symbols and thinking. Long-term memory (or secondary memory) is a rather imprecise term used to refer to retention of various kinds over long time periods; depending on content, 'long' may mean anywhere from 10 seconds to many years (hence the fuzzy nature of the term).
At the base of remembering is long-term episodic memory: How do we remember what we read in the paper, where we parked our car this morning, the earliest event from our childhood, and the myriad other events of our lives? We often need to recall events from the past as accurately as possible, and this process can be effortful. The process of recognition (when we are asked to judge whether something has been presented to us previously) appears easier than recall. In considering the study of memory and the critical principles of remembering, our concerns also lie with forgetting and with memory illusions - cases in which we confidently endorse memories that are false.
Ebbinghaus advocated careful laboratory research as a sure path to knowledge, and the laboratory research tradition begun by Ebbinghaus still exists, albeit in radically different form. The development of alternative approaches has enriched today's cognitive science, however. Some researchers advocate more naturalistic methods (the everyday memory tradition). Others seek the biological underpinnings of memory in studies of animals or in the tradition of cognitive neuroscience (measuring neural activity through modern neuroimaging techniques while people are engaged in memory tasks, or studying the deficits and pathologies of memory in brain-damaged patients). Yet another approach takes inspiration from artificial intelligence and asks how much human memory resembles computer memories. Some researchers seek to simulate and to understand memory processes by creating neural network models. Each of these approaches makes a contribution, but our perspective here is that of the psychology of learning and memory, with behavioural methodologies as the primary tool of study.
The learning/memory process can be divided into three hypothetical stages: encoding (original acquisition of information), storage (retention of information over time), and retrieval (gaining access to information when it is desired) (Melton, 1963). Any time someone accurately remembers an event, all three stages have been successfully completed. If someone forgets an event, we can ask at what stage or stages the process went wrong. However, answering this question is not as straightforward as it seems, because the three stages are interlocked, and psychology experiments cannot give a clear answer to the question of what stage in the process has failed.
A standard psychology experiment on learning and memory has two stages. In the first stage people are exposed to information to be learned, be it sets of words, numbers, pictures, sentences, a story or prose passage, or a videotape of a complex event. In the second stage, a test is given some time later in which people may be asked to recall or to recognize the material. The first stage of memory experiments corresponds to the encoding of material, but, of course, there is no way to tell whether material was actually encoded unless it is tested. The second stage corresponds to the retrieval stage, but, of course, it does not measure retrieval per se - information can only be retrieved if it was encoded and stored.
Since the work of Tulving and Pearlstone (1966), psychologists have distinguished between the availability and the accessibility of information in memory, where availability refers to the information about events that a person has encoded and stored, and accessibility refers to the information that can be retrieved on any particular test occasion. The holy grail for psychologists interested in memory would be a test or procedure that accurately measured the contents of a person's knowledge - what the person had encoded and stored. At one time it was argued that recognition tests measure stored knowledge directly, but recognition procedures are subject to the same limitations as recall procedures. Every test of memory is an imperfect indicator of knowledge, whether in the classroom, in standardized tests, or in the psychology laboratory. We can never measure what information is encoded and stored; we can only measure what information is accessible or retrievable under a particular set of test conditions.
Despite these problems, the division of the learning/memory process into three stages can still be useful. We can still sometimes ascribe forgetting to failures (say, of retrieval). Imagine people studying a list of 100 words on which umbrella is the fifty-first word. If people were tested by being asked to recall the words in any order on a blank sheet of paper (a procedure called free recall), the probability of recalling umbrella would be vanishingly small. Was the word not encoded, not retained, or just not retrieved? There is no way to know from this one condition. However, if the people were then given retrieval cues to prompt memories for the words and the cue parasol elicited recollection of umbrella, then clearly the word had been encoded and stored, and the failure on the first recall test was one of retrieval. (It would be necessary to safeguard against the possibility that people are merely guessing the words from the cues, but in practice ensuring this is relatively easy.)
Most experiments on memory can be classified as encoding experiments or retrieval experiments. Encoding experiments involve manipulations of some factor during the encoding stage (e.g., the type of material, the way the material is processed), with other factors (e.g., the type of test that is used to assess knowledge) held constant. Retrieval experiments hold the encoding factors constant but manipulate the retrieval factors, such as the type of test given or the particular instructions given before the test. One particularly useful research strategy in investigating memory combines these two types of experiments and has been called the encoding/retrieval paradigm (Tulving, 1983). For example, two different strategies for studying material might represent the encoding manipulation, and two different forms of test might be used to assess knowledge. The encoding/retrieval paradigm is efficient, because it permits several questions to be asked at once. For example, will the outcome of the encoding manipulation generalize across more than one kind of test? Similarly, will different types of test reveal different patterns in the knowledge acquired and stored earlier, or in the effectiveness of retrieval cues? These factors are studied jointly through combined encoding and retrieval experiments.
One critical aspect of the learning and memory process is the original acquisition, or encoding, of information. Many experiments have documented the importance of a general principle, namely, that the more effectively information is encoded, the better later recall is. Of course, such a statement runs the risk of being tautologous unless we can specify a way of defining effectiveness of encoding independently of the level of recall or recognition. Frequently, that is impossible to do. However, this general principle can at least order many findings from the experimental study of remembering. In general, such research conforms to an encoding paradigm: a variable is manipulated during the study phase of an experiment, and the interest is in seeing how it affects performance on a later test.
One robust finding is that more meaningful information is better remembered than less meaningful information. For example, coherent passages are remembered better than chaotic ones (created, for example, by keeping the words from the coherent passage the same but rearranging them). Similarly, new information about bridge, chess, or baseball will be better remembered by experts in those domains than by novices. The new information can be better assimilated (encoded) in terms of the expert's knowledge base.
Even very simple materials - such as words studied in a long list - can reveal this effect. Craik and Tulving (1975) reported experiments showing a levels-of-processing effect in remembering. The basic idea that Craik and Tulving were exploring is that the cognitive system processes information to different levels, or depths, and that the depth of processing determines later retention. For example, in reading the German word Gedächtnis, a reader of English (with a knowledge of the orthography of Western alphabets) could apply at least an orthographic, or graphemic, analysis and identify the graphemes of the word. A person with some knowledge of German phonology could sound the word out, even if he or she did not know its meaning. Finally, a person fluent in reading German could know the meaning of the word too. (And a German-English bilingual could translate it as 'memory'.) To comprehend the word, the reader must progress through graphemic (visual), phonemic (sound), and semantic (meaning) codes. The levels-of-processing approach predicts that remembering a word depends on the level to which it has been processed, with deeper (meaningful) processing leading to better retention.
Craik and Tulving (1975) manipulated experimentally the depth to which subjects had to process words on a list of 60 common words, such as bear, by requiring them to answer different questions about the words. Some questions directed attention to the word's appearance (Is it in upper-case letters?), others directed analysis to the word's sound (Does it rhyme with chair?), whereas others required consideration of the word's meaning (Is it an animal?). For half the words the answer to the question was yes; for the other half it was no. Subjects saw each word for five seconds while answering a question that induced graphemic, phonemic, or semantic processing. Keep in mind that the subjects viewed the words for five seconds in all conditions and that they could answer the questions in each case in under a second. What the results show is that, with all else held constant, retention could be dramatically affected by the split-second cognitive processing engendered by the questions that were asked. How well people remember events depends partly on what the events are, but also on how they are encoded: deeper (meaningful) processing of information surpasses phonemic or graphemic analyses in its effect on later retention.
Since the time of the Greeks, scholars have known that imagery can aid remembering. Instructors of rhetoric taught speakers mnemonic devices, which were critical for people who could not use written reminders. Modern experimental psychologists have confirmed the wisdom of using imagery in several types of controlled experiments. In most types of test, pictures are remembered better than words; this is true even in tests that would seem to favour verbal encoding. For example, if a long series of pictures and concrete words (words that refer to picturable objects) is presented, and people are asked to recall the items by writing either the words presented or the names of the pictured objects, pictures are better remembered than words. This occurs despite the fact that the verbal mode of response would seem to favour verbal over pictorial encoding.
To measure the effect of this manipulation, subjects were given a recognition test in which the 60 studied items were randomly intermixed with 120 additional non-studied words; subjects were told to go through the words and pick exactly 60 that they believed were previously studied. Chance performance on the test was 0.33 (60 out of 180 could be obtained by someone who had not studied the list at all). Clearly, the levels-of-processing manipulation had a dramatic effect on recognition: graphemic analysis produced relatively poor recognition, whereas semantic analysis produced extremely accurate retention, especially when the answer to the question was yes.
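The chance baseline follows from simple probability, and can be checked with a quick simulation. The numbers below mirror the design of the test (60 studied items among 180 test items), but the simulation itself is purely illustrative and is not part of the original study.

```python
import random

# Chance performance on the recognition test: a blind guesser picks 60 of
# the 180 test words at random, so each of the 60 studied words has a
# 60/180 chance of being among the picks - an expected hit rate of 1/3.
analytic_chance = 60 / 180
print(round(analytic_chance, 2))  # 0.33

# Illustrative simulation of a subject who never studied the list.
random.seed(0)
studied = set(range(60))  # indices of the 60 studied items
trials = 2000
hits = 0
for _ in range(trials):
    picks = set(random.sample(range(180), 60))
    hits += len(picks & studied)
simulated_rate = hits / (trials * 60)
print(round(simulated_rate, 2))  # close to 0.33
```

Performance well above this 0.33 baseline, as in the semantic-analysis condition, therefore reflects genuine retention rather than guessing.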
Nevertheless, this same principle extends to remembering events from our personal lives. Most of us can recall more accurately what we did on some salient occasion (New Year's Eve, our birthday) than on a day occurring a week earlier or later. A special name, flashbulb memories, is employed for memories of occasions that are emotionally very powerful, such as witnessing the birth of a child or living through some great national tragedy (an assassination). The analogy is that our memories of the details surrounding the occurrence - the place, our feelings, and even fine details of the event (or our reaction to it) - are so clear that they seem to have been caught as in a photographic flash and indelibly imprinted in memory. People have great certitude about such memories, even though studies show that some of the retained information is false. There is debate about whether flashbulb memories must be explained by some special mechanism, or whether they are simply strong memories of particularly distinctive events, working through the same general mechanism that makes a picture well remembered when placed in the context of many words (Conway, 1995).
The three factors listed - endowing events with meaning, using imagery, and making events distinctive - are all examples of how factors manipulated at encoding can powerfully affect memory. However, the fact that these manipulations occur during encoding does not mean that retrieval processes are unimportant; in most cases, the interaction between encoding and retrieval factors critically determines retention.
If you reflect on experiences you have had in trying to remember events from the distant past, the importance of retrieval conditions for remembering will become obvious. Perhaps you see someone familiar but cannot remember her name, and a bit later the name comes to you. Or someone asks you who starred in a particular movie and you draw a blank; when several possibilities are mentioned, you immediately know which one is correct. In another case, you return to a place where you used to live, and the sights and sounds bring back memories of events that you had not thought of for years. All of these common experiences show that having information encoded and stored in memory is no guarantee that it will be remembered; in addition to good encoding, appropriate retrieval conditions must exist for the events to be remembered.
Psychologists have studied the critical role of retrieval processes by manipulating the conditions and the types of cues provided to people during retrieval. In one common technique, people are given long lists of words belonging to common categories (e.g., birds: pigeon, sparrow; furniture: dresser, hat rack) with instructions to remember the objects in each category. Afterwards, some people are given a free recall test, in which they receive a blank sheet of paper with instructions to recall as many words as possible from the list: In one experiment people remembered 19 of 48 studied words under these conditions (Tulving and Pearlstone, 1966). What happened to the missing 29 words? Were they not well encoded and stored? Another group of people received a cued recall test with the category names given as retrieval cues. In this condition, subjects recalled about 36 words, or almost twice as many as in free recall. This shows that the failure to recall words under free recall conditions was due not solely to problems in encoding or storage, but also to retrieval factors. When supplied with strong retrieval cues, people can remember events that seemed forgotten under other conditions.
Many studies, using many different types of materials, have revealed the same general point: It is impossible to make absolute statements about how much or what kind of information is available (or stored) in memory; all we can ever know is what information is accessible (retrievable) under a particular set of test conditions. Change the retrieval conditions (or the nature of the test), and a different estimate of accessible information will be produced.
What determines the effectiveness of retrieval cues? The general rule, supported by considerable research, is the encoding specificity principle, which states that retrieval cues are effective to the extent that they match the way the original events were encoded (Tulving, 1983). In the experiment just described, the category names served as effective cues because they helped to re-create the encoding of the presented words, at least relative to free recall conditions. Similarly, the context in which events occurred can serve as an effective cue, which is why returning to a place from which one has long been absent can bring back memories of old experiences. The encoding specificity principle indicates that it is a mistake to consider either encoding factors or retrieval factors in isolation when discussing memory; rather, the interaction between encoding and retrieval is critical.
Thus, remembering is best conceived as the successful interaction of encoding and retrieval. Consider, for example, the effects of the distinctiveness of the event to be remembered. If a person sees one picture in a list of 99 words, it will be well recalled, but the same picture would be poorly recalled after being embedded in a list of 99 other pictures. Although the manipulation of distinctiveness occurs during the encoding stage of the memory experiment, the reason for its effectiveness probably depends critically on retrieval. The retrieval cue 'picture in the list' identifies only one item in the list, helping to remind the person of that one distinctive item, but the same cue is essentially useless when a large number of pictures has been studied. The same argument can be made for the other encoding factors described above: understanding how each affects retention requires consideration of retrieval factors too.
As another illustration of the interaction between encoding and retrieval factors, consider the effects of drugs on memory. Most drugs that depress activity in the central nervous system harm memory. Drinking alcohol or inhaling marijuana, for example, produces poor recall of events that occur while the person is under the influence of the drug. The traditional explanation has been that these drugs harm the brain's ability to encode and store events, and hence retention is poor. Although this explanation in terms of encoding factors is probably partly correct, it is not the whole story, because retrieval factors (in interaction with encoding) come into play in an interesting way. This is observed in the phenomenon of state-dependent retrieval: How well an event is remembered depends on the person's pharmacological state both during encoding and during retrieval. Matching states during the two phases aids retention relative to mismatching states.
In the most common type of experiment on state-dependent retrieval, four groups of people are tested in various conditions, as in an experiment by Eich, Weingartner, Stillman, and Gillin (1975). Two groups studied words in a categorized list like the one described earlier while sober, whereas two other groups were given a drug (marijuana) prior to study. A day after studying the material, the people returned and were then tested either sober or intoxicated, with all four possible combinations of conditions between study and test being used (sober at study, sober at test, etc.). People were given a free recall test followed by a cued recall test. Because these researchers used categorized word lists, the retrieval cues were category names. Consider first the free recall results. The first two groups show the standard effect of marijuana on memory: People who were intoxicated during encoding remembered less of the information when tested sober than did people who were sober on both occasions. The results of the third group showed that intoxication during only the retrieval phase also inhibits recall, although not as badly. The interesting case is the last group: people who were intoxicated during study actually recalled the information better if they were intoxicated again during the test. The advantage of the drug-drug condition over the drug-sober condition is what defines the phenomenon of state-dependent retrieval: Matching the pharmacological state during study and test improves recall. (These results, which have been replicated many times, do not argue that depressive drugs aid memory; the sober-sober condition always produced the best retention.)
These same general principles also seem to hold in research on mood and memory. People who learn information while depressed, for example, remember it better when they are depressed than when happy (and conversely). Again, this outcome occurs in free recall but not in cued recall.
The phenomena just discussed show the powerful interaction of encoding and retrieval conditions: Our understanding of all memory phenomena depends on considering encoding factors, retrieval factors, and their interaction. This is true even of mnemonic devices; memory improvement techniques have been of great interest to scholars throughout recorded history, and the most common techniques have been repeatedly discovered and employed. All mnemonic techniques rest on these general principles, supplying strategies for both effective encoding and effective retrieval.
Nonetheless, our memories are remarkable for being as accurate as they are. People who are rendered amnesic as a result of brain damage must be institutionalized or receive complete care at home, because our ability to remember affects everything that we do and every aspect of our being. (Imagine not being able to remember names, faces, where you put things, who told you facts, and so on.) Yet, as good as our memories are under normal circumstances, we are acutely aware that they are not perfect. We forget where we parked our car, a friend's telephone number, and important appointments. More surprising, we can systematically misremember events. That is, we do not forget that some event occurred, but our memory of the details, or even of the gist of what happened, proves wrong. We consider these issues under the headings of forgetting and false memories. Forgetting means the loss of information over time. In standard research on forgetting, different groups of people learn the same materials and then are tested (using some standard test) at various times after original learning, and the forgetting curve is plotted from the various groups' performance. Still, forgetting in this sense does not necessarily imply that the forgotten information has vanished from the brain; testing at some interval with more powerful retrieval cues might show recovery of the forgotten information. But it remains useful to speak of forgetting as loss of information over time when tested in a particular, constant way.
The nature of the forgetting function is relatively clear, but the explanations for forgetting are more unsettled. The earliest idea was simply that memories decay over time: just as muscles atrophy without use and become weaker, memory traces were thought to have a certain strength that decayed over time if they were not used. However, this notion has been discredited as a general explanation of forgetting (McGeoch, 1932). No mechanism is postulated; further, decay is occasioned by time, but time itself is not an explanatory construct. (Suppose a child asked why her bicycle rusted when left outside in the rain for a long time. Telling her that time caused the rust would not do, whereas an explanation in terms of oxidation - a process operating over time - would be more accurate.) In addition, empirical evidence showed that forgetting could be greater or lesser over time depending on the intervening conditions. In particular, if the time between learning some event and being tested on it is filled with similar events, greater forgetting occurs. This fact turned psychologists away from decay as an explanation of forgetting and toward interference.
Interference is undeniably critical to forgetting, but there is still no complete explanation of interference effects. Two classes of interference exist: proactive and retroactive interference. Suppose you try to remember the exact spot where you parked your car when you arrived at work on Monday, two weeks ago. This represents a difficult task for most of us because of interference: we park our car in a different location every day. All the times you parked your car before the day in question produce proactive interference for the target memory; all the places you parked your car after the day in question exert retroactive interference. The names indicate that earlier events can interfere with retention of events coming later, a proactive effect, or later events can interfere with earlier ones, a retroactive effect. These two classes of interference have been systematically examined for almost a hundred years, and both can be quite potent in causing forgetting under appropriate circumstances.
Forgetting usually refers to the omission of information: we try to remember something, and either nothing comes to mind, or what does come to mind can be rejected as the wrong information. The issue raised under the rubric of ‘false memories’ is whether we can vividly remember an event and its surrounding details when either the event never actually occurred, or it happened in a way very different from the way it is remembered. This issue of erroneous memories has been investigated sporadically since the turn of the century, and this research has occasionally played a large role in the wider world, such as in legal cases where the accuracy of witnesses’ memories of crimes is at stake. Psychologists have now identified several factors that reliably lead to the creation of false memories.
One of the most potent factors creating false memories is retroactive interference. We considered the role of interference in forgetting, but interference does not lead simply to omissions of memories; it also leads to false memories. People can become confused about the source of material and can incorporate information that they read or heard about after an event’s occurrence into their recollection of that event. E.F. Loftus (1991) has reported many experiments documenting this phenomenon. In the basic paradigm, people witness a simulated accident or crime (say, a robbery) presented on videotape or in a series of slides. At some later point, they read a passage or answer a series of questions. In an experimental condition, the passage or questions contain some erroneous information about the original scene, such as the statement that the robber had a mustache (when in fact he did not). Subjects in a control condition read the passage without the misleading information. Later, subjects in both conditions receive a recognition or recall test in which they are asked about the crime or accident. Interest centres on memory for the misleading information that was planted later. The outcome in dozens of experiments is that people will frequently remember the erroneous information as having actually happened in the original event, although the magnitude of the misinformation effect (as it is called) depends on many factors. The misleading information not only causes forgetting of what really happened but seems to replace the correct information with erroneous information.
One practical implication is that suggestive questioning of witnesses to a crime by police or lawyers can undermine the witnesses’ accurate retention of what really transpired.
A second method of creating false memories is through the presentation of related information. If people read a list of related words, or hear a prose passage, they will often mistake another related word or sentence as actually having occurred when in fact it did not. In one straightforward paradigm for creating such a memory illusion, people hear lists of words that are all associatively related to a word that is not presented. For example, they hear ‘hill, valley, climb, summit, top, peak . . .’, all of which are associates of the nonpresented word ‘mountain’. Subjects frequently recall the word mountain as having occurred in the list and recognize it as often as they do words that actually were presented (Roediger and McDermott, 1995). These illusory memories may be due to a failure of reality monitoring, as Johnson and Raye (1981) call it: Did I hear something, or did I only imagine it?
As the previous question indicates, a third potent source of false memories is imagination. Just as imagery can boost retention of events that actually did occur, as described, so can imagination create false memories. If people imagine events, they are more likely to think the events really happened when they are tested later. In addition, imagining events can inflate one’s estimate of the frequency with which the events actually occurred.
The three factors listed - interference, relatedness, and imagination - can all give rise to false memories. The issue is a critical one for understanding memory and will be the focus of continuing research in years to come.
The frame-like structure of declarative memory: production systems typically represent declarative memory items in terms of entities called frames or schemas. Each frame is simply a list of attribute-value pairs, in which attributes represent dimensions (e.g., colour, size, location) that take on the values of the entity that the memory item denotes. For example, a declarative memory item representing some visual object might have a slot for the object’s colour, another slot for the object’s shape, and yet another slot for the object’s position. Different kinds of items can have different sets of slots. One can think of the different combinations of slots as representing different object categories, as well as relations between objects.
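A frame of this kind can be sketched as a simple attribute-value mapping. The slot names and example values below are illustrative assumptions, not taken from any particular production-system architecture.

```python
# Minimal sketch of frame-like declarative memory elements: each frame is
# just a set of attribute-value (slot) pairs; different categories of item
# carry different sets of slots.

def make_frame(category, **slots):
    """Build a frame as a dict with a category tag plus slot-value pairs."""
    frame = {"category": category}
    frame.update(slots)
    return frame

# A visual object with slots for colour, shape, and position:
block = make_frame("visual-object", colour="red", shape="cube", position=(3, 1))

# A relation between objects gets a different set of slots:
on_rel = make_frame("relation", kind="on", above="block-A", below="table")

print(block["colour"], on_rel["kind"])
```

The same scheme extends to relations between relations: a relation frame can take other frames as slot values, which is what gives frame-like structures their representational power.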
Such frame-like memory structures provide precise, powerful representations of things in the world, including objects, relations between objects, and relations between relations. This representational power is especially important when one tries to build systems that do complex problem solving. However, this form of memory representation often has difficulty in situations where the knowledge is more continuous and less hierarchical (e.g., low-level vision).
Interestingly, the particular organization of declarative knowledge in a production system usually does not have immediate consequences for the system’s performance. That is, one can get similar behaviour from very different organizations of memory items. For example, one can use a single declarative memory element with many slots representing all that one knows about some individual, or one can have a large number of declarative elements each representing an individual fact about that individual. A production system can function equally well with either representation scheme. The reason is that what matters is primarily whether information is contained somewhere in memory, not so much which information is stored together. If a different organization is selected, the productions are rewritten to accommodate the new structure. It is important to note, however, that in production systems that learn, the organization of declarative memory can have a strong influence on performance.
Results of computation are stored in a potentially temporary declarative memory. Declarative memory does more than represent objects and features in the environment: it also represents the intermediate results of tasks that cannot be solved all in one step. For example, when mentally multiplying two two-digit numbers, you must mentally store the intermediate products. Thus, a production system for doing this would contain some declarative memory elements that represent the external multiplicands as well as other declarative memory elements that represent the internal partial products. Another way in which declarative memory serves this function is in storing goals and sub-goals.
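The role of declarative memory in holding intermediate results can be illustrated with a toy production system. Everything below - the rule names, the tuple format for working-memory facts, and the choice of two-digit addition rather than multiplication - is an illustrative assumption, not the design of any system named in the text; each production is an independent condition-action rule that fires when its condition matches, depositing partial results back into declarative memory.

```python
# Toy production system: working memory is a set of facts; each production
# is a condition-action rule that fires on matching facts and adds new
# ones. Intermediate results ("partial" facts) live in declarative memory
# until the final answer is assembled.

working_memory = {("goal", "add", 23, 45)}

def add_tens(wm):
    for fact in list(wm):
        if fact[:2] == ("goal", "add"):
            _, _, a, b = fact
            wm.add(("partial", "tens", (a // 10 + b // 10) * 10))

def add_ones(wm):
    for fact in list(wm):
        if fact[:2] == ("goal", "add"):
            _, _, a, b = fact
            wm.add(("partial", "ones", a % 10 + b % 10))

def combine(wm):
    tens = {f[2] for f in wm if f[:2] == ("partial", "tens")}
    ones = {f[2] for f in wm if f[:2] == ("partial", "ones")}
    if tens and ones:
        wm.add(("answer", tens.pop() + ones.pop()))

productions = [add_tens, add_ones, combine]  # modular: just append more rules

# Run to quiescence: keep firing productions until nothing new is added.
changed = True
while changed:
    before = set(working_memory)
    for production in productions:
        production(working_memory)
    changed = working_memory != before

print([f for f in working_memory if f[0] == "answer"])
```

Note that the partial-product facts remain in working memory after the answer is found, which is exactly the design question the text turns to next: should such intermediate elements persist, or be erased?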
This function of declarative memory raises another important and related question: Are these declarative memory elements permanent? In particular, are all the intermediate products of complex tasks erased after the task is complete, or do they leave long-lasting declarative memory elements? The basic problem is that the more information there is sitting around in declarative memory, the more likely it is that many productions will be satisfied simultaneously. This, in turn, complicates the process of conflict resolution. Moreover, this issue relates to a common psychological finding considered to be a basic feature of human cognition: the limited nature of short-term or working memory.
Production system designers have proposed a wide range of answers to these questions. At one extreme are systems in which items stay around forever once they are created. At the other extreme are systems in which items are deleted once the system moves on to the next task. The only way in which such systems can remember facts over long time spans is to have productions that re-create the facts in declarative memory when they are required. Intermediate between these two extreme approaches are systems in which the elements vary in activation (which in turn determines how available or easily retrieved they are). The activation increases each time the represented facts or items are encountered and decays with time after each encounter. At first blush, it would seem obvious that the vast body of empirical evidence from experimental studies of human memory could be used to select among these approaches. However, it turns out that one can produce the effect of a limited working memory using any of these schemes, and the ultimate answer will require both further experimental evidence and detailed modelling of those experimental results.
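The intermediate, activation-based scheme can be sketched as follows. The boost-on-encounter and exponential-decay rule below is one hypothetical formulation, loosely in the spirit of activation-based architectures; the boost size and decay rate are illustrative assumptions, not the mechanism of any specific system.

```python
import math

# Sketch of an activation-based declarative element: activation is boosted
# each time the item is encountered and decays exponentially between
# encounters. Boost size and decay rate are illustrative assumptions.

class DeclarativeElement:
    def __init__(self, content, decay_rate=0.5):
        self.content = content
        self.decay_rate = decay_rate
        self.activation = 0.0
        self.last_time = 0.0

    def _decay_to(self, now):
        elapsed = now - self.last_time
        self.activation *= math.exp(-self.decay_rate * elapsed)
        self.last_time = now

    def encounter(self, now, boost=1.0):
        self._decay_to(now)
        self.activation += boost

    def availability(self, now):
        """Current activation, determining how easily the item is retrieved."""
        self._decay_to(now)
        return self.activation

elem = DeclarativeElement("where the car was parked")
elem.encounter(now=0.0)
elem.encounter(now=1.0)   # repeated encounters raise activation...
print(round(elem.availability(now=2.0), 3))  # ...and it decays with time
```

Under any such scheme, only the most recently or frequently encountered elements stay highly available, which is one way to produce the effect of a limited working memory without a hard capacity limit.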
How does knowledge interact, and how does learning become generalized? Production systems provide strong answers to these fundamental questions: (1) learning occurs at the unit of the production; (2) transfer from one situation to another occurs to the extent that the same productions are applicable in both situations. This assumption about the modularity of productions allows production system designers to determine - through a detailed analysis of a task domain, a careful encoding of the verbal and behavioural protocols of human problem-solvers, or both - what the individual productions are, and simply add them to the system. One does not have to decide where to put a production: its conditions define when it will be used.
Because of their modularity, production systems scale up well to complex tasks. That is, not only do production systems function well on small, simple tasks, they also function well in more realistic environments involving many sub-tasks and thousands (or more) of bits of knowledge. For example, TacAir-Soar, a production system with tens of thousands of productions, can fly a simulated plane in a dogfight (in real time) while doing language comprehension and production, and is capable of providing a verbal summary of the mission afterwards (Tambe et al., 1995).
Nonetheless, there are a number of areas in which production system models have already done very well, and are arguably the strongest (and occasionally the only) models in those areas. These areas are almost exclusively instances of higher-level cognition and generally require the coordination of many kinds of knowledge. They include learning mathematical skills such as algebra and geometry, learning computer programming skills, language comprehension, scientific discovery, and many other forms of high-level complex action and reasoning.
Nevertheless, a further crisis concerns relations between memory for lists and memory for sentences. By postulating two autonomous systems for processing and storing lists versus sentences, current multi-store theories of memory illustrate the SPM assumption that information processing and storage take place within autonomous modules, or stages. For example, Alan Baddeley, a leading British researcher investigating the psychology of memory, postulates a memory system known as the ‘phonological loop’, which processes and stores word lists in raw phonological form for short periods of time and is separate and distinct from the system for processing and storing the syntax and meaning of sentences (the central executive).
Baddeley’s multi-store account of memory currently faces two sorts of empirical crises. The first concerns cases where sentence variables influence list processing in ways that would not be expected if fundamentally autonomous memory systems process sentences versus lists. By way of illustration, consider a recently discovered effect whereby syntactic and semantic factors influenced immediate recall of words in rapidly presented lists. MacKay and Abrams (1996) compared immediate memory for identical words in chunked versus unchunked lists that were six to eight words long and rapidly presented via computer so as to preclude rehearsal. To explain the resulting effects and meet this crisis in general, multi-store theories must explain how semantic/syntactic factors influence a supposedly separate store traditionally viewed as purely phonological in nature.
1. Chunked list: phrase good faith mind night gown film (phrases italicized)
2. Unchunked list: phrases people faith mind night hose film (unrelated words).
The second crisis concerns phenomena in immediate recall of sentences that are attributable to factors that characterize lists. Repetition, when introduced into spoken sentences, causes a short-term memory phenomenon known as repetition deafness that is otherwise observed only within lists. To explain this result and meet this crisis in general, multi-store theories must explain how introducing a characteristic of lists can give rise to a phenomenon within the supposedly autonomous memory system for storing and processing sentences.
Since the mid-1950s, the various research programs within cognitive science have advanced our basic understanding of human mental function. Over the past 20 years, this basic science of mind has also contributed to the genesis of an applied science of learning and teaching that can powerfully inform educational practice and dramatically improve educational outcomes (Bruer, 1993). Classroom practice based on this applied science differs from traditional instruction in several ways. Instruction based on cognitive theory envisions learning as an active, strategic process. It assumes that learning follows developmental trajectories within subject-matter domains. It recognizes that learning is guided by learners’ introspective awareness and control of their mental processes. It emphasizes that learning is facilitated by social, collaborative settings that value self-directed student dialogue.
Each of these instructional features has its roots in specific cognitive science research programs. Research on human memory has established that memory is an active, strategic process, supporting the contention that learning itself is active, strategic, and constructive. Research on problem solving within subject-matter domains has resulted in descriptions of domain-specific learning trajectories that specify in some detail the knowledge and skills required for expertise within domains, and how knowledge and skills are best organized to enable expert performance. Research on metacognition has shown how introspective awareness and control of one’s mental processes can guide understanding and learning. Research on sociological factors in cognition has provided significant new insight into the importance of language, collaboration, and social discourse in cognitive development and learning.
Research on human memory has been a central pursuit of experimental psychology since its inception a century ago. A claim fundamental to cognitive psychology, which distinguishes it from behaviourism, is that the mind is an active information processor, not a passive communication channel. Early on, cognitive psychologists argued that we overcome intrinsic limitations on our short-term and working memory capacity by actively recoding knowledge into more complex symbol structures, or chunks. This suggested that learning might involve active, strategic recoding of knowledge structures in an attempt to discover the most efficient chunks for any given task. Cognitive research elaborated on an earlier, 1932 insight of F.C. Bartlett about how long-term memory functions: stimuli that cohere with pre-existing memory schemata are better recalled than stimuli that fit poorly into prior schemata. The result was the development of schema theory, which has contributed to how educational psychologists think about conceptual change. Cognitive research on memory provides empirical support for constructivist approaches to learning and teaching.
One of the most educationally significant results arising out of this research program is the encoding specificity principle. To remember a percept, we perform specific encoding operations on it which determine what is stored in memory. In turn, what is stored in memory determines what cues will be effective in helping us retrieve that memory trace (Tulving, 1983). Success on a memory retrieval task is not a function of the strength of a mental representation alone. There is a striking interaction between memory encoding and retrieval processes. In fact, the utility and efficacy of a particular memory process depends on, and interacts with, eventual retrieval conditions. A more general, educationally salient formulation of this same result is Morris, Bransford, and Franks’s (1977) transfer-appropriate processing: the value of particular types of acquisition activities can be assessed only in relation to the type of activities that subjects will be expected to perform at the time of retrieval or test. According to this principle, it is not possible to determine the value of learning activities in themselves. The value of a learning activity can be determined only relative to what one expects students to do with the material they are expected to learn.
Research on human memory tells educators two things. First, encoding interacts with retrieval: acquisition conditions interact with recall performance. Thus, the nature of the learning activity itself helps determine one’s subsequent ability to transfer that learning to new situations. Second, the interaction between encoding and retrieval is mediated by, and develops out of, learners’ prior understanding, their pre-existing knowledge, and their pre-instructional schemata. If memory is an active, constructive process, then one’s prior knowledge structures, current learning conditions, and future application conditions are inextricably intertwined. Cognitively sound instruction should build on this architectural feature of human memory.
Recognizing that one’s prior knowledge structures influence current learning has had a substantial impact on science instruction. Cognitive and educational research have documented numerous misconceptions that all of us have about how the physical world operates, and have found that these misconceptions are largely impervious to traditional science instruction. In physics, for example, misconceptions persist even after extensive formal instruction. Traditional instruction does not correct one’s prior misconceptions because it ignores them. Ignoring one’s pre-instructional understanding allows one to interpret and encode traditional science instruction using these pre-existing naive memory schemata. The result is that one can encode, or learn, schemata that are very different from those which teachers are attempting to impart.
Instructional approaches that attempt to assess one’s pre-instructional knowledge and beliefs about scientific principles are significantly more successful than traditional science instruction in correcting misconceptions and imparting a more expert-like understanding of science. Jim Minstrell and Earl Hunt, for example, developed a cognitive approach to high school physics instruction, a curriculum they called ‘physics for understanding’ (McGilly, 1995). Each instructional unit begins with a diagnostic test that allows the instructor to identify students’ prior understandings and observe how they reason with them. Minstrell and Hunt call the pieces of science knowledge that one uses in reasoning ‘knowledge facets’. Among the knowledge facets which one brings to a specific problem, some are incorrect, but others are correct. Correct facets can be used as anchors for instruction, to help one construct more expert-like schemata; incorrect facets become targets for instructional change. Evaluations that have compared Minstrell and Hunt’s approach to traditional instruction and to other experimental physics curricula show that students in the Minstrell-Hunt curriculum acquire significantly superior understandings of physics and scientific reasoning. Applying our understanding of memory to the design of science instruction can result in curricula which help students correct their naive understandings and misconceptions. Such instruction is significantly more effective than traditional approaches.
Representational pluralism complicates matters of interdisciplinary discourse in a discipline that is inherently interdisciplinary: what representation means varies from discipline to discipline and from theory to theory. It also ensures that much of what gets posited as internal representations are representations just in virtue of the description placed on cognitive processing. Cognitive scientists use representation to refer to a wide range of phenomena (e.g., processes, mappings, rules, theories, information-bearing states, causally co-varying structures, and so forth). As such, it is not obvious that everything that gets called a representation warrants the name; some notions of representation are so trivial and uninteresting that cognitive scientists are guaranteed to find representations wherever they look. That is not good science. Although it is almost universally assumed that all cognitive processes are computational processes, and all computational processes require internal representations as the medium of computation, an anti-representationalist challenge has arisen from discussions of several computation-related issues. Are intelligent systems computational systems? Is a symbolic computational framework a plausible framework for explaining biological cognitive processing? Do computational simulations explain how the mind/brain works? Thus, for a variety of reasons, some cognitive scientists contend that the status of internal representations may be as problematic as that of phlogiston.
A fragmented future will not satisfy many committed to cognitive science. But those committed to an integrated cognitive science may discover that the potential for fracture is not as serious as it seems; at present the dynamicists’ challenge is not fully formed. Central to the challenge are the notions of information processing and representation, but these notions are currently vague and must be theoretically regimented. It may well be that a mature dynamical account will posit genuine information processing and representation, although the representations employed will not be syntactically structured, sentence-like representations. Fodor’s language of thought is, in any case, under severe attack from a number of quarters in contemporary cognitive science. Other models of representation have come to the fore - graphs, maps, holograms, house plans, and other nonsentential schemes - and many investigators are exploring the idea that the brain may process information using one or more of these other kinds of representation.
Nonetheless, to explain how a given system does what it does, positing internal representations would be required just in case the system trafficked in entities whose content-bearing status does not depend on our descriptions or interpretations. Adopting the less-than-ideal vocabulary found in the literature on intentionality, an intrinsic representation bears content even if no one were to see it: it will bear content for as long as it exists. Such is the case because its ontological status depends on its being a content-bearer. Photos have this feature because, unlike, say, rocks, photos are produced to be content-bearers. Not everything has this feature; the contrast class is extrinsic representations - content-bearing entities whose status as representations does depend on our descriptions or interpretations. Since anything can be described as if it bore content, anything can be an extrinsic representation, but not everything is an intrinsic representation.
Do brains produce internal intrinsic representations? In all probability they do. Whereas photos are produced by a mechanistic process designed to produce entities that are ontologically dependent on being content-bearers, a plausible evolutionary analog would be the products of mental imagery. Surely mental images of one’s past experiences are intrinsic representations if anything is. Linguistic tokens are another candidate: once a linguistic type acquires content, tokens of that type will always bear content. So, if either mental images or linguistic utterances are intrinsic representations, some intrinsic representations are products of biological cognitive processing. What is at issue is this: do internal representations mediate the processes underlying the production of such representations? As these processes are supposed to be computational processes, the answer would seem to be yes.
Given the right sort of interpretation, analog quantities or distributed patterns of activation, like anything else, can be representational; but since it is the interpretational process alone that makes them representations, at best they are extrinsic ones. While such constructs are descriptively useful, trying to pass them off as internal representations trivializes whatever gain representation-talk is supposed to contribute to our understanding of nonsymbolic analog processing (Stufflebeam, 1995). It also immunizes representationalism from being falsified. So much the worse for representation.
Representation-talk is wrought with controversy. This is so, in part, because cognitive scientists posit representations while remaining ambivalent, at the very least, about the ontological problems associated with the practice. Also, it is far from obvious that everything that gets called a representation merits the name, much less that such things are internal representations. Resolving these related tensions requires much in the way of reexamination, including asking such questions as:
 1. Why should representation-laden computational descriptions qualify as mechanistic explanations?
 2. To what extent are internal representations artifacts of the interpretation we put on cognitive processing?
 3. To what extent do our commonsense intuitions about vision predispose us to find representations in perceptual processing, even though representation-talk seems appropriate only when the system needs to keep track of external objects that are not immediately present?
4. If any internal pattern of activation counts as a symbol (or an internal representation), what possible empirical evidence would count against the notion that all intelligent systems operate over symbols (or internal representations)?
5. Is there any level of complexity at which one would not posit internal representations to explain how a system works? If there is, why are mechanistic explanations of the simplest biological processes representation-laden?
6. How much computational labour do biological intelligent systems off-load to their environment, thus minimizing the need for internal representations?
Aside from sensitizing ourselves to the unconsidered use of representation-talk, another result is that we can be full-blooded computationalists without committing ourselves to the view that the brain processes information in the same way as do our representation-laden computer simulations. Where the ontology of biological intelligent systems is concerned, representation-related conservatism is a small price to pay for a commitment to naturalism, hallowed be its name.
Cognition requires the flexible coupling of perception and action. Whether direct or complex, this coupling depends on representing information and operating upon it. Thus, representation and its partner, processing, are the most fundamental ideas in cognitive science. Representations are the bundles of information on which processes operate. Cognitive processes such as perception and attention encode information from our perceptions of the world, thus creating or changing our representations. Processes of reasoning and decision making operate on representations to form new beliefs and to specify particular actions. Process refers to the dynamic use of information; representation refers to the information available for use. Loosely speaking, representations include the ideas, sights, images, and beliefs that fill our thoughts, and also the sensations and dispositions which may fall outside our awareness. Because representation is such a central concept in cognitive science, the term is used in a number of related senses, and more specialized uses are introduced as the need arises.
We have many intuitions about the information that is part of our own thinking or that is needed for the operation of an artificial system, and these intuitions are often a valuable starting point and source of hypotheses about representation. However, it is also frequently the case that our intuitions are incorrect or lacking altogether. This leaves a large set of problems regarding representation open for study, and cognitive scientists investigate many of them. What are the representational components of visual perception? What representations does an infant have to aid initial language learning? What representations will allow a computer system to diagnose blood diseases or a robot to navigate in unfamiliar territory? Different research goals emphasized by different disciplines within the cognitive sciences motivate different types of questions about representation: what people use, what a computer application needs, or what the nature of logic, language, or imagery might be.
Sometimes it is useful to separate questions about representation from those about processing. Consider a psychological example. An air traffic controller might err because of an incorrect representation of critical information about loss of altitude, or because of a processing slip due to attentional overload at the critical moment; identifying which was the case might be important, both theoretically and practically. The difference between representation and processing is often a useful contrast.
The most fundamental contrast in understanding representation, however, is the contrast between the representation and the thing represented. All representation systems involve a relation between a represented world and a representing world (Palmer, 1978). A represented world provides the content that the representations are about, and a representing world contains the representations that carry content from the represented world. Hence, intentionality is an important characteristic of cognition. It is useful to think of cognitive states as involving relations to intentional objects, even though the notion of an intentional object raises deep questions in philosophical logic. It is unclear whether all mental life involves intentionality, whether there are raw feels. Certainly, many kinds of feelings involve intentionality: emotions, for example, and bodily feelings. Knowledge and perception have intentional content: appreciation of this fact undermines the standard sense-datum argument and helps to avoid mistakes in studying imagery. Understanding the intentionality of language, pictures, and other symbols and representations requires a distinction between using symbols to communicate ideas and using symbols to calculate or think with. The intentionality of symbols used in communication may be derivative of the original intentionality of symbols used in thought and calculation. However, it is controversial whether the mere use of symbols in the right way is enough to give them original intentionality.
Our mental representation of some event does not contain the same information as the event itself. This difference shows up when two people recall the same conversation and discover that their memories are very different; of course, if each mental representation had the same information as the event itself, then two mental representations of a given event would be the same. Even so, the simplest percept is not the same as the stimulus which triggered it. Our perception selects, organizes, and sometimes distorts information from the perceived world. The perception of one individual differs from that of another, and differences across species are even greater.
Mental representations, then, are the internal systems of information used in perception, language, reasoning, problem solving, and other cognitive activities. Mental representations cannot be observed directly; their nature is inferred from observing the information to which a person is sensitive and the distinctions a person uses. As with external representations, there may be different kinds of mental representation systems, such as kinesthetic, linguistic, and visual. What is the represented world when the representing world is mental representation? Most simply, mental representations represent information about the external world - the perception of a face or the memory of a conversation. Further, some of these external things are themselves representations: photos, textbooks, menus, and so forth. In addition, mental representations can be about internally generated information, such as remembering a past thought or considering a newly generated idea or goal. Something is a mental representation because of its role in a person's (or animal's) cognitive system, not because it is about one thing versus another. (Some researchers, perhaps following Piaget, restrict the term mental representation to re-presentations of information from long-term memory, unavailable from perception, but this restricted use is not the dominant one.)
Theoretical representations, arrived at by reasoning from evidence, are part of a theory about something. They provide an abstract model of the target domain, be it the movement of beach sand, economic growth, or human cognition. My theory of perception might claim that people represent rectangles in terms of size and shape; my theory about stereotyping might claim that non-group members represent the social group African-Americans with an average of media presentations; my theory about decision making might claim that people represent choices in terms of worst envisionable outcomes. Representations in a theory of cognition often have two layers of correspondence. First, the representations in the theory are taken to correspond to the mental representations in people's minds: that is, the represented world of the theory. If the theory is a good one, it will represent more of the distinctions that are actually important to human cognition and will not introduce distinctions which do not matter. Second, these theoretical representations of mental representations indirectly correspond to things in the world, such as an actual rectangular structure.
By the early 1980s, certain kinds of difficulties were arising quite persistently and quite systematically within classicism. Examination of these difficulties makes it seem likely that they are not mere temporary setbacks but difficulties in principle, stemming from fundamental assumptions of the classical framework. The difficulties centred largely around what has come to be called the frame problem. In its original form, the frame problem was concerned with the task of updating one's system of beliefs in light of newly acquired information. If you learn that Mary has left the room, you will stop believing that Mary is in the room, and also stop believing, for example, that someone is sitting on the sofa and that there are four people in the room. You will also make some obvious inferences from the new information: for example, that the clothes Mary was wearing and the package she was carrying are no longer in the room. But most of your beliefs will not be affected by the new information. Human beings adjust their beliefs in response to new information so naturally that it is surprising to find that it is a problem. But it has proved quite difficult for classical cognitive science.
For a belief system of any size, obviously, it is not possible to examine each of the system's beliefs to see if it needs to be changed. Thus, Jerry Fodor (1983) describes the frame problem as 'the problem of putting a frame around the set of beliefs that may need to be revised in light of specific newly available information'. Seen this way, the problem is fundamentally one of relevance: to provide an effective, general procedure that will determine the beliefs to which any particular new belief is at all relevant. Those are the beliefs that get framed. Which of these relevant old beliefs actually need to be revised in a given case is then a further question.
There are several other cognitive activities that pose similar problems of relevance: belief fixation (arriving at a new belief on the basis of diverse and perhaps conflicting evidence), retrieving from memory information that is relevant to solving a current problem or carrying out a current task, and forward-looking tasks such as deciding what to do next, deciding what is morally permissible or obligatory, and making plans.
Apparently, the classical approach in all these areas must be, as Fodor suggests, to attempt to put a frame around what is relevant: that is, to try to introduce rules which determine, for any given item of information, what is relevant to that item and what is not. Call such solutions to problems of relevance 'frame solutions'.
Frame solutions appear to be doomed to failure. Human cognitive systems are open-ended. There is no limit to the things a human being can represent, and anything one can represent is potentially relevant to anything else one can represent. Relevance depends upon the question, topic, or problem at hand - in a word, upon context. For virtually any pair of items of information you pick, there will be some context in which one is relevant to the other. (It has been suggested that the price of tea in India is not relevant to the question of whether Fred has had breakfast by 8:30 am. The obvious reply is that it is relevant if Fred happens to be heavily invested in Indian tea and the market has just fallen savagely (Copeland, 1993).)
Our suggestion, then, is that there are no such relevance frames in human cognition. But what other kind of solution is possible within the classical framework? Cognitive science lacks the slightest clue as to how representation-level rules could update memory appropriately or find relevant information efficiently for open-ended belief systems of the kind possessed by humans. Indeed, it seems entirely likely that it cannot be done by systems of rules at all. As Fodor (one of the staunchest defenders of classicism) has written:
The problem . . . is to get the structure of the entire belief system to bear on individual occasions of belief fixation. To put it bluntly, we have no computational formalism that shows us how to do this, and we have no idea how such a formalism might be developed . . . In this respect, cognitive science hasn't even started: We are literally no further advanced than we were in the darkest days of behaviourism (Fodor, 1983).
The reemergence of connectionism in the 1980s was in large part a response to the problems in classical cognitive science. As problems persisted, many researchers looked elsewhere for a better prospect of positive results, and the only other game in town was parallel distributed processing - connectionism. But this raises a fundamental question that has received surprisingly little discussion: Does connectionism have features (fundamentally different from those of classicism) that suggest it can make progress, not just on other problems, but on the very problems that slowed progress in classical cognitive science?
Classical systems, by their very nature, involve both representation-level rule execution and representations with language-like syntactic structure. Thus, syntactic structure and cognitive-level rules are two places to look for fundamental differences between connectionism and classicism.
Certain kinds of rules are very prominent in connectionist theory, but they are not representation-level rules. Activation updating within individual nodes and local activation passing from one node to another occur in accordance with rules. (In current connectionist modelling, these are programmable rules. This is why connectionist networks can be simulated with standard computers, as they are in virtually all connectionist modelling. But it is not part of connectionist theory that node-level rules must be programmable.) However, the processing that takes place locally between nodes and within individual nodes is not in general representational. Not every local node activation in a network model represents even atomic content, and in some models the activation of a single node never has representational content - all representations, even the most basic or atomic, consist of activation patterns over a whole set of nodes. Thus, the fact that individual nodes are rule-governed leaves open the question of whether the processes that representations undergo in connectionist models must conform to rules.
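A minimal sketch may make the distinction concrete. The update rule, weights, and logistic squashing function below are illustrative assumptions, not drawn from any particular model; the point is that such rules govern individual nodes and local activation passing while saying nothing about what, if anything, is represented:

```python
import math

def activation_update(inputs, weights, bias=0.0):
    """Node-level rule: a weighted sum of incoming activations passed
    through a logistic squashing function. This is a sub-representational
    rule; it is silent on whether the node's activation represents anything."""
    net = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

def step(activations, weight_matrix):
    """Local activation passing: each node updates from the current
    activations of the nodes connected to it."""
    return [activation_update(activations, row) for row in weight_matrix]
```

On this picture a representation, if there is one, would be a pattern over the whole list returned by `step`, not the value at any single node.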
There is an important sense in which even node-governing rules are absent from connectionist systems: networks do not contain explicitly represented rules of any kind. It is sometimes thought that the absence of explicit rules constitutes a watershed difference between connectionism and classicism. But this is a mistake. The rules posited by classicism can be hard-wired into a computational system rather than being encoded as representations. (Indeed, at least some rules executed by a classical computational system must be hard-wired. The node-governing activation-update rules of a connectionist network are analogous to the hard-wired rules of classical systems.)
It is more common to focus on lack of syntactic structure as an alleged difference between connectionism and classicism (Churchland, 1989, 1995; also, as a deficiency, Fodor and Pylyshyn, 1988). Such authors claim that the activation vectors that constitute representations in connectionist systems lack syntactic structure. (A vector is essentially an ordered n-tuple of items; an activation vector is an ordered n-tuple of activation values of specific nodes in a neural network.) This means that the processing of representations in connectionist systems is fundamentally different from the largely syntax-driven processing of representations in classical systems. These writers do not raise the question of whether connectionist processing conforms to programmable rules; implicitly, at least, they evidently suppose that it does. But they would suppose that the rules at work in connectionist systems describe processing as effecting vector-to-vector transformations, and that such transformations conform to rules that are sensitive to the vectorial structure of the representations. This approach, which we call nonsentential computationalism, repudiates a fundamental assumption of classicism.
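The vectorial (as opposed to syntactic) sensitivity at issue can be illustrated with a toy transformation; the particular vector and weight matrix here are arbitrary placeholders:

```python
def transform(vector, matrix):
    """Vector-to-vector transformation: the rule is sensitive only to the
    positions of activation values (the vectorial structure), not to any
    sentence-like constituent structure within the representation."""
    return tuple(sum(w * v for w, v in zip(row, vector)) for row in matrix)

# An activation vector is an ordered n-tuple of node activation values.
activations = (0.2, 0.9, 0.4)
weights = [(1.0, 0.0, 0.0),   # first output node reads only node 1
           (0.0, 0.5, 0.5)]   # second output node averages nodes 2 and 3
```

Nothing in `transform` cares whether `activations` encodes a proposition; it operates on positions in the tuple alone.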
Nonsentential computationalism is not obviously a correct interpretation of all extant connectionist models. On the contrary, there are certain models that are naturally interpreted as involving both representations that have syntactic structure and processing that is sensitive to that structure (e.g., Jordan Pollack, 1990, and Paul Smolensky, 1990; discussed in Horgan and Tienson, 1996). Nor is nonsentential computationalism obviously the most natural or most attractive foundational framework for connectionist cognitive science. One serious reason for doubt is that nonsentential computationalism in effect offers just a seriously limited variant of classicism. It is a variant because it continues to hold that cognition is implemented by processes that conform to programmable rules (so it can be no more powerful than classical cognitive science). It is limited because it eschews an extremely powerful way of introducing semantic coherence into the computational manipulation of representations: the syntactic encoding of propositional information.
One favouring nonsentential computationalism might be expected to reply that connectionist models get by without any explicitly stored memories, with lots of information implicit in the weights, and that networks are not programmed. However, to the extent that connectionist processing conforms to representation-level rules, we could get these same features in a classical system in which all the rules are hard-wired rather than explicitly represented, and in which lots of information is implicitly accommodated in the (hard-wired) rules rather than being explicitly stored in memory.
But is it possible for a connectionist system that employs representations to fail to conform to rules that refer to those representations? Indeed it is. In the first place, it is not necessary for the temporal evolution of a connectionist network to be tractably computable. The natural mathematical framework for describing networks is the theory of dynamical systems (Horgan and Tienson, 1996), and if the temporal evolution of a network is not tractably computable, there is no reason to believe that the cognitive evolution of the cognitive system which the network realizes will be tractably computable through representation-level rules.
But in the second place, it is important to understand that a connectionist model may fail to conform to representation-manipulation rules even if it does conform to sub-representational programmable rules that govern individual nodes and local inter-node transactions - as most current connectionist models do: the networks are simulated on standard computers. As a prelude to explaining why, we begin with a preliminary point that is important and not widely recognized. It is possible for a connectionist system to be nondeterministic at the representational level even while it is deterministic at the sub-representational level of node activation updating and local inter-node activation passing. This is because the same connectionist representation can be realized by many different sub-representational states of the system, and the representation-level outcome of processing can depend upon the specific way that a representational state is realized sub-representationally.
One source of multiple realizability of representations is different degrees of activation of nodes. The realization of a particular cognitive state, say 'A', might consist in each of a given set of nodes being active to at least a certain degree, say 0.8. Then some realizations of this cognitive state will have node 'N' more highly activated than node 'M'; others will have node 'M' more highly activated. It can then happen that from some activation states that realize 'A' the system goes into activation states that realize cognitive state 'B', while from others it goes into activation states that realize a different cognitive state 'C'. So there will be no way of knowing the system's cognitive-level outcome just from knowing its initial total cognitive state. Being nondeterministic at the cognitive level can be a valuable asset in many kinds of competitive activities, such as playing poker and fleeing for one's life. (Note that no randomizing dice-throw rules are involved at any level of description, either representational or sub-representational, as would be required to make a classical system nondeterministic.)
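The example just given can be sketched directly. Everything here is a toy assumption (two nodes, the 0.8 threshold, a winner-take-all update), but it shows a deterministic sub-representational rule yielding nondeterminism at the cognitive level:

```python
def realizes_A(activations, threshold=0.8):
    """Cognitive state 'A' is realized whenever every node in the set is
    active to at least the threshold, so many distinct activation states
    count as realizations of the same cognitive state."""
    return all(a >= threshold for a in activations)

def next_cognitive_state(activations):
    """Deterministic sub-representational update (illustrative): the
    outcome depends on which node happens to be more highly activated."""
    n, m = activations
    return "B" if n > m else "C"

r1 = (0.95, 0.85)   # one realization of 'A'
r2 = (0.82, 0.90)   # another realization of 'A'
```

Both `r1` and `r2` realize 'A', yet they lead to different cognitive states, so the A-level transition is nondeterministic even though no randomness is involved anywhere.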
This preliminary point establishes an important moral: namely, that key features of a connectionist system at the sub-representational level of description need not transmit upward to higher levels of description, because inter-level realization relations can work in ways that block such transmission. Likewise, tractable computability of state transitions can fail to transmit upward in connectionist systems, so that a system can fail to conform to programmable, representation-level rules even though it conforms to programmable sub-representational rules.
Given that the transitions of the underlying network are tractably computable, one might think that the cognitive transitions realized in the network could be computed like this. Starting from a cognitive state: (1) select an activation state that realizes this cognitive state; (2) compute the network's transitions from this activation state through subsequent activation states; and (3) for each subsequent activation state, compute the cognitive state (if any) realized by that state.
Although the assumption that the transitions of the network are tractably computable guarantees step (2), there is no guarantee that step (3) - or even step (1) - will be possible. The function from activation states to cognitive states need not be tractably computable. It is possible, for example, that the simplest, most compact way to specify that function might be an enormous (possibly infinite) list that pairs specific total activation states with specific total cognitive states - a list far too long to be written using all the matter in the universe, let alone to constitute a set of programmable rules.
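The three-step procedure, and the places it can break down, can be put schematically. The `realize` and `decode` functions below are hypothetical stand-ins for the realizing relation; the point in the text is that for a natural cognitive system they may have no compact, rule-like specification at all:

```python
def cognitive_trajectory(state, realize, network_step, decode, steps=3):
    # Step (1): select an activation state realizing the cognitive state.
    # May be intractable: the realizing relation need not have a compact rule.
    acts = realize(state)
    trajectory = []
    for _ in range(steps):
        # Step (2): compute the network's next activation state.
        # Tractable by assumption.
        acts = network_step(acts)
        # Step (3): decode the cognitive state (if any) realized by it.
        # Again possibly intractable, for the same reason as step (1).
        trajectory.append(decode(acts))
    return trajectory

# Toy stand-ins: here the realizing relation happens to be a tiny table.
realize = {"A": (1.0, 0.0)}.get
decode = {(1.0, 0.0): "A", (0.0, 1.0): "B"}.get
swap = lambda acts: (acts[1], acts[0])
```

In the toy case the tables are tiny, so everything is tractable; the text's worry is precisely that for open-ended cognizers the analogues of `realize` and `decode` may only be specifiable by lists too long to write down.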
If the cognitive transitions of a network are not computable in the way just suggested, they need not be tractably computable in any other way either. Thus, one should not infer from the fact that a network's activation-state transitions are tractably computable that it implements a cognitive transition function that is tractably computable, nor that there is an algorithm for computing its cognitive transitions.
The possibility that the realizing function may not be tractably computable is not a mere abstract possibility. Certain connectionist learning algorithms allow models to select their own representations (Pollack, 1990; Berg, 1992; discussed in Horgan and Tienson, 1996). Representations are moulded along with weights as learning progresses; this allows for more efficient schemes of representation, with weights and representations ending up made for each other. It is easy to suppose that complex cognitive systems that worked in this way (as natural cognitive systems apparently do) would have very complex, rich, subtle realizing relations that are not tractably computable.
Given that it is possible for a connectionist cognitive system to fail to conform to programmable representation-level rules, several questions arise. First, if cognitive transitions are not effected by executing such rules, how are they brought about? Second, are there reasons to think that it is desirable for a system not to be rule-describable? Third, if a system does not conform to rules at the cognitive level, can it be coherent enough and systematic enough to be called a cognitive system at all?
A very natural way to think about cognitive transitions in connectionist systems is in terms of content-appropriate cognitive forces. Beliefs and desires work together to generate certain forces that tend to push the cognitive system toward output states that would result in particular actions. But those forces can be overcome by stronger forces pushing in different, incompatible directions. A single clue in a mystery might point to the guilt of some suspects and at the same time tend to clear certain other suspects to varying degrees. Thinking of the clue produces forces that tend to activate some possible beliefs about whodunit and inhibit others. The interaction of cognitive forces in a cognitive system can be very complex. Forces can compete, in that they tend toward incompatible cognitive states, or they can cooperate, tending toward the same or similar outcomes. There can be a large number of competing and cooperating forces at work in a system at once. Connectionist models that perform multiple simultaneous soft-constraint satisfaction provide suggestive simple models of the interaction of cognitive forces.
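A toy soft-constraint network illustrates competing and cooperating forces; the weights and the whodunit framing are invented for illustration. Two hypothesis units inhibit each other (competition), while a clue unit supports one of them (cooperation):

```python
def settle(activations, weights, steps=50, rate=0.1):
    """Multiple simultaneous soft-constraint satisfaction: each unit is
    pushed by the weighted activations of the others. Every constraint is
    defeasible, since a stronger combination of forces can override it."""
    a = list(activations)
    n = len(a)
    for _ in range(steps):
        net = [sum(weights[i][j] * a[j] for j in range(n)) for i in range(n)]
        a = [min(1.0, max(0.0, a[i] + rate * net[i])) for i in range(n)]
    return a

w = [[0.0, -1.0, 1.0],   # hypothesis 1: inhibited by h2, excited by the clue
     [-1.0, 0.0, 0.0],   # hypothesis 2: inhibited by h1
     [0.0, 0.0, 0.0]]    # the clue receives no input (effectively clamped)
result = settle([0.5, 0.5, 1.0], w)
```

Starting from an even split, the clue's force tips the competition: hypothesis 1 settles near full activation and hypothesis 2 is suppressed, without any rule anticipating this particular interaction in advance.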
In a connectionist network the interaction of cognitive forces is physically implemented by spreading activation. But when a representation is realized by the activation of a large number of nodes, the cognitive forces generated by the overall representation are distinct from the local physical forces produced by the individual nodes implementing the representation. (The individual nodes need not be similar to one another in the kinds of weighted connections they have to other nodes, so they might have different causal roles from one another.)
The possible value of such a picture for dealing with relevance phenomena - phenomena associated with the frame problem in classicism - should be evident. Any two cognitive states that put out forces tending to activate other cognitive states will be capable of interacting causally when co-present in a cognitive system. And any two or more states that are relevant to the same problem will interact with respect to that problem - at least to the extent of tending to move the system in the direction of conflicting or compatible solutions. Thus, certain kinds of content-relevant interaction are automatic for systems that have states with content-relevant cognitive forces. Potential interactions do not have to be anticipated in advance in terms of form or content - a key difference from classical systems, in which the operative representation-level rules must determine all such outcomes. Furthermore, forces interact with one another in a manner appropriate, not only to the contents of all the cognitive states currently activated in the system, but also to much non-activated information that is implicit in the system's structure - in the weights, as connectionists like to say.
In natural cognizers, there are many systematic patterns by which cognitive forces are generated (many of which correspond to the generalizations of commonsense psychology). Appropriately related beliefs and desires conspire to produce forces that tend toward certain choices. (It is arguable that this pattern depends upon syntactic or syntax-like structure of the belief and desire states.) Repeated observation of a pattern of events results in cognitive forces that tend to produce expectations of similar patterns. In such cases there is a causal tendency to make such choices, have such expectations, and so forth. But these are defeasible causal tendencies; that is, it is always possible that the tendency will be overridden by a stronger force or combination of forces. Thus, although there are generalizations about the cognitive transitions that correspond to these patterns of cognitive forces, there are no programmable rules corresponding to these generalizations, because they have exceptions.
Furthermore, these generalizations cannot be refined into programmable rules by specifying the possible exceptions. Because of the potential relevance of anything to anything, it is not possible to spell out all of the exceptions in a machine-determinable way (Horgan and Tienson, 1996). The defeasibility of causal tendencies poses a deep problem for classical cognitive science, since all potential exceptions need to be specified in just such a way: they need to be explicitly covered, for instance, by unless-clauses within representation-level rules. In the cognitive-forces picture, nothing has to be done to deal with exceptions: they arise naturally as a feature of the architecture.
Although cognitive state transitions do not conform to representation-level rules, according to the connectionist-inspired conception of cognition that we are suggesting, systematic patterns among cognitive processes (such as those mentioned) do conform to psychological laws of a certain kind. Soft psychological laws have ineliminable ceteris paribus ('all else equal') clauses, allowing for exceptions that are not specified in the laws themselves. It is important that the exceptions allowed by such laws - unlike the virtually endless range of exceptions resulting from factors like physical breakdown (e.g., having a stroke) or external physical interference - are not mistakes or errors, but the result of the proper functioning of the cognitive system. We believe that soft laws characterize the kind of consistency and systematicity that natural cognizers actually have. They support the explanation and prediction characteristic of cognitive psychology (Horgan and Tienson, 1996).
It remains to be seen whether this nonclassical view of the mind will gain empirical support from ongoing work in cognitive science. Meanwhile, however, it is well to keep in mind that connectionist modelling does not presuppose or imply that human cognition conforms to programmable representation-level rules, and that there are serious reasons to believe that human cognitive capacities essentially outstrip the capacities of systems that execute representation-level rules.
What brain mechanisms might underlie the dynamics of perceptual processing? The way piecemeal object structures get coordinated resembles a process of mutual constraint satisfaction. The process must be nonlinear, to allow for correction of components once they are given a role within the configuration, with dynamically evolving substructures that can be corrected as object structure emerges. This would imply a role for the primary visual cortex as a sketch pad of perception.
Hologenetic development, however, appears not to be limited to the perceptual time scale. Not only is hologenesis found in microdevelopment, but it is also observed in the formation of perceptual categories and in perceptual pattern learning. Similar phenomena occur at the scale of perceptual development and in the learning of syntactic structure (as in the work on language acquisition by Elissa Newport and Jeff Elman). The growth of object structure through a process of self-organization among its components could therefore be proposed as a process for perceptual dynamics across a variety of time scales.
The brain principle we are looking for, therefore, must encompass both short-term and long-term processing loops. Christoph von der Malsburg, Wolf Singer, and other theorists have proposed the synchronization of oscillatory activity as a mechanism for selective component binding. Our brains process visual data in segregated, specialized cortical areas. As is commonly remarked, the brain processes the 'what' and the 'where' of its environment in separate, distal locations. Even within the 'what' information that the brain computes, it responds to edges, colours, and movements using different neuronal pathways. Moreover, so far as we can tell, there are no true association areas in our cortices, no convergence zones where information is pooled and united: there are no central neural areas dedicated to information exchange. Still, the visual features that we extract separately have to come together in some way, since our experiences are of these features united into a single whole. The binding problem is explaining how our brains do that, given the distributed nature of our visual processing. How do our minds know to join the perception of a shape with the perception of its colour to give us the single, unified experience of a coloured object?
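The synchronization proposal can be illustrated with a Kuramoto-style phase model, a common idealization of coupled oscillators (offered here as an assumption, not as these theorists' own equations). Two units coding features of the same object are coupled, and their oscillation phases converge, which is the proposed signature of binding:

```python
import math

def phase_step(phases, coupling, dt=0.05, omega=1.0):
    """One Euler step of Kuramoto-style dynamics: each oscillator drifts
    at its natural frequency and is pulled toward the phases of the
    units it is coupled to."""
    n = len(phases)
    return [phases[i] + dt * (omega + sum(coupling[i][j] *
            math.sin(phases[j] - phases[i]) for j in range(n)))
            for i in range(n)]

# Two units coding features (say, shape and colour) of one object,
# strongly coupled; units coding features of other objects would not be.
phases = [0.0, 1.0]
coupling = [[0.0, 2.0], [2.0, 0.0]]
for _ in range(200):
    phases = phase_step(phases, coupling)
```

After settling, the two units fire in phase; on the binding-by-synchrony proposal, downstream areas could read this temporal coincidence as the mark that the features belong to one object.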
This problem has a venerable history in philosophy, first appearing in its modern guise in David Hume, who, following John Locke, speculated on the rules that our minds must follow in uniting simple impressions into more complex ideas. He recognized that the rules of association alone could not be enough: incoming stimuli are always changing, yet we manage to experience ideas as constant across time. Somehow our faculties of imagination step in and fill the gap between stimulus impressions and later memories and ideas. Immanuel Kant, too, recognized that mere spatial contiguity and temporal conjunction would not unite certain incoming stimuli into bound impressions to the exclusion of others. Both Hume and Kant concluded that our minds must add something to our perceptions so that our experiences are of a three-dimensional, object-filled world.
This history - and its solution - recapitulates itself in contemporary cognitive science. Like Hume and Kant, cognitive scientists recognize that the story of visual perception told thus far is incomplete. The brain must rely on something besides physical connectedness among cortical areas to generate united percepts. But what? Association, even in the head, is not enough. What would be?
It may therefore seem that a systems approach to perception could provide a better explanation of perceptual phenomena. But the systems approach is not without its own problems. From a systems point of view, it may appear something of a miracle that perception functions so well in situations where the conditions require us to go beyond the information given, such as limited-vision conditions or conditions where the goal of the action lies beyond the horizon of visual stimulation. The constructivist approach explains this by the overall tendency of perception to make sense of a situation. Pictures and films exploit this tendency of perception, including its susceptibility to being misled by expectation - as in seeing a bank robbery where, in fact, there is only a film set of a bank robbery.
Cognitive science in its early period tended to limit its focus to events presumed to be taking place within the mind or brain. While all researchers would acknowledge that minds exist within bodies and that these bodies have to deal with the external world (both physical and social), most researchers assumed that they could disregard these considerations when studying cognition. Research focussed on the processing of information inside the head of the person. In order for this to happen, information had to be represented mentally: cognitive processes could then operate on representations. Subsequently, represented information had to be translated into commands to the motor system, but this took place after cognitive processing as such was finished. Jerry Fodor (1980) articulated the theoretical justification for ignoring both the external world and the body in cognitive science, labelling the resulting framework 'methodological solipsism', but opposition was already gathering in a number of quarters.
 One of the major inspirations for challenging methodological solipsism was the work of J.J. Gibson, a psychologist working at Cornell contemporaneously with the early period of cognitive science, but whose impact fell elsewhere. Gibson studied visual perception, but instead of concentrating on the information processing going on within individuals as they see, he examined the information that was available to the organism from its environment. His major contention was that there was much more information available in the light than psychologists recognized, and that organisms had only to pick up this information (Gibson, 1966). They did not need to construct the visual world through a process of inference or hypothesis formation. He argued, for example, that people do not need to construct a three-dimensional representation of the world; rather, there is information specifying the three-dimensional nature of the visual scene in the gradient of texture density, in changes in the occlusion of objects as the perceiver moves about in the environment, and so forth. One of Gibson’s major contentions was that the perceiver must be understood as an active agent using its own motion to sample information about the environment. Gibson also stressed that not all organisms pick up the same information from the environment, but rather resonate with information that is coordinated with their potential for action. Accordingly, he introduced the notion of an affordance: different objects afford different actions to different agents (e.g., a baseball affords throwing to us, but not to frogs), and it is these affordances which organisms are attuned to pick up.
 Nevertheless, the immediacy of these experiences makes it easy to take perception for granted. Yet, perception requires the flexible coordination of complex neuro-anatomical resources. The eye, the optic nerve, and also a significant portion of the brain are involved in vision. We may further consider the eye muscles that are used for focussing and targeting of the gaze to be part of the visual system, as well as the muscles of the neck and shoulders with which postural adjustments are made.
 Self-organizing processes coordinate all these resources so that the system as a whole performs its function in the relevant circumstances; these processes also permit rapid switches in response to minimal changes in circumstance, as long as these are important enough. Thus, perception may appear immediate, but it is achieved through a variety of adaptational, learning, developmental, and evolutionary processes, and these should form an essential part of the description of the system. The quest for such a description constitutes the systems approach to perception.
 Perception starts from a pattern of external physical stimulation (e.g., the photons that reach the eye) and is completed when this pattern is matched to an internally kept set of beliefs or representations of the world. A conceptual distinction is therefore needed between sensory processing and an inferential reasoning stage, which could be called perceptual in a more narrow sense of the word.
 Sensory processes are involved in measuring the physical stimulation. Employing linear, semi-linear, or threshold functions, they faithfully represent certain relevant aspects of physical signals, such as light intensity and hue or sound intensity and pitch. Physical stimulation will arrive in a particular spatiotemporal pattern. The sensory process, however, is indifferent to this pattern. For instance, suppose a detector measures the light intensity in a certain area on the retina. This patch of light will be registered as the same sensory feature regardless of whether it is part of a triangle, a square, or just a random configuration. Further sensory processing will combine the output of earlier detectors into higher-order ones in order to identify features of increasing complexity. Thus, there will be detectors for features such as contours, line elements, and curvature. Nevertheless, in sensory processing the identification of each of these features will still not be influenced by the overall pattern of which it is a component.
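The point about pattern-indifference can be sketched in a few lines of code. This is only an illustrative toy (the function name, threshold value, and intensity figures are all invented for the example), not a model from the literature: a threshold detector reports the same feature for a patch of light no matter what global figure the patch belongs to.

```python
def threshold_detector(intensity, threshold=0.5):
    """Fires (returns 1) when local light intensity exceeds a fixed
    threshold, regardless of the larger pattern the patch belongs to."""
    return 1 if intensity > threshold else 0

# The same patch intensity, embedded in different global patterns,
# yields the same sensory response.
patch = 0.8
triangle_context = [0.8, 0.1, 0.8, 0.8]  # patch as part of a "triangle"
random_context = [0.2, 0.8, 0.1, 0.3]    # patch in a random configuration

assert threshold_detector(patch) == 1    # identical in both contexts
```

The detector sees only the local intensity; the contexts play no role in its output, which is exactly the indifference to spatiotemporal pattern described above.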
 In the constructivist account, sensory processes provide only the lines and angles of intersection: Perception tells you what object you are looking at. Perceptual processes operate on the sensory features to construct a perceptual representation. Unlike sensory features, perceptual representations do not depend faithfully on stimulation: Ambiguous patterns such as the Necker cube have two rival interpretations, referring to alternative views that are mutually exclusive. The existence of alternative responses to the same pattern of sensory stimulation requires two alternative perceptual representations for that pattern.
 Different patterns of sensory stimulation may also elicit the same perceptual response. In particular, it is important that the perceiver recognize an object as the same under different orientations. An elephant is an elephant whether one is looking at the front, back, or side. For this reason, perceptual representations are often assumed to have a viewpoint-independent frame of reference. Even in non-stable circumstances, such representations will provide a stable basis for further evaluation against the background of what we know about the world.
 The major problem from the constructivist point of view is how to get from objects and events in the world to perceptions of them. The fact that sensory processes, being indifferent to object structure and meaning, mediate between the world and experience imposes severe restrictions on perceptual models. By contrast, the need for mediation is denied by a systems account: on this view, perceptual systems operate and have evolved in close interaction with the world, so the perceptual system fits, like lock and key, with the patterns of the environment. A crucial distinction between systems and constructivist approaches to perception concerns the construal of sensory processes.
 The notion of sensory processes has its historical root in the concept of sensation. A sensation is the phenomenal awareness of a primary quality (the brightness and hue of a colour, the loudness and pitch of a tone). Phenomenal awareness means that the perceiver experiences what it is like to sense the colour or the tone; primary refers to the fact that these qualities are the operands presupposed in the notion of constructive operations. The concept of sensation has found its justification in classical conceptions of the perceptual process, which may be based on false assumptions. The first question that should be answered is, therefore: Do sensations exist?
 The study of sensation has evolved as a separate domain with its own research methods. Classical psychophysics, which started in nineteenth-century Leipzig with Gustav Theodor Fechner, tries to establish lawful connections between how perceivers judge their experience, on the one hand, and physical quantities, on the other (e.g., brightness as a function of intensity). Logarithmic functions (Fechner, building on Weber) or power functions (S.S. Stevens) of the signal have been proposed to describe sensory magnitude. Fechner’s proposal results from his assumption that just noticeable differences, proportional to physical intensity, are the units of sensation. Measuring them involves subjects detecting a weak signal (a light flash, a sound) or discriminating between two signals. How much are sensations a by-product of judgmental factors? Signal Detection Theory (Green and Swets, 1966) has provided a technique for distinguishing sensory sensitivity from judgmental bias.
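The separation of sensitivity from bias can be made concrete with the standard Signal Detection Theory computation: sensitivity d′ is the difference between the z-transformed hit rate and false-alarm rate, and the criterion c measures response bias. The formulas are standard; the example hit and false-alarm rates below are invented for illustration.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Response bias: positive values mean a conservative observer."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(false_alarm_rate))

# Two observers with identical sensory sensitivity but different
# judgmental biases: one says "yes" liberally, the other rarely.
liberal = d_prime(0.90, 0.30)
conservative = d_prime(0.70, 0.10)
```

Although the raw hit rates differ sharply (90% versus 70%), the two observers come out with the same d′; all of the difference is absorbed by the criterion. This is exactly the decomposition the technique provides.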
 That said, the neurosciences have provided a classical description of the visual system which is in good agreement with this notion of sensory processing, and which is therefore frequently discussed as the view of the neurosciences. On this view the visual system is a feed-forward processing hierarchy which exhibits convergence. Rods and cones, the receptor cells that register light intensity at neighbouring positions on the retina, combine their signals to generate on-off patterns in ganglion cells. These are projected through relay stations in the thalamus, the lateral geniculate nuclei, onto cells of the visual cortex. These cortical cells respond most strongly to contours or line segments in a specific orientation, as a result of the combined projection of overlapping on-off lateral geniculate cells. Sensory processing, therefore, seems to combine physical signals into features of increasing complexity (Hubel and Wiesel, 1962), but it is still not without global information. The global pattern is not represented in the individual cells of the cortex, but it is available for further processing because each retina projects to the visual cortex in a systematic manner that respects the topographical organization of the retina.
 The classical view is an oversimplification from the perspective of more recent developments in the neurosciences (Zeki and Shipp, 1988). Besides convergence, divergence occurs in the visual pathways. From the earliest stage, the retina, a division into specialized pathways can be observed. For instance, two routes to the lateral geniculate nuclei with different cortical projections can be distinguished; one of these operates in a slow and sustained manner and has high spatial resolution but restricted detection sensitivity.
 Modern neuroscience in general suggests a division of labour in the brain into different sensory modules, each specialized for a certain modality (colour, contrast, odour, temperature, pitch). Many important attributes of perception, however, are amodal (duration, rhythm, shape, intensity, and spatial extent) or multi-modal (such as being a brush-fire, which involves heat, smell, and glow). So the notion of sensory modularity increases the need for perceptual integration.
 This is still apparently in agreement with the principles of constructivism, which maintains that integration is achieved by processes of a post-sensory, inferential nature. Unimodal perception will, on this account, precede integration across the modalities in development. According to a systems point of view, it is the other way round. Amodal and multi-modal aspects of perception are primary properties, precisely because of the importance of these structures in the environment. The child will therefore start by responding to multi-modal structure, and development is aimed at differentiation.
 David Lewkowicz and his colleagues have, over several years, collected ample evidence that young infants (four months old) perceive inputs in different modalities as equivalent if the overall amount of stimulation is the same. These infants, owing to the immaturity of their nervous system, appear to react to the lowest common denominator of stimulation, which is quantity. Quantity is, therefore, modality-unspecific - that is, not associated with a specific sensory quality or process. Lewkowicz proposes that these early equivalences may form the basis for later, more sophisticated equivalency judgments. For the attributes of time, for instance, infants differentiate according to synchrony first, and this differentiation forms the basis for the subsequent differentiation of responsiveness to duration, rate, and rhythm.
 Research in sensory development thus suggests that perceptual integration is not achieved according to the constructivist picture of sensory processing as feed-forward signal propagation. Rather, the significance of amodal and cross-modal information early in processing suggests that integration between the sensory modalities occurs early in processing. Such a notion of inter-sensory processing is in accordance with a systems account of perception, which emphasizes the role of coordination between the components of the system, rather than their isolated contributions to perception.
 The neurosciences support the notion of inter-sensory perception at all possible levels of description. At the smallest scale, this is realized through interneurons, which provide individual cells within an individual pathway with lateral, mostly inhibitory, connections. Lateral inhibition is useful, for instance, to selectively enhance boundaries in the pattern of sensory stimulation, because identically stimulated neighbours will cancel each other’s activity. This example illustrates that integration of sensory stimulation into a coherent pattern does not wait until sensory processing is completed but begins at the earliest stage. Lateral connections also occur between different sensory modules and may serve to flexibly enhance or reduce the contribution of a sensory module to the process.
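The boundary-enhancing effect of lateral inhibition can be shown with a one-dimensional toy computation. This is a schematic sketch, not a neural model: each unit subtracts a fraction of its neighbours' inputs from its own, and the inhibition strength k is an arbitrary illustrative value.

```python
def lateral_inhibition(signal, k=0.4):
    """Each unit's output is its input minus a fraction k of the mean
    of its two neighbours' inputs (edges use the unit's own value)."""
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - k * (left + right) / 2)
    return out

# A step edge: a uniform dark region followed by a uniform bright one.
edge = [1, 1, 1, 5, 5, 5]
response = lateral_inhibition(edge)
# Identically stimulated interior units largely cancel one another,
# while the units flanking the boundary stand out: a dip on the dark
# side of the edge and a peak on the bright side.
```

Within each uniform region the response is flat and reduced, but at the boundary the dark-side unit is pushed below its neighbours and the bright-side unit above them, which is the selective boundary enhancement described above.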
 In addition to feed-forward and lateral connections, there are also backward connections, which are likely to play an important role in perception - for instance, from the higher visual areas back to the primary visual cortex and from there back to the thalamus. These are in accordance with the downstream operation of semantic information: a pattern code could be mapped downward in the sensory detection system to correct its output. This would make sensation dependent on knowledge fed back into the sensory process, and the effects mentioned of categorization (tomato versus apple) on the perceived colour of a patch, and of word meaning on perceived pitch, suggest just such a dependence. The interactive, inter-sensory character of early processing is in accordance with the notions of self-organization favoured by the systems approach.
 The central problem of constructivism - how to get from isolated sensory features to the representation of integral structure - appears to be a misconceptualization. Isolated sensory features do not seem to exist. The close interactions observed, both within and between the sensory modules, appear more in accordance with the view that the sensory system communicates with the world on the level of patterns than with the view that communication is on the level of isolated signals. On the other hand, perceptual object structure does not appear to have the abstract characteristics that constructivism attributed to it.
 In setting up a systems approach to perception, brain processes cannot be neglected. The problem is to find a general characterization of these processes in accordance with the systems approach; the dynamics of perceptual organization in the brain can be approached from the perspective of self-organization. The idea that the brain is an instrument for stepwise creative synthesis forms the basis for the constructivist approach, which requires that inference processes be posited to explain how the perceiver makes sense of a situation. Alternatively, the principle of hologenesis illustrates that a systems account of these phenomena is possible.
 Nonetheless, before scientists could make claims about the functional organization of the brain, they needed to learn something about its general architecture. At the end of the nineteenth century major advances were made at both the micro and the macro level in understanding the brain. At the micro level the crucial breakthrough was the discovery that nerve tissue is made up of discrete cells - neurons - and that there are tiny gaps between the axons that carry impulses away from one neuron and the dendrites of other neurons that pick up these impulses. In the 1880s Camillo Golgi introduced silver nitrate to stain brain slices for microscopic examination. Silver nitrate had the unusual and useful feature of staining only certain cells in the specimen, thereby making it possible to see individual cells, with their associated axons and dendrites, clearly. Santiago Ramón y Cajal argued that the nervous system was composed of distinct cells (a view that Golgi, however, never accepted). Sir Charles Scott Sherrington then characterized the points of communication at the gaps between neurons as synapses and proposed that this communication was ultimately chemical in nature.
 Processes at the micro level of the neuronal substrate would figure prominently in understanding cognitive processes such as learning (which is widely thought to involve changes at synapses that alter the ability of one neuron to excite or inhibit another), and they became the inspiration for computational modelling using neural networks (an approach which, through the mediation of Donald Hebb, took over the term ‘connectionism’ from earlier, associationist approaches to conceptualizing the brain such as Wernicke’s). A key figure in this development was Warren McCulloch, a neurophysiologist who began his career at the University of Chicago. He collaborated with Walter Pitts, then an 18-year-old logician, in a widely cited 1943 paper that analysed networks of neuron-like units. McCulloch and Pitts showed that these networks could evaluate any compound logical function and claimed that, if supplemented with a tape and a means for altering symbols on the tape, they were equivalent in computing power to a universal Turing machine. The units of the network were intended as simplified model neurons and have been referred to ever since as McCulloch-Pitts neurons. Each unit is a binary device (i.e., it can be in one of two states, on or off) that receives excitatory and inhibitory inputs from other units or from outside the network; the state of a network of these units emerges over a number of cycles. On a given cycle, if a unit receives any inhibitory input, it is blocked from firing. If it receives no inhibitory input, it fires if the sum of its equally weighted excitatory inputs exceeds a specific threshold. A unit with this design is appropriate not only as a model of a simplified neuron but also as a model of an electrical relay - a basic component of a computer - and hence McCulloch-Pitts neurons helped inspire the computer designs of others, including John von Neumann and Marvin Minsky.
McCulloch and Pitts also made a link to logic: The neurons could be associated with propositions, and because of the binary nature of these units, their activation states could be associated with truth values.
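The unit just described can be written out directly: any inhibitory input vetoes firing, and otherwise the unit fires when its equally weighted excitatory inputs reach a threshold. The code below is a minimal sketch of such a unit (the function name and threshold values are illustrative), showing how different thresholds yield different logical functions.

```python
def mcculloch_pitts(excitatory, inhibitory, threshold):
    """A binary McCulloch-Pitts unit: excitatory and inhibitory are
    lists of 0/1 inputs; the unit returns 1 (fires) or 0."""
    if any(inhibitory):          # any inhibitory input blocks firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With threshold 2 the unit computes logical AND of two excitatory
# inputs; with threshold 1 it computes OR. Inhibition overrides both.
AND = lambda a, b: mcculloch_pitts([a, b], [], threshold=2)
OR = lambda a, b: mcculloch_pitts([a, b], [], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(1, 0) == 1 and OR(0, 0) == 0
assert mcculloch_pitts([1, 1], [1], threshold=2) == 0  # inhibitory veto
```

Because such units realize the basic logic gates, networks of them can evaluate compound logical functions, which is the sense in which the units double as models of electrical relays.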
 As attractive as some theorists found the comparison of the brain to a computer at the architectural level, many others moved beyond the logic-gate level of focus and began trying to analyse how nervous systems carry out more complex psychological tasks, such as those of perception. These ambitious researchers included Pitts and McCulloch themselves, who, in a 1947 paper, tackled two such problems: how someone can recognize an object as the same when it appears in different parts of the visual field, and how the superior colliculus is able to transform spatial maps of sensory inputs into motor maps that direct such activities as eye movements. Here they abandoned the earlier paper’s focus on propositional logic in favour of spatial representations and analog computations. A further departure from the earlier paper is an emphasis on networks that rely on statistical order and operate appropriately despite small perturbations. Moreover, as part of their evidence for specific computational models, they compared diagrams of these computations with diagrams of specific neural structures.
 The focus on perception continued in the central parts of Donald Hebb’s 1949 book, ‘The Organization of Behaviour’. The subtitle - stimulus and response, and what occurs in the brain in the interval between them - points to one of the main emphases of Hebb’s analysis: the development of internal structures that mediate between stimulus and response. Hebb sought to overcome the opposition between localizationist approaches and the holism of the Gestalt theorists and of his own mentor, Lashley. The key to his alternative was the notion of neuronal cell assemblies: interconnected, and hence self-reinforcing, sets of neurons which represent and transform information in the brain:
 Any frequently repeated, particular stimulation will lead to the slow development of a ‘cell-assembly’, a diffuse structure comprising cells in the cortex and diencephalon (and also, in the basal ganglia of the cerebrum), capable of acting briefly as a closed system, delivering facilitation to other such systems and usually having a specific motor facilitation. Each assembly action may be aroused by a preceding assembly, by a sensory event, or - normally - by both. The central facilitation from one of these activities on the next is the prototype of ‘attention’. (Hebb, 1949)
Hebb proposed that these assemblies were created by an interaction between cells whereby, every time one cell figured in the firing of another, the connection between them was strengthened: ‘When an axon of a cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.’
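Hebb's rule is often summarized as "cells that fire together wire together", and its core can be sketched in a few lines. This is a deliberately minimal illustration of the passage above, not Hebb's own formalism; the learning rate, initial weight, and function name are all invented for the example.

```python
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the A-to-B connection only when the presynaptic cell
    (A) and the postsynaptic cell (B) are active together."""
    if pre_active and post_active:
        weight += rate
    return weight

w = 0.2
for _ in range(5):                 # five co-activations of A and B
    w = hebbian_update(w, pre_active=True, post_active=True)
# After repeated joint firing, w has grown to about 0.7: A has become
# more efficient, as one of the cells firing B, at exciting B.
```

Note that firing by only one of the two cells leaves the weight unchanged; it is specifically the repeated participation of A in firing B that produces the growth, exactly as in Hebb's formulation.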
 Even so, another kind of advance involved linking different macro-level brain areas with specific cognitive functions. This required overcoming the view, widely shared in the eighteenth century, that the brain, and especially the cerebral cortex, operated holistically, without any localized differentiation of function.
 However, one problem that researchers faced in attempting to localize mental functions in the brain was the lack of any standardized way of designating parts of the brain. The folding of the cortex produces gyri (hills) and sulci (valleys). Anatomists have named some of them and used the most prominent sulci to divide the brain into different lobes: the frontal lobe, parietal lobe, occipital lobe, and temporal lobe. But each lobe itself contains a number of anatomically distinct regions. Using such criteria as responses to various stains and the distribution of cells across cortical layers, a number of researchers at the end of the nineteenth century produced more detailed atlases of the brain. That by Korbinian Brodmann (1909) became the most widely adopted, and his numbering of brain regions is still widely employed today.
 It turns out that a proper understanding of intentionality is crucial to the study of a number of topics in cognitive science, including perception, imagery, and consciousness. The term itself, intentionality, can be misleading in suggesting intentional action - doing something intentionally, with a certain aim or purpose. In cognitive science, the term is used in a different, more technical sense. Intentionality involves reference, or aboutness, or some similar relation to something having what the scholastics of the Middle Ages called intentional inexistence (Brentano, 1874).
 When Mary thinks of George Miller as a cognitive scientist, the intentional object of her thought is George Miller, and its content is that George Miller is a cognitive scientist. She has a mental representation of him as a cognitive scientist. What Mary thinks about has intentional inexistence in the sense that her thoughts may be wrong and she can have thoughts about things that do not even exist. She may think incorrectly that George Miller is a computer scientist, or even that Santa Claus is a computer scientist.
 If you treat intentionality as a relation to an intentional object, you must remember that it is not a real relation in the way that kissing or touching is. A real relation holds between two existing things independently of how they are conceived. When a woman kisses a man and the man she kisses is bald, then the woman kisses a bald man. But Mary may think of a bald man while representing him as hairy. Similarly, Mary can think of someone who does not exist, but she cannot kiss or touch someone who does not exist.
 Looking for something is an example of an intentional activity, in this technical sense of intentional as well as in the more ordinary sense having to do with what you are aiming at. You sometimes look for something that turns out not to exist. Ponce de Leon searched in Florida for the fountain of youth, yet there was no such thing to be found.
 There can be intentionality without representation. For example, needing something is an intentional phenomenon. The grass in my lawn can need water even though it is not going to get any and even if there is no water to give it. But the grass does not represent the water it needs.
 Other examples of intentional phenomena include spoken and written language, gestures, representational paintings, photographs, films, road maps, and traffic lights. It is controversial how these last instances of intentionality are related to the intentionality of thoughts and other cognitive states.
 Nonexistent intentional objects like Santa Claus and the fountain of youth raise difficult logical puzzles if taken seriously as objects. What properties do they have? What sorts of properties does Santa Claus have, as he is conceived by a certain child? Perhaps he is fat, lives at the North Pole, dresses in red, drives a sleigh bringing presents to children at Christmas time, and has at least eight reindeer. But intentional objects cannot always have all the properties which they are envisioned as having, because, as in the case of the child’s conception of Santa Claus, a nonexistent intentional object may be envisioned as existent, and it is inconsistent to suppose that something could be both existent and nonexistent (Parsons, 1980).
 You must resist the temptation to try to resolve such problems by identifying intentional objects with mental objects such as ideas or mental representations. That identification does not work. The child does indeed have an idea of Santa Claus, and Ponce de Leon had an idea of the fountain of youth. But the child does not believe that his idea of Santa Claus lives at the North Pole. Nor was Ponce de Leon looking for a mental representation of the fountain of youth. He already had a mental representation: He was looking for the (intentional) object of that representation.
 Is it enough to say that a nonexistent intentional object is a merely possible object - an object that exists in some possible world or other, but not in the actual world? That is not a completely general account, because some intentional objects are not even possible. Someone may try to find the greatest prime number without realizing that there is no such thing. The intentional object of the attempt - the greatest prime number - is not a possible object. There is no possible world in which it exists.
 One controversy concerning intentionality concerns how to provide a logically adequate account of talk of intentional objects. That is a controversy in philosophical logic (Parsons, 1980), and may not be especially important to the rest of cognitive science.
 The moral is that, on the one hand, you have to take talk of  nonexistent intentional objects with a grain of salt, without being too serious about the notion that there really are such things. On the other hand, you have to be careful not to conclude that the child pondering Santa Claus isn’t really thinking about anything or that Ponce de Leon wasn’t really looking for anything as he wandered through Florida.
 To what extent does cognition involve intentionality? In one view, everything cognitive is intentional: Intentional inexistence is the mark of the mental, according to Franz Brentano. Another view allows for nonintentional aspects of cognitive states, raw feels.
 Clearly, many feelings recognized in folk psychology have intentionality and are not simply raw feels. A child hopes that Santa Claus will bring a big red truck and fears that Santa Claus will bring a lump of coal instead. The child is happy that Christmas is tomorrow and unhappy that he hasn’t been a good little boy for the last few weeks. A child’s hopes, fears, happiness, and unhappiness have intentional objects and intentional content. But what of free-floating anxiety or depression? Do such states lack intentional content, so that you are not anxious about anything or depressed about anything, but just anxious or depressed? Or do such states have a very general, nonspecific content, so that you are anxious about things in general or depressed about things in general, just not anxious or depressed about something specific? It is hard to say what turns on the answer to this question.
 Perceptual experience has intentionality inasmuch as it presents or represents a certain environment. How perceptual experience presents or represents things may be accurate or inaccurate. Things may or may not be as they seem to be. Sometimes what you see or seem to see doesn’t really exist, as when Macbeth hallucinated a bloody dagger.
 The intentional content of perceptual experience is sortally perspectival, representing how things are from here or even representing how things are as perceived from here. The content of the experience may even be in part about the experience itself. What is perceived is perhaps seen as causing that very experience.
 The dagger is an intentional object of Macbeth’s perceptual experience. That’s what he is or seems to be aware of. You may be tempted to think that Macbeth must be aware of a mental image of a dagger, but that is like thinking that Ponce de Leon must have been trying to find an idea of the fountain of youth.
 Any attempt to explain intentional content in terms of use or conceptual role faces the following difficulty: Understanding the intentional content of a concept (i.e., understanding the concept) and understanding the conceptual role of the concept (i.e., understanding what the conceptual role of the concept is) are very different things. You can have a detailed understanding of the conceptual role or use of a concept without understanding the concept, and you can understand a concept perfectly without being able to specify exactly how the concept is used. For example, you might know exactly how a particular symbol is used in relation to other symbols and the environment without realizing that the symbol means plus. Similarly, you can fully understand addition and the concept of plus without being able to describe exactly how that concept is used in relation to other concepts and the environment.
 To have a concept is automatically to understand the concept, whether or not you know how the concept is used. Furthermore, to understand another person’s thoughts, it is not enough (and not required) that you understand how the concepts involved in those thoughts function. You need an understanding of the other person’s thoughts from the inside. You need to know what it is like to have such thoughts. You need to relate the other person’s thoughts to equivalent thoughts of your own that you understand.
 Some theorists put the point like this: You need a first-person understanding of intentionality, an understanding from the point of view of the thinker. It is not enough to have a third-person understanding from the point of view of an observer of the thinker (Nagel, 1974).
 This need not mean that a conceptual-role or use theory is incorrect. Perhaps intentionality is a matter of use or conceptual role. But you have to distinguish two sorts of understanding of intentionality: the internal, first-person understanding you have by virtue of being the person who uses representations in a certain way, and the external, third-person understanding an observer can have of that use. Compare the way a person can know how to swim, being able to swim, without being able to describe what is done when someone swims.
 Intentionality is an important characteristic of cognition. It is useful to think of cognitive states as involving relations to intentional objects, even though the notion of an intentional object raises deep questions in philosophical logic. It is unclear whether all mental life involves intentionality, or whether there are raw feels. Certainly, many kinds of feelings involve intentionality: emotions, for example, and bodily feelings. Knowledge and perception have intentional content: appreciation of this fact undermines the standard sense datum argument and helps to avoid mistakes in studying imagery. Understanding the intentionality of language, pictures, and other symbols and representations requires a distinction between using symbols to communicate ideas and using symbols to calculate or think with. The intentionality of symbols used in communication may be derivative of the original intentionality of symbols used in thought and calculation; however, it is controversial whether the mere use of symbols in the right way is enough to give them original intentionality.
 That being said, in the course of even a simple encounter with another person, one engages in a wide variety of cognitive activities, among them problem solving, face recognition, speech production and perception, memory, and motor control. How does the mind - an apparently unitary entity - accomplish such a diversity of tasks? Is the mind partitioned into diverse mechanisms, each responsible for a different job? Or are more uniform, general-purpose mechanisms deployed for different cognitive purposes? Which tasks even count as the same, and which as different? Is visual recognition a single task, or are the mechanisms that recognize objects fundamentally distinct from those that recognize faces? Is speech produced and perceived by similar processes or by different ones? More generally, how, and how much, do such different processes interact?
 It is to these and related questions that the debate over the modularity of mind is addressed. Because the issue is not the character of cognitive capacities per se, but the organization and distribution of the systems that underlie these capacities, the issue of modularity is often described as concerning the architecture, or design principles, of the mind.
 Some controversies in cognitive science, such as arguments about whether classical or distributed connectionist architectures best model the human cognitive system, reenact long-standing debates in the philosophy of science. For millennia, philosophers have pondered whether mentality can submit to scientific explanation generally, and to physical explanation particularly. Recently, positive answers have gained popularity. The question remains, though, as to the analytical level at which mentality is best explained. Is there a level of analysis that is peculiarly appropriate to the explanation of either consciousness or mental contents? Are human consciousness, cognition, and conduct best understood in terms of talk about neurons and networks, or schemas and scripts, or intentions and inferences? If our best accounts make no appeal to our hopes or beliefs or desires, how do we square those views with our conception of ourselves as rational beings? Moreover, can models of physical processes explain our mental lives? Is mentality best explained in terms of overall brain functioning, or neuronal or molecular or even quantum activities - or any of a dozen levels of physical explanation in between? Also, regardless of how they compare with explanations cast at physical levels, what is the status of psychological explanations that appeal fundamentally to mental contents?
 Cognitive architecture permits cognitive scientists to explain human cognition by appealing to the concepts and principles of machine computation. Still, beyond a commitment to the notion that cognition involves computations over representations, the precise directions in which this relation should lead us remain controversial. The emergence of distributed connectionist models over the past decade or so has stimulated debates about the character of both the representations and the computations involved in cognitive processing.
 The behaviour of a computational system is not just a function of architectural constraints; programs also play a decisive role. Without extensive knowledge of a system’s design, it is difficult to distinguish those aspects of behaviour that arise from the architecture from those that arise from the program, and the difficulty is compounded when the system in question is organic and the designer is natural selection. When cognitive systems consist of neurons rather than computer chips, and the designer is evolution instead of engineers, it is fairly safe to bet that at least sometimes the architecture realizes cognitive functions differently from the way digital computers do.
 Classicism holds that a model of our cognitive architecture provides only a functional characterization of the underlying mechanism. A vast array of physical arrangements can implement the configuration of functional relations which these abstract models describe. On any computational view, distinguishing a cognitive level from the neuroscientific level of explanation depends precisely on the fact that models of cognitive architecture involve abstraction from many of the brain’s physical details. Computationalists of both the classical and the connectionist varieties typically assume that the neural level will not prove the best level for characterizing the cognitive architecture; from this assumption many connectionists (e.g., Smolensky, 1988) demur - arguably providing more fine-grained analyses of these issues in the process.
 For the purposes of theorizing, proponents of classical models insist on a principled subdivision of the cognitive level into a semantic (or knowledge) level and a symbol (or syntactic) level. As with commonsense psychology, considerations of meaning and rationality order the semantic materials. The pivotal assumptions in classical proposals, however, concern the symbol level.
1. Mental symbols are context-independent representational primitives that possess their representational contents by virtue of their forms.
2. A finite set of such symbols can represent distinct semantic contents uniquely, because these symbols are the fundamental constituents of a quasi-linguistic system that possesses a combinatorial syntax and semantics (which comprehensively parallel one another).
3. The formal syntactic features of these symbols correspond precisely to neural properties that are pivotal in the etiology of behaviour.
Proponents of modularity argue that the mind comprises separate subsystems carrying out relatively specific functions, relatively automatically and autonomously. Theories differ as to how isolated, automatic, and specific these modules are claimed to be, and as to which cognitive processes are thought to be modular. Theories of modularity may be distinguished, in other words, in terms of their answers to the conceptual question, ‘What makes something a module?’, and the empirical question, ‘Which cognitive processes are modular, so described?’
 Although largely unpopular earlier in this century, some form of the modularity thesis is now a prominent, even dominant view. One reason for this change in the intellectual tide concerns the role of empirical evidence in this debate. Current defenders of modularity theory are distinguished by the fact that experimental data are marshalled in support of the view.
 The appeal to empirical evidence does not easily resolve the debate, however, because there is wide disagreement over how this evidence should be interpreted. Questions remain as to how, and how much, interaction there is, both among modules and between modules and nonmodular systems. There are also questions about the internal structure of modules themselves. Are they further decomposable into sub-modules, and if so, how, and how much, do sub-modules interact with each other and with their parents? Do the properties associated with modules constitute necessary and sufficient criteria for being a module, or are they merely generally characteristic properties? Are some properties more essential than others? If so, which ones?
 In addition to the conceptual question (What makes something a module?) and the empirical one (Which specific processes are in fact modular?), a third, more methodological dimension cuts across the debate: the claim that the modularity thesis is not just a descriptive claim about the internal organization of the mind, but a normative claim about how the mind ought to be studied.
 Jerry Fodor’s book, The Modularity of Mind (1983), has become a central reference point for debates about modularity. At the time of its publication, however, a modular approach had already been defended in a number of domains. Such an approach is to be found, for example, in David Marr’s principle of modular design, in Kenneth Forster’s autonomous model of lexical access, in Noam Chomsky’s notion of a language organ, in Michael Posner’s distinction between automatic and strategic processing, and in Herbert Simon’s concept of a nearly decomposable system. Fodor’s contribution was thus less to initiate discussion about modularity than to systematize and promote it.
 We can understand Fodor’s central claims about modularity in terms of the three dimensions enumerated above: conceptual, empirical, and methodological. At the conceptual level, Fodor claims that modular systems are distinguished by their characteristic properties and functions. Fundamentally, he distinguishes three kinds of mechanisms: (1) transducers, (2) modules, and (3) central systems. The function of transducers is to receive energy impinging at the organism’s surface and translate it into a representational form accessible by other psychological systems. The function of central systems is that of inference and belief fixation. The function of modules is to mediate between transducers and central systems. Although this mediation may operate in either direction, Fodor discusses almost exclusively modules which take transduced representations and infer hypotheses about their distal sources, which then become available for use by central systems. More generally, Fodor (1983) says that the function of such modules is ‘to present the world to thought’.
 Modules are intermediate between transducers and central systems not only in terms of the order of processing but in terms of the complexity of processing as well. Like central cognitive mechanisms, modular mechanisms are supposed to be inferential and computational; but, like transducers, they are assumed to be reflexive and automatic.
 In ‘The Modularity of Mind’ Fodor identifies nine properties that are claimed to be responsible for the automatic, autonomous nature of modular processing. Modules, Fodor says, (1) are domain-specific, (2) operate in a mandatory fashion, (3) allow only limited central access to the computations of the module, (4) are fast, (5) are informationally encapsulated, (6) have shallow outputs, (7) are associated with fixed neural architecture, (8) exhibit characteristic and specific breakdown patterns, and (9) exhibit a characteristic pace and sequencing in their development.
 In later essays, however, Fodor emphasizes informational encapsulation to the exclusion of the others as the single defining characteristic of a module. An informationally encapsulated system operates largely in isolation from the background information at the organism’s disposal. Informational encapsulation constrains a priori the amount and type of data available for consideration in projecting hypotheses about the distal layout. Moreover, this constraint on information is achieved architecturally rather than substantively. That is, in solving a particular computational task, a modular mechanism can only make use of information within the module: it has no capacity to bring even relevant information to bear if that information happens to lie beyond the module’s boundaries.
 It is important to distinguish informational encapsulation from domain specificity, which some other writers take to be the defining feature of a module. To say that modules are domain-specific is to say that they operate on distinct classes of stimuli: only a specific stimulus domain will trigger the operation of any given module. Fodor (1983) describes the difference between informational encapsulation and domain specificity as follows: ‘Roughly, domain specificity has to do with the range of questions for which a device provides answers (the range of inputs for which it computes analyses); whereas encapsulation has to do with the range of information that the device consults in deciding what answers to provide.’
 Central systems - those responsible for inference and belief fixation - are, according to Fodor, nonmodular and hence unencapsulated. Such systems are characterized by the absence of antecedently established constraints on the information which they can recruit in the course of their operation. More positively, in an analogy to the process of confirmation in science, Fodor describes central systems as isotropic and Quinean. Isotropic processes are those in which information from arbitrary knowledge domains may be relevant to the confirmation of a given hypothesis. ‘Everything the scientist knows,’ Fodor explains (1983), ‘is, in principle, relevant to determining what else he or she ought to believe.’ By a Quinean system, Fodor means one in which the degree of confirmation of a hypothesis depends not only on its intrinsic features but also on its relation to all the system’s other beliefs.
 At the empirical level, Fodor’s principal claim is that perception is modular but higher-order cognition is not. Perceptual, but not cognitive, processing is accomplished by encapsulated mechanisms which operate independently of the rest of the organism’s knowledge. In Fodor’s usage, therefore, the phrase ‘modularity of mind’ implies only that some processes (the perceptual ones) are accomplished by encapsulated mechanisms, not that the mind in general is modular.
 The example that Fodor most often invokes to illustrate this view is the Müller-Lyer visual illusion, in which two parallel lines are flanked by arrows, pointing inward in one case and outward in the other. Although the two lines are objectively of equal length, they continue to look as if they are of different lengths. It is this persistence of the illusion, and the discrepancy between how the lines look and what is believed about them, that Fodor cites to support the claim that (visual) perception is modular. Even when the organism knows that the two lines are of the same length, it cannot use this knowledge to affect its perception, suggesting that the visual processes are encapsulated from such (module-external) information.
 A second empirical claim that Fodor makes is that language is like perception in being modular, rather than central, like cognition. Because perception and language are not usually classified as being of a common type, Fodor coins the term input system for what he claims is the (natural) kind of mental system comprising perception and language (though strictly this kind includes both input and output systems).
 Note, in passing, that the term cognitive is commonly used in two different senses: as a general, neutral term for all mental capacities, including perception, in which case it contrasts roughly with bodily; and in a narrower, more restricted sense, in which it contrasts with perceptual. It is this latter usage that Fodor has in mind when identifying as cognitive such nonmodular central systems as attention, memory, inductive reasoning, problem solving, and general knowledge.
 Finally, at the methodological level, Fodor argues that the distinction between modular and nonmodular psychological systems is coextensive with the distinction between those psychological systems that can be fruitfully studied scientifically and those that cannot. Modular systems are good candidates for scientific investigation; central or unencapsulated systems are subject to unconstrained data search. This at once makes such systems rational - they can take into account anything the organism knows or believes - but it also makes them susceptible to what is known as the frame problem: the difficulty of finding a nonarbitrary strategy for restricting the evidence that should be searched and the hypotheses that should be contemplated in the course of rational belief fixation (Fodor, 1987).
 The frame problem is inherently faced by any unencapsulated, rational system. On the one hand, the lack of constraint on potentially relevant evidence implies that there is no natural end to deliberation. On the other hand, evidence must be constrained if a system is to function at all, and it must be constrained nonarbitrarily if it is to function rationally. (Modular processing is not rational processing precisely because its database of information is constrained arbitrarily - i.e., architecturally.) Because the identity and degree of relevant considerations change from situation to situation, Fodor believes that relevance cannot be formalized in a theory, and therefore that central systems cannot be the object of fruitful scientific investigation.
 Fodor’s view implies rather dire consequences for the future of cognitive science. Although cognitive science has been concerned to explain the processes of perception (especially vision), the centre-piece of the project has been the dream of explaining more general cognitive abilities such as thought, memory, and problem solving. Fodor’s claim is that these processes, being quintessentially unencapsulated, are ones that we have little hope of understanding, and hence are ones that we should, as a matter of research strategy, abandon. Characteristically bold in intellectual temperament, Fodor dubs this methodological point ‘Fodor’s First Law of the Nonexistence of Cognitive Science’ (1983).
 Fodor makes three principal claims about modularity: the empirical claim that perception, but not cognition, is modular; the conceptual claim that modules, but not central systems, are informationally encapsulated; and the methodological claim that encapsulated processes, but not unencapsulated ones, are amenable to scientific study. Taken together, these three claims form an argument against the possibility of doing cognitive (as opposed to perceptual) science.
 The modularity thesis has been investigated in most detail in the domain of language. In the dominant tradition of generative grammar, a tradition initiated by Chomsky in the 1950s, a core assumption has been that the processes responsible for language production and perception are largely innate and modular. To emphasize the functional independence of linguistic from other cognitive processes, Chomsky has described the language module as an independent ‘mental organ’.
 Nevertheless, because generative linguistics concentrates on explaining linguistic competence (the tacit knowledge that is said to underlie our ability to use language) rather than linguistic performance (the actual use of language in concrete circumstances), debates about modularity, which concern performance issues of how language is processed, have most often taken place in psychology and psycholinguistics, rather than in linguistics proper.
 Even so, generative grammar is a theoretical approach that seeks to describe and explain natural language in terms of its mathematical form, using formal languages, such as propositional logic, and the formal distinction between semantics and syntax. The semantics of a linguistic proposition are the objective conditions under which it may truthfully be stated, and the syntax of that proposition is the mathematical structure of its linguistic elements and relations, irrespective of their semantics.
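The syntax/semantics distinction just described can be made concrete with a minimal sketch (not drawn from the original text; all names are illustrative): formulas of propositional logic represented purely as syntactic structures, with a separate evaluation function supplying their semantics as truth conditions.

```python
# Syntax: a formula is just a structure, e.g. ("and", "p", "q"),
# with no meaning attached. Atomic propositions are bare strings.

def evaluate(formula, assignment):
    """Semantics: the condition under which a formula is true,
    relative to a truth-value assignment for its atoms."""
    if isinstance(formula, str):          # atomic proposition
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return all(evaluate(a, assignment) for a in args)
    if op == "or":
        return any(evaluate(a, assignment) for a in args)
    raise ValueError(f"unknown connective: {op}")

# Two formulas with different syntax (different forms) but the same
# semantics (identical truth conditions): both express 'if p then q'.
f1 = ("or", ("not", "p"), "q")
f2 = ("not", ("and", "p", ("not", "q")))
```

The point of the sketch is that the syntactic structure of `f1` and `f2` can be manipulated without any appeal to meaning, while their semantics is fixed only by the separate evaluation function; this is the division that generative grammar's analogy with formal languages presupposes.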
 Recently, however, a new class of linguistic theories has emerged. These theories seek to analyse natural languages not in terms of their mathematical form, but rather in terms of their psychological functions. The focus is therefore on the cognitive and social processes of which natural languages are constituted, including symbols, categories, schemas, perspectives, discourse context, social interaction, and communicative goals. The broadest term to cover all these theories is functional linguistics.
 The functional approach to language holds that the forms of natural languages are created, governed, constrained, acquired, and used in the service of communicative function. Indeed, no one would deny the importance of functions in human language, as we constantly use language to communicate intentions from one person to the next. For example, we can use language to tell another person how to drive a car, where to look for edible mushrooms, and how to avoid falling into crevasses when walking on glaciers. We can also use language to foster social solidarity by greeting and acknowledging other people with salutations and standardized phrases. Yet another use of language is to represent our thoughts and goals internally: both inner speech and written expression allow us to talk to ourselves in ways that help foster creativity, invention, and memory. Additional artistic functions of language include drama, poetry, and song.
 Given the importance of these various functions of human language, it may be surprising to learn that there is a major debate in linguistic and psycholinguistic circles about whether functions determine the shape of language. To the outsider, it would seem almost obvious that the shapes and forms of language are determined by the functions they serve. We use nouns to refer to things and verbs to refer to actions. By choosing one word order over another, we can distinguish who did what to whom. In this way, the most basic forms of human language are functionally determined. But exactly how does function have its impact on form? Is the impact direct and immediate, or only indirect and delayed? Is there only one basic way in which functions determine forms, or are there various types of form-function relations? Is it even possible that the system of forms could become freed from any linkage to function and take on some type of autonomous existence?
 The antithesis to functionalism is formalism. The formalist position holds that although language may serve a variety of useful functions, the actual shape of linguistic form is determined by abstract categories that have nothing to do with particular functions or meanings. On this view, language is a special gift to the human species, whose formal contours reflect the abstract, reflective nature of the human mind. Categories such as ‘verb’ or ‘subject’ are abstract objects that are processed and represented in a separate mental module devoted to grammar. The objects of this module are universal and derive not from functional pressures or ongoing conceptualization of the world but from an innate language-making capacity. The language module is informationally encapsulated: it relies only on its own abstract categories and information to process and represent language, and does not depend upon information from other aspects of cognition. According to this view, the liberation of linguistic form from any tight linkage to function reflects the modular architecture, and the power, inherent in the human mind. On the computational picture that underlies this position, all cognitive processes require internal representations; anti-representationalist challenges have arisen from discussions of several computation-related issues, with some cognitive scientists contending that the status of internal representations may be as problematic as that of phlogiston.
 The core issue on which functionalism and formalism disagree is that of autonomy versus modularity. Formalism claims that the shape of language is minimally constrained by functional pressures, since language basically follows its own rules in a separate, informationally encapsulated, autonomous cognitive module. Functionalists claim that language is continually subject to the pressures of the conceptual and social messages it expresses, and that these pressures govern the processes of language change, language learning, and language processing.
 Nonetheless, at the most fundamental level of analysis, functional linguistics rejects the generative grammar analogy between natural and formal languages, along with its concomitant distinction between semantics and syntax. In functional linguistics, natural languages, like biological organisms, are composed most fundamentally of structures with functions. Linguistic structures vary from relatively simple entities such as words and grammatical morphemes to more complex entities such as phrases and linguistic constructions. All linguistic structures have functions, and in all cases this function concerns communication, including such things as reporting an event, identifying the roles played by participants in an event, asking a question, establishing a topic of discourse, and taking a particular perspective on a scene. For functional linguistics, therefore, the most fundamental distinction in natural languages is not between meaningful linguistic elements and their algorithmic combination, irrespective of meaning (i.e., mathematical semantics and syntax), but rather between structure and function, symbol and meaning, signifier and signified.
 Within functional linguistics, cognitive linguistics refers to the set of theories that are primarily concerned with the cognitive dimensions of linguistic communication. Although there were important precursors in the work of linguists such as Charles Fillmore and Leonard Talmy, cognitive linguistics had its clear origins as a scientific paradigm in 1987 with the publication of George Lakoff’s ‘Women, Fire, and Dangerous Things: What Categories Reveal about the Mind’ and the first volume of Ronald Langacker’s ‘Foundations of Cognitive Grammar’ - followed immediately by the founding of the International Cognitive Linguistics Association and its official journal, Cognitive Linguistics. The fundamental stance of cognitive linguistics may best be summarized in terms of two key issues: the nature of linguistic meaning and the nature of grammar. In the view of some cognitive scientists, the cognitive linguistics approach to these two issues constitutes a revolution in our understanding of how human language and cognition operate.
 In the process of linguistic communication, the speakers of a language employ particular conventional symbols to induce their listeners to conceptualize particular events and situations in particular ways. It is therefore misleading to say that language depends on cognition, as if they were two separate entities. Rather, the more accurate characterization is that natural languages are nothing more or less than ways of symbolizing cognition for purposes of communication. This cognitive linguistic view of language as one particular manifestation of human cognition is best illustrated by three phenomena: (1) the dependence of word meaning on surrounding cognitive frames, (2) the myriad ways in which a single referential situation may be linguistically construed, and (3) the ever-changing meanings for which particular linguistic symbols are used historically, including metaphorical meanings. Each of these will be treated in turn.
 First, in many linguistic theories the semantics of a language is viewed in the manner of a dictionary. That is, speakers are seen to possess cognitively distinct mental lexicons, within which there is a list of linguistic items, each of which has a meaning that may be described independently with something like a list of semantic features. The problem with this view is that many linguistic items take their meaning from the role they play in larger forms of life, and thus they require a description more encyclopaedic in nature. For example, the word ‘bachelor’ - which is formalized in some semantic theories as something like ‘adult + male + unmarried’ - does not apply easily to such unmarried adult males as Tarzan, the Pope, and others much the same. These individuals meet the formal criteria for ‘bachelor’, but they are not good exemplars, because they do not participate in the cultural setting from which the word takes its meaning. Other words whose significance is embedded in larger cultural frames include ‘trump’ (which requires the game of bridge) and ‘pedestrian’. Although the point is clearest with highly culturally bound words such as these, the same basic principle applies as well to many other words that seem initially to be more context-independent: for example, a leaf can only be understood in the context of a tree, and a knuckle can only be understood in the context of a finger (which requires a hand, and so on) (Langacker, 1987). In general, the meaning of many, perhaps most, linguistic expressions can be adequately characterized only with respect to some larger conceptual domain that is not, strictly speaking, a part of its meaning, but only provides a frame for that meaning.
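The dictionary-style feature analysis criticized above can be sketched in a few lines (a hypothetical illustration, not any actual semantic theory): the checklist ‘adult + male + unmarried’ admits the Pope, even though he falls outside the cultural frame that gives ‘bachelor’ its meaning.

```python
# Hypothetical sketch of the dictionary view of lexical semantics:
# 'bachelor' as a checklist of context-independent semantic features.

BACHELOR_FEATURES = {"adult": True, "male": True, "married": False}

def matches_features(individual, features):
    """True if the individual satisfies every feature on the checklist."""
    return all(individual.get(k) == v for k, v in features.items())

# The Pope meets the formal criteria 'adult + male + unmarried' ...
pope = {"adult": True, "male": True, "married": False}
print(matches_features(pope, BACHELOR_FEATURES))  # prints True

# ... yet he is not a good exemplar of 'bachelor'. The checklist has no
# place for the cultural frame (eligibility for marriage) on which the
# word's meaning depends - which is exactly the objection in the text.
```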
 Second, many linguists and cognitive scientists have implicitly operated with an objectivist view of linguistic semantics. On this view, a linguistic entity stands for things and situations in the world, so that the entity’s semantics comprises those things and situations in the world for which it stands. But this view of linguistic meaning basically ignores semantic differences that depend on the different perspectives that may be taken on one and the same objective situation. Clear examples of this more subjectivist view of linguistic meaning are provided by the alternative descriptions of single situations:
 The roof slopes upward./The roof slopes downward.
 John kissed Mary./Mary was kissed by John.
 The glass is half empty./The glass is half full.
 He has a few friends in high places./He has few friends in high places.
In each case one and the same situation is described differently, depending on the point of view the speaker wishes to communicate (Langacker, 1987). People may also use different formulations to describe a single situation at different levels of detail. For example:
 This is a triangle./This is a three-sided polygon.
 This vehicle is in my way./This blue van is blocking my way into the driveway.
 Susan managed to open the door with Jim’s key./Jim’s key opened the door.
 Bill flew to New York./Bill bought a ticket, drove to the airport, boarded an aeroplane, and so forth.
One and the same referential situation may also be described in different words depending on the background frame of the communicative situation. Thus, the exact same piece of real estate might be described thus:
 Hiker on a hilltop: ‘There’s the coast’.
 Sailor at sea: ‘There’s the shore’.
 Skydiver from the air: ‘There’s the ground’.
 Child on vacation: ‘There’s the beach’.
The main point of all these examples is that human languages provide their speakers with a whole battery of symbolic resources with which they may induce other people to construe a particular situation or event in particular ways. The ways in which a situation or event may be construed linguistically are myriad, depending ‘inter alia’ on the communicative intentions of the speaker, the canonical background frame of the expression, and the knowledge the listener may be assumed to possess in the communicative interaction.
 Finally, there is the fact that the meanings of particular linguistic symbols in particular languages are constantly changing as their speakers put them to new uses, including metaphorical ones. These changes of meaning are not rare events, and the use of metaphor is not a specialized, atypical use of language. Lakoff and Johnson (1980) argue and present evidence that most everyday language includes the use of linguistic items originally conventionalized for other semantic purposes. These range from fairly subtle extensions, such as running for political office and being in an organization, to more obviously metaphorical extensions, such as being out of one’s mind or being a lost soul. Moreover, what Lakoff and Johnson discovered was that in human linguistic communication people do not just use isolated semantic extensions and metaphors in sporadic, unsystematic ways; rather, they often structure whole experiential domains metaphorically. For example, following the metaphor that ‘Time is money’, people say such things as:
 I spend too much time watching TV.
 That detour cost me 2 hours.
 The delaying tactics bought them more time.
But time may also be seen in terms of space:
 I don’t know what lies ahead for me.
 His youth is behind him now.
 I’ll be there at 5:00 am on the 11th of July.
An especially powerful discovery about the metaphorical dimension of language is that people often use more concrete domains of knowledge to structure and comprehend more abstract ones. This is manifest in people’s frequent use of terms for very basic aspects of experience, such as bodily actions and simple perceptual transformations of objects, to structure more abstract domains. For example, we understand the English expressions ‘in’ and ‘out’ most fundamentally for such things as putting objects into and taking them out of containers, but we also put arguments in and take arguments out of our speeches. We use ‘off’ and ‘on’ most basically for putting clothes on and taking them off our bodies, or putting objects on and taking them off tables, but we also say that a tennis player is on her game or off her game. Lakoff and Johnson’s claim is that there are certain fundamental domains of human experience - constituted by what they call image schemas - that serve as prototypes of some very general referential situations, and thus as especially powerful source domains for metaphorical construal (Johnson, 1987). Overall, it may be said that semantic extension and metaphorical construal pervade human language use, and their existence demonstrates that linguistic meaning is part and parcel of a process in which people continually adapt their existing means of linguistic expression for particular communicative goals.
 The most general point to be made from all three considerations is that it is basically impossible to isolate linguistic meaning from cognition in general, in the manner of a mental lexicon divorced from other aspects of human cognition and communication. Cognitive linguistics therefore adopts an encyclopaedic, subjectivist approach to linguistic meaning, in which human beings create and use linguistic conventions in order to symbolize their shared experience in various ways for specific communicative purposes. These different experiences and purposes are always changing, so they can never be captured by an itemized, objectivist description of linguistic elements and their associated truth conditions. For an adequate description of linguistic semantics from the cognitive linguistics point of view, what is needed is a psychology of language in terms of such things as cognitive structure, the manipulation of attention, alternative construals of situations, and changing communicative goals.
 In the cognitive linguistics view, the grammar of a language is best characterized as ‘a structured inventory of symbolic units’, each with its own structure and function (Langacker, 1987). These units may vary in both their complexity and generality, with words being only one type of symbolic unit. At the simplest level of analysis, all the structures of a language are composed of some combination of four types of symbolic elements: words, markers on words (e.g., the English plural -s), word order, and intonation (Bates and MacWhinney, 1989). Each of the several thousand languages of the world uses these four elements, but in different ways. In English, for example, word order is most typically used for the basic syntactic function of indicating who did what to whom, intonation is used mainly to highlight or background certain information in the utterance, and markers on words serve to indicate such things as tense and plurality. In Russian, on the other hand, who did what to whom is indicated by case markers on words, and word order is used mostly for highlighting and backgrounding information. In still other languages (e.g., Masai), who did what to whom is indicated through markers on the verb; the point is that each of the four elements may be used for virtually any semantic or pragmatic function in a particular language. Moreover, these structure-function relationships may change over time within a language, as in the English change from case marking to word order for indicating who did what to whom several hundred years ago.
 These four types of symbolic elements do not occur in isolation; rather, in each language they combine into constructions composed of unique configurations of these elements (Goldberg, 1995). Linguistic constructions are basically cognitive schemas of the same type that exist in other domains of cognition. These schemas/constructions may vary from the specific to the general. For example, the one-word utterance ‘Fore!’ is a very simple, concrete construction used for a specific function in the game of golf. ‘Thank you’ and ‘Don’t mention it’ are multi-word constructions used for relatively specific social functions. Somewhat more abstract constructions contain slots into which whole classes of items may fit: ‘Down with _’ and ‘Hooray for _’. Two other constructions of the type that have more general application are:
The way construction:       She made her way through the crowd.
                            I paid my way through college.
                            He smiled his way into the meeting.
The let alone construction: I wouldn’t go to New York, let alone Boston.
                            I’m too tired to get up, let alone go running around with you.
                            I wouldn’t read an article about, let alone a book written by, that swine.
Each of these constructions is defined by its use of certain specific words (‘way’, ‘let alone’), and each thus conveys a certain relatively specific relational meaning, but each is also general in that its specific content may differ from one use to the next (Fillmore et al., 1989).
 There are also constructions that are extremely general in the sense that they are not defined by any words in particular, but rather by categories of words and their relations. Thus, the ditransitive construction in English prototypically indicates transfer of possession and is represented by utterances such as ‘He gave the doctor money’. No particular words are a part of this construction; it is characterized totally schematically by means of certain categories of words in a particular order: noun-phrase + verb + noun-phrase + noun-phrase. No construction is fully general, however, so in the ditransitive construction the verb must involve, at the least, some form of motion (as in ‘He threw Susan money’, but not ‘He stayed Susan money’). Other examples of very general English constructions are the various resultative constructions (e.g., ‘She knocked him silly’, ‘He cleaned the table off’), constituted by a particular ordering of particular categories of words, and the various passive constructions (e.g., ‘She is loved by Harry’, ‘She got kissed’), which provide a unique perspective on scenes and are constituted by a particular ordering of word categories as well as some specific words (e.g., ‘by’) and markers (e.g., -ed). All these more general constructions are defined by general categories of words and their interrelations, so each may be applied quite widely to many referential situations of a certain type. These abstract linguistic constructions may be thought of as cognitive schemas of the same type found in other cognitive skills, that is, as relatively automatized procedures that operate on a categorical level.
 An important point is that each of these abstract linguistic schemas has a meaning of its own, in relative independence of the lexical items involved (Goldberg, 1995). Much of the creativity of language comes from fitting specific words into linguistic constructions that are nonprototypical for the word. For example, the verb ‘kick’ is not typically used for transfer of possession, and so it is not prototypically used with the ditransitive construction. But it may be construed in that way in utterances such as ‘Mary kicked John the football’, because kicking can be seen as imparting direct motion to an object with another person as terminus. This process may extend even further to such things as ‘Mary sneezed John the football’, which requires an imaginative interpretation in which the verb ‘sneeze’ is used not in its more typical intransitive sense (as in ‘Mary sneezed’), but rather as a verb in which the sneezing causes direct motion in the football. If the process is extended too far, it begins to break down, as in ‘Mary smiled John the football’. The important point is that in all these examples the transfer-of-possession meaning comes from the construction itself, not from the specific words of which it is constituted. Linguistic constructions are thus an important part of the inventory of symbolic resources that language users control, and they create an important top-down component of the process of linguistic communication - in keeping with the role of abstract schemas in many other domains of human cognition.
 All constructions, whether composed of one word or many categories of words in specific orders with specific markers and intonations, derive from recurrent events, or types of events, with respect to which the people of a culture have recurrent communicative goals. This means that a major function of all linguistic constructions is attentional - for instance, to take one or another point of view on a situation, or to make a statement rather than ask a question. For example, the same event may be depicted as:
  Fred broke the window with a rock.
  Fred broke the window.
  The rock broke the window.
  It was Fred who broke the window.
  It was the window that Fred broke.
  What Fred did was break the window.
In each of these construals of the event the perspective is slightly different, and Fred’s and the rock’s roles in the process are made attentionally salient to different degrees (Croft, 1991), with each construal being felicitously used for a particular communicative purpose in a particular discourse context. Constructions such as these are created for precisely these types of attentional functions.
 Different languages are constituted by different specific symbols and constructions, of course. In some cases these differences have become relatively conventionalized across linguistic structures within a language, so that we speak of different types of languages with regard to how they symbolize certain recurrent events or states of affairs. An important area of research in cognitive linguistics, therefore, concerns the different resources that different languages provide for symbolizing certain universal events and situations (van Valin and LaPolla, 1997). For example, almost all people speaking almost all languages have general constructions for talking about someone causing something to happen, someone experiencing something, someone giving someone something, an object moving along a path, and an object changing state. But different languages package such events differently; for instance, English and Spanish express motion events differently, as analysed by Talmy (1988):
English:  The bottle floated into the cave.
Spanish:  La botella entró la cueva flotando (‘The bottle entered the cave floating’).
In English the path of the bottle is expressed by the preposition ‘into’, and the manner of motion is expressed by the verb ‘float’, whereas in Spanish the path is expressed by the verb ‘entró’, and the manner of motion is expressed by the modifier ‘flotando’. Because this difference is pervasive and consistent in the two languages, we may say that Spanish is a verb-framed language, because it typically expresses the path of motion in the verb, whereas English is a satellite-framed language, because the path of motion is typically expressed by a satellite such as a preposition. There are other typological differences among languages as well.
 The cognitive bases of linguistic constructions have been most thoroughly investigated by Langacker (1987, 1991). Most importantly, Langacker has provided an account of the different cognitive operations that characterize the two categories of words that form the heart of the most general constructions in most of the world’s languages: verbs and nouns. Verbs form the relational backbone of linguistic expressions and have to do with processes that unfold over time, or else states that remain stable over some period of time. Thus, to be able to say that something has moved or changed, there must have been at least two moments of attention: one in which an entity was in one location or state, and another in which it was in another location or state. For example, we cannot make the judgment that ‘She crossed the river’ on the basis of a single snapshot of a woman at any location in or near a river; rather, we must have something like a first snapshot in which she is at a location on the bank of the river, a temporally subsequent snapshot in which she is in the river, and another in which she is on the opposite riverbank. We can also say, ‘She is across the river’, for this same situation (woman standing on one bank), but in this case there is no implication that a process of crossing ever occurred. Note that the description of states, as in ‘She remains across the river’, also requires at least two moments of attention in which the woman stays in the same location on the other side of the river (in one snapshot she might be engaged in initiating an activity). Interestingly, most languages allow their speakers to use some nouns as verbs in certain situations, in which case some kind of process interpretation is required, as in ‘brush with a brush’, ‘hammer with a hammer’, ‘dock the boat’, and ‘table the motion’ (typically an action closely associated with the object).
 Nouns are words used to indicate the participants in events or situations. Most prototypically these are spatially bounded entities such as people or trees or bicycles, but nouns may also be used to designate temporally bounded entities such as Tuesday, or corporations, or virtues. For Langacker, the key cognitive operation involved is the bounding of a portion of experience so as to create a thing as distinct from ongoing experience, as illustrated by the fact that nouns may be used to talk about what are clearly events in nature (e.g., the parade, the party). Indeed, in most languages there are processes by means of which a verb form like ‘to swim’ may be turned into a noun like ‘swimming’ if it is to be thought of as a participant in an event or state of affairs, as in ‘This swimming strengthens my leg muscles’. The bounding process that creates nouns thus reflects not the independent structure of the world, but rather the fact that an important communicative function in linguistic communication is the identification of things to be talked about.
 This view of linguistic communication and the cognitive processes on which it depends is obviously very different from that of generative grammar and other formalistic approaches. But cognitive linguistics can nevertheless account for all the major phenomena of generative grammar. For example, on the generative grammar view, natural language structures may be used creatively because speakers possess a syntax divorced from semantics. On the cognitive linguistics view, on the other hand, linguistic creativity results quite simply from the fact that speakers have formed highly general linguistic constructions composed of word categories and abstract schemas that operate on the categorical level. That linguistic categories and schemas are formed in the same basic way as other categories and schemas is evidenced by the fact that they show the same kinds of prototypicality effects and metaphorical extensions as other categories and schemas (Lakoff, 1987; Taylor, 1996). Also, generative grammar analyses depend crucially on hierarchically organized tree structures that are seen as unique to language.
 A major objective of cognitive science is to understand the nature of the abstract representations and computational processes responsible for our ability to reason, speak, perceive, and interact with the world.