Book summary of Cognitive Psychology - Gilhooly, Lyddy & Pollick - 1st edition


What is cognitive psychology? - Chapter 1

Cognitive psychology is the study of how people and animals collect information, store information in memory, retrieve information and work with information to achieve goals.

Preface

Cognitive psychology is concerned with how the brain represents and uses information about the outside world. It also tries to explain how errors in perception or judgment can arise. In short, cognitive psychology is the study of how people and animals collect information, store information in memory, retrieve information and work with information to achieve goals. Mental representations, that is, inner representations of an external reality (such as images or verbal concepts), play a major role in this.

History and Approaches

In ancient times the study of memory was driven mainly by the art of rhetoric, which led to the use of memory aids (mnemonics) such as the method of loci. Here you form a series of images that connect the objects to be remembered to a sequence of places you know well. The keyword method is used when learning a foreign language: the student connects a new word with a similar-sounding word in his own language and forms a mental image linking the two. These and other memory techniques are often based on imagery.

Associationism

From the seventeenth to the nineteenth century, the dominant approach to cognition was associationism. Empiricist philosophers such as Locke and Hume believed that all knowledge came from experience and that ideas and memories were connected through associations. For example, associations can be formed when two events follow each other closely in time, or when two objects often occur close together.

Introspectionism

In the second half of the nineteenth century, Wundt tried to break up normal perceptions (for example, of a table) into simpler sensations (for example, brown, straight lines, textures). The method used for this was introspection, or self-observation, in which participants gave verbal reports of their sensations. Introspection required a lot of training, could not be mastered by everyone, and applied only to certain mental processes. Moreover, the introspection itself could influence the cognitive process being studied.

Behaviorism

Partly in response to the shortcomings of introspectionism, Watson (1913) and Thorndike (1898) developed behaviorism. This paradigm admitted only observable behavior and stimuli as data, excluding internal cognitive processes (and methods such as introspection). The main goals of behaviorism were the prediction and control of behavior. Watson suggested that all apparently mental phenomena could be reduced to behavioral activity. Other behaviorists, such as Tolman, were less extreme about the status of mental activity; he argued that experimental animals could indeed have goals, mental representations and mental maps (mental representations of a spatial layout). Tolman did extensive research on mental or cognitive maps based on the behavior of rats in mazes. From his research comes, for example, the concept of latent learning: a situation in which learning does take place but is not immediately expressed in behavior.

Although behaviorism had many successes with simple learning in animals, it was less applicable to complex mental phenomena such as reasoning, problem solving and language. Through the research of Tolman and of Macfarlane (1930), support grew for the existence of abstract mental representations.

Information processing: the cognitive revolution

The information processing approach made mental representations popular again and was inspired by the programming of computers. On this view, computer programs that solve certain problems can be seen as analogous to the strategies people use to solve problems. Those strategies consist of fixed steps, decision-making, storing information and retrieving old information. A program that simulates a model of human thinking is called a simulation program. The information processing approach has been the dominant approach in cognitive psychology since 1960. Researchers try to explain performance through internal representations, which are transformed by inner actions called mental operations. Information processing theories are often represented by diagrams that show the flow of information and operations.

Some information processing models used computer models to simulate human thinking. Examples are Newell's (1985) General Problem Solver and Anderson's (2004) ACT-R model. An alternative way of modeling information processing is connectionism. Connectionist models simulate simple learning and perceptual phenomena through a large network of simple units, organized into input, output and internal units. The units are connected to each other by links of varying strength. This strength is adjusted by means of learning rules, such as backward propagation (backpropagation), in which link strengths are adjusted on the basis of detected errors.
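As a rough illustration of these ideas, the following sketch implements a tiny connectionist network in Python: input, internal ("hidden") and output units joined by weighted links, trained with backpropagation on the XOR problem. The architecture, learning rate and task are illustrative assumptions, not taken from the book.

    import numpy as np

    rng = np.random.default_rng(0)

    # Input patterns and desired outputs for the XOR problem
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4))   # input -> internal link strengths
    b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1))   # internal -> output link strengths
    b2 = np.zeros(1)

    def sigmoid(a):
        return 1 / (1 + np.exp(-a))

    lr = 0.5
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)    # internal unit activations
        out = sigmoid(h @ W2 + b2)  # output unit activations
        err = out - y               # detected error at the output units
        # Backpropagation: pass the error backwards and adjust link strengths
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]

After training, no single unit or link "stores" the rule; the mapping is distributed over all the link strengths, which is characteristic of connectionist representation.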

Questions about mental strategies, information processing and storage concern the functional characteristics of the brain. These kinds of questions can be answered without considering the underlying hardware of the brain. According to functionalism, the nature of the brain and the details of neural processes are not relevant to cognitive psychology. Nowadays, however, more and more researchers are working on the neuroscientific side of cognitive psychology.

Cognitive neuroscience

The brain

The brain is the central part of the nervous system and is strongly structured and subdivided. First, it can be divided into the left and right hemispheres, which are connected by the corpus callosum, a thick band of nerve fibers. Both hemispheres are subdivided into frontal, parietal, occipital and temporal lobes. Deeper in the brain there are structures such as the thalamus, the hippocampus and the amygdala. To indicate locations in the brain, the following terms are often used: dorsal (toward the top), ventral (toward the bottom), anterior (at the front), lateral (at the side) and medial (in the middle). All the structures of the brain consist of neurons: specialized cells that exchange information by transmitting electrical impulses.

Cognitive neuropsychology

Cognitive neuropsychology investigates the effects of brain damage on behavior with the aim of finding out how psychological functions are organized. This research area goes back to Broca's study (1861), which found that a patient had severe impairment of his speech after damage to a small brain area. This brain area, which is necessary for speech production, is now called Broca's area. This is a striking example of the neuropsychological approach, in which functions are linked to the healthy working of specific brain areas. A precursor of neuropsychology was the now extinct phrenology: the idea that brain functions can be read from the bumps on the skull.

The idea of modularity suggests that cognition consists of a large number of independent processing units that work separately and apply to relatively specific domains. The opposite idea is that mental functions are not localized but distributed over the brain. Nowadays, the idea of localization is seen as very useful and is the subject of much neuropsychological research. Cases that are especially interesting for neuropsychologists are those of double dissociation, in which patients with different types of brain damage show opposite patterns of impairment across two tasks.

Brain scans

There are two types of brain scans: structural imaging, in which the static anatomy of the brain is shown, and functional imaging, in which brain activity is represented over time. Nowadays, the dominant method in structural imaging is magnetic resonance imaging (MRI), which uses radio waves and a strong magnetic field around the participant. In functional imaging, electroencephalography (EEG) is a dominant method, in which electrical activity summed over wide cortical areas is recorded by electrodes on the scalp. A functional method that produces a more localized image is positron emission tomography (PET). In this method, a radioactive substance is injected into the blood, after which the blood supply to different parts of the brain is measured. Increased blood flow is then interpreted as increased activity in that brain area.

Nowadays, the most used functional method is functional magnetic resonance imaging (fMRI), in which the oxygen level of the blood is measured. This method has good spatial resolution, although the sluggishness of the blood-flow response limits its temporal resolution. A disadvantage of fMRI is the complexity of interpreting the data. It has also been suggested that the reliability of repeated scans is low. Moreover, the statistical procedures that are often used can make findings appear more significant than they are. Finally, the circumstances in which an fMRI scan is taken (lying completely still inside a scanner) are very specific and unusual.

Brain scans and cognitive processes

Despite the disadvantages, fMRI is widely used. A frequently used way to connect cognitive processes with the outcomes of brain scans is reverse inference. An example runs: 'if cognitive function F1 is engaged by a task, then brain area Y is active'; 'brain area Y is active during task B'; therefore 'task B involves function F1'. Although arguments of this kind are not conclusive, they are used to generate plausible hypotheses for later research.
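A small Python sketch of why such arguments are not conclusive: by Bayes' rule, the strength of a reverse inference also depends on how selectively area Y responds to F1 and on how often F1 is engaged at all. All numbers below are invented for illustration.

    # P(area Y active | F1 engaged): Y is usually active when F1 is engaged
    p_active_given_f1 = 0.9
    # ...but Y is also active in many tasks that do not involve F1
    p_active_given_not_f1 = 0.3
    # Prior: how often tasks engage F1 at all
    p_f1 = 0.2

    p_active = p_active_given_f1 * p_f1 + p_active_given_not_f1 * (1 - p_f1)
    p_f1_given_active = p_active_given_f1 * p_f1 / p_active
    print(round(p_f1_given_active, 2))  # ~0.43: activity in Y is only weak evidence for F1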

Networking

It may be useful to look at brain activity in terms of networks rather than highly localized areas. Research has shown that a large number of brain areas are active at rest, some of which are deactivated when a task is performed. From this a Default Mode Network was inferred, thought to reflect internally directed activities such as daydreaming, imagining the future and recalling memories.

What are the principles of perception? - Chapter 2

Perception is the set of processes that organize sensory experiences into an understanding of the world around us. Perception lies on a continuum between sensation, the processes by which physical properties are converted into neural signals, and cognition, the use of mental representations to reason and to plan behavior. Perceptual information can come in various forms, such as vision, sound and somatic perception (through touch and the sense of the orientation of our body parts).

The physical world

Because human sensory organs are limited, they can never process enough information to describe the physical world completely. In addition, there is the inverse problem, which indicates that information is fundamentally lost in the sensory encoding of the physical world. This happens, for example, when the three-dimensional physical world is projected as two-dimensional images on our eyes.

Principles and theories of perception

Bottom-up and top-down processing

An important distinction in perceptual processing is that between bottom-up processing, in which processing starts from the raw sensory input and gradually transforms it into the final representation, and top-down processing, in which connections and feedback between higher and lower levels are crucial and higher-level information guides the transformations. Although it remains an open question which mode of processing we rely on more, it seems clear that there is often an interplay between the two.

The probability principle

The probability principle states that the probability that an object or event occurs is important for the perceptual processing of that object or event. This idea is linked to Bayesian decision theory, in which three components bear on the question: which event is most likely responsible for my perception? The first component is the likelihood, which captures all the uncertainty in the image (in the case of vision). The second component is the prior, or all information about the scene available before you have seen it. The third component is the decision rule, for example choosing the most likely interpretation or sampling an interpretation at random.
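To make the three components concrete, here is a minimal Python sketch with invented numbers: an ambiguously shaded patch could be a convex bump or a concave dent, the prior encodes the common assumption that light comes from above, and the decision rule picks the most probable (MAP) interpretation.

    hypotheses = ["convex bump", "concave dent"]
    likelihood = {"convex bump": 0.5, "concave dent": 0.5}  # the image itself is ambiguous
    prior = {"convex bump": 0.7, "concave dent": 0.3}       # light usually comes from above

    posterior = {h: likelihood[h] * prior[h] for h in hypotheses}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

    # Decision rule: choose the maximum a posteriori (MAP) interpretation
    print(max(posterior, key=posterior.get))  # -> "convex bump"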

Information processing approach

According to ecological psychology, perception largely works in a bottom-up manner by using regularities in the visual input called invariants. Those are properties of the three-dimensional object being viewed that can be derived from any two-dimensional image of the object. By discovering these invariants we could understand how direct perception works, that is, the bottom-up process in which objects and their functions are recognized. Marr (1982), however, doubted how direct this process is and suggested that information processing should be understood at three levels. The first level is the computational theory: specifying the purpose of a computation. For sight, hearing and touch this is keeping us aware of the external world and helping us adapt to its changes. The second level is the choice of representation for the input and output, and of the algorithm that achieves the transformation between them (for example, the transformation from air pressure to pitch and loudness). The third level is the implementation of the computations, with emphasis on the actual way in which they are physically realized (machine, human, animal, etc.) and the constraints of that organism or machine.

The body and perception

The embodied approach to cognition states that, in order to understand a cognitive system, we have to take the system as it is embedded in its environment as the unit of analysis. This approach is associated with the following claims, all of which are under discussion: cognition is situated in the physical world; cognition operates under time pressure; we use the environment to reduce our cognitive workload; the environment is part of the cognitive system; and cognition must be seen in terms of how it contributes to action.

Human perception systems

Visual system

The encoding of visual information begins in the retinas and is then transferred to the primary visual cortex. Cones are specialized neurons in the retina that are sensitive to colored light and resolve fine image detail. Rods are specialized neurons, concentrated in the periphery of the retina, that are effective in low light and at detecting movement. The right visual field ends up in the left half of the primary visual cortex in the brain, and the left visual field ends up in the right half of the primary visual cortex.

There are two primary pathways for visual processing that lead from the primary visual cortex in the occipital lobe onward. The ventral stream leads to the temporal lobe and specializes in determining which objects are in the visual world. The dorsal stream leads to the parietal cortex and specializes in determining where objects are in the visual world. However, there is much debate about the extent to which these two streams are independent of each other. Research on this subject is contradictory: on the one hand, brain damage in specific areas yields specific visual impairments; on the other hand, there are also perceptual features that are not so precisely localized in the brain.

Auditory system

The encoding of auditory information starts with a special structure in the ear called the cochlea and is then transmitted to the primary auditory cortex in the brain. In the cochlea lies the basilar membrane, a strip of tissue with hair cells that move in response to sound pressure. This vibration is subsequently converted into a nerve signal. The basilar membrane encodes pitch by means of place coding: different locations along the membrane respond to different frequencies. The tonotopic map is the orderly arrangement in the cortex of the processing of different tones. In addition to the place code on the basilar membrane, firing rates in the auditory nerve provide a cue to pitch. The secondary auditory cortex is important for speech perception and timing patterns.

Damage to the auditory cortex and the area around it can cause various impairments, such as aphasia (the inability to use language) and amusia (tone deafness).

Somatoperception system

The somatoperception system is a combination of several systems: proprioception and vestibular sensation, which give us a sense of the position of our limbs in relation to our body and to space, and the sense of touch. The processing of touch begins with receptors in the skin, from which pathways lead to neurons in the brain. These pathways end in the primary somatosensory cortex, which is located next to the central sulcus (the border between the parietal and frontal cortex). The organization of this area is somatotopic, with local regions of the cortex dedicated to specific body parts. Furthermore, the area can be divided on the basis of specialization into Brodmann areas. Damage to the somatosensory cortex can lead to loss of proprioception and of fine touch.

Multisensory integration

Several theoretical explanations have been proposed for how the perceptual system combines information from the different senses. The modality-appropriate hypothesis states that the sensory modality with the higher accuracy for a given physical property of the environment will always dominate the bimodal estimate of that property. For example, vision is dominant in spatial tasks. The maximum likelihood estimation theory, by contrast, states that more reliable perceptual information is weighted more heavily than less reliable perceptual information. This last theory has not been studied much.
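A minimal Python sketch of the maximum likelihood idea, with invented numbers: each cue is weighted by its reliability (the inverse of its variance), and the combined estimate is more reliable than either cue alone.

    def combine(estimates, variances):
        weights = [1 / v for v in variances]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimate = sum(w * e for w, e in zip(weights, estimates))
        variance = 1 / total  # lower than either cue's variance
        return estimate, variance

    # Vision says the object is at 10 cm (reliable, variance 1);
    # touch says 14 cm (unreliable, variance 4): vision dominates.
    print(combine([10.0, 14.0], [1.0, 4.0]))  # -> roughly (10.8, 0.8)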

Recognition

The simplest approach to how recognition works in humans is that we compare a representation of an object with a stored inner representation. However, this is not a complete explanation: we can evaluate and recognize quite different perceptual inputs as the same thing. This is an essential feature of our recognition system: it represents information in such a way that the essence of an object is retained across different transformations. Different theories have been proposed to explain this. Feature analysis proposes that we decompose an object into components, with each object having a unique feature list. The pandemonium model is a hierarchical model of recognition in which features are detected and combined at successive levels. Prototype theory holds that categorization centers on which member of a category is the best example of that category. For example, a robin is a more typical example of the category 'bird' than a penguin. The categorization process aims to set up maximally informative and distinctive categories.

Visual object recognition

As described earlier, the main problem in visual object recognition is that three-dimensional objects are processed as two-dimensional projections on the retina. A viewpoint-invariant property is any aspect of an object that remains the same regardless of the angle from which we look at it. This concept was elaborated in the recognition-by-components (RBC) approach as geons: elements of a set of volumetric primitives or forms that can be recognized from any point of view.

The multiple views theory emerged as a counter-movement to the RBC approach and states that recognition is fundamentally image-based. For each object, a number of views would be stored, and intermediate representations would link novel views to the stored ones through certain mechanisms.

Somatoperceptual object recognition

The somatoperception system is also used to recognize objects. Haptic perception is the set of abilities that enables us to represent the material characteristics of objects and surfaces for recognition. Research on haptic recognition uses exploratory procedures to determine how different kinds of contact between the hand and the object serve different functions in recognizing texture, hardness, temperature, weight, and so on.

Visual agnosia and prosopagnosia

Visual agnosia results from lesions in the inferior region of the temporal cortex and is a condition in which people are not blind but cannot give meaning to what they perceive. Prosopagnosia is a special form of this in which only the recognition of faces is severely impaired.

Recognition of scenes and events

Recognizing a scene requires not only the perception of the environment and of individual objects, but also the perception of all the objects together. Research shows that people are very good at rapidly processing visual scenes. To observe a scene well, the eyes have to make many movements. An important question is therefore how those eye movements are directed. According to the bottom-up explanation, eye movements are driven by image properties such as brightness, color or shape. According to the top-down explanation, eye movements are guided by our goals and expectations. In addition to sight, auditory perception also provides information about a scene.

For the recognition of events, which involves factors such as movement and sequence, schemas are important: frameworks that represent a plan or theory and support the organization of knowledge. The schemas create expectations about how the event will unfold and are adjusted if the event does not match them.

Social perception

Although faces are essential in social recognition, their appearance changes constantly (e.g. with lighting, pose, make-up, health and expression). Yet people can recognize the faces of others very accurately. This is especially the case for familiar faces. Unfamiliar faces, as in eyewitness testimony, are sometimes recognized very poorly. The Bruce and Young model states that the following processes are essential in face recognition: recognition (I know this person), identification and the analysis of emotional expression. The neural model of Haxby et al. (2000) suggests that face recognition is based on different areas in the brain, with a distinction between invariant and changeable aspects of faces. An important part of this model is the Fusiform Face Area (FFA), which is thought to be used selectively for recognizing faces.

In addition to faces, voices are also important for social recognition. Voices transfer information regardless of their linguistic content. The emotional content of an utterance, for example, is recognized from its prosody: the rhythm, intonation and stress patterns of speech. Because each individual's voice is unique, owing to the size and shape of the vocal tract and articulators, the voice is an important source of identity information.

Finally, it appears that people can derive a lot of information from movement, such as identity, emotion, gender and the action taken.

What are the processes of attention and awareness? - Chapter 3

There are different processes of attention and awareness, and these are interrelated.

Attention

Attention is a limited resource that is used to facilitate the processing of important information. Attention is necessary because there is a lot more information around us than we could handle. Attention helps us carry out a task and select relevant information.

Taxonomy of attention research

External attention refers to selecting and controlling incoming sensory information. Internal attention refers to the selection of control strategies and the maintenance of internally generated information such as thoughts, goals and motivations.

The attention system of the human brain

The attention system model of the human brain distinguishes three separate systems, for alerting, orienting and executive functions. The alerting system consists of brain areas that are responsible for achieving and maintaining a state of arousal. The orienting system includes the frontal eye fields and related areas in the frontal and parietal cortex that are involved in rapid, strategic control of attention. The alerting system is the 'on' button for our behavior when an event takes place, while the orienting system and executive functions are important for organizing our behavior in response to what happens.

Early theories of attention

The cocktail party problem describes how we can successfully focus on one speaker against a background of noise and other conversations. Two important explanations, both studied with dichotic listening, have been suggested for this capacity: filter theory and resource theory.

Filter theory derives from experiments with dichotic listening, which showed that people could ignore a message in one ear while concentrating on a message in the other ear. One version of this theory assumes early selection: the idea that only one signal is let through at an early stage and other information is rejected before full processing. Late selection is the idea that all stimuli are identified, but only those receiving attention gain access to further processing. Within filter theory, the debate between early and late selection was never definitively settled.

The resource theory also states that attention is limited, but instead of attributing this limitation to the information capacity of a single central channel, attention is seen as a limited resource that must be appropriately distributed. Attention can then be seen on the one hand as a 'spotlight' that illuminates interesting locations for us, or as a 'zoom lens' that determines how much of a scene is covered at a certain time. However, there is evidence that attention does not necessarily focus on a location, such as a spotlight, but rather on a certain object (or parts of it).

The dual-task paradigm, in which participants' performance on two tasks carried out simultaneously is compared with performance on each task alone, shows that attention is indeed a limited resource. Once the limit is reached, attention must be divided among the tasks and interference occurs. Since certain combinations of tasks (e.g. two visual tasks) yield more interference than other combinations (e.g. a visual and an auditory task), this also shows that there is not just one central pool of attention.

An important criticism of resource theory is the question of how our attention system 'knows' which events in the environment are important enough to focus on. There is also evidence suggesting that people sometimes do not divide their attention, but switch between tasks very quickly.

Attention mechanisms in perception and memory

Neural mechanism of attention in the primary visual cortex

Several neural models have been proposed to explain how attention can selectively increase the visual response of neurons. The normalization model of attention is a recent theory that unites previous theories. According to this model, attention has two functions: increasing sensitivity to weak stimuli when they are presented alone, and reducing the impact of task-irrelevant distractors when multiple stimuli are presented. In dividing attention between these two functions, normalization plays a role: the original input is rescaled on the basis of the surrounding context.
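The sketch below gives a deliberately simplified, one-dimensional Python illustration of divisive normalization with an attention field; the Gaussian profiles, the attention gain and the semi-saturation constant are invented parameters, not the model's published values.

    import numpy as np

    positions = np.linspace(-10, 10, 201)

    def gaussian(x, mu, sigma):
        return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    # Stimulus drive: population response to a stimulus at position 0
    stimulus_drive = gaussian(positions, 0.0, 1.5)

    # Attention field: multiplicative gain centered on the attended position
    attention_field = 1.0 + 2.0 * gaussian(positions, 0.0, 3.0)

    excitatory = stimulus_drive * attention_field

    # Suppressive drive: the attended drive pooled broadly over nearby positions
    suppressive = np.convolve(excitatory, gaussian(positions, 0.0, 5.0), mode="same")
    suppressive /= suppressive.max()

    sigma_const = 0.1  # semi-saturation constant; dominates for weak stimuli
    response = excitatory / (suppressive + sigma_const)

When stimuli are weak, the constant dominates the denominator and attention mainly raises sensitivity; when the display is strong and crowded, the suppressive term dominates and attention mainly reduces the influence of distractors.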

Attention and working memory

The working memory is a central cognitive mechanism linked to separate storage locations for visual-spatial and phonological information. It serves as an interface between perceptual input and internal representations. Research shows that there is a close relationship between attention and working memory, although the exact nature of this interaction is still under discussion.

Paradigms for studying attention

Within attention research two broad trends can be observed: the emphasis on vision as the primary modality for exploring models of attention, and the development of experimental paradigms such as visual search, dual-task interference, inhibition of return and the attentional blink.

Visual search

This research focuses on the problem of how we use attention to find a target in a visual environment. An important theory here is feature integration theory (FIT), in which recognition of a target is determined by two processes. The first process is pre-attentive and can analyze an entire scene simultaneously and detect unique features. The second process combines the individual features. The latter relates to the binding problem: the question of how features that we know are processed separately come to be experienced as one whole.
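FIT makes a testable prediction about search times: pre-attentive "pop-out" search should be roughly independent of the number of distractors, while searching for a conjunction of features proceeds serially and slows down as the display grows. The Python toy model below illustrates this pattern; the base time and per-item cost are invented parameters.

    def predicted_rt(set_size, search_type, base=400.0, per_item=30.0):
        # Toy response time (ms) predicted for a visual search display
        if search_type == "feature":      # parallel, pre-attentive stage
            return base                   # flat: independent of set size
        if search_type == "conjunction":  # serial binding of features
            # on average half the items are inspected before the target is found
            return base + per_item * set_size / 2
        raise ValueError(search_type)

    for n in (4, 8, 16):
        print(n, predicted_rt(n, "feature"), predicted_rt(n, "conjunction"))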

Another theory in this area is guided search, which posits a non-selective pathway that analyzes global aspects of the visual input in order to guide attention. This is done by means of divided attention, a process similar to the pre-attentive process described above.

Inhibition of return

This refers to the phenomenon that after visual attention has been directed to a location in the visual field and has subsequently shifted away, responses to events at that location are delayed. This mechanism encourages us to investigate new locations rather than previously inspected ones. Inhibition of return also allows us to ignore striking but irrelevant parts of an image and focus attention on less conspicuous parts.

Attentional blink

Attentional blink refers to the phenomenon that when we view a succession of rapidly presented visual stimuli, the second of two targets cannot be identified if it is presented very shortly after the first.

Change blindness and inattentional blindness

Change blindness is the phenomenon that substantial differences between two almost identical scenes are not noticed when the scenes are presented in succession, for example when large differences between two successive shots in a film escape us.

Inattentional blindness: This is the phenomenon that we can look straight at a stimulus yet fail to perceive it when our attention is not focused on it, as when viewers who are counting basketball passes fail to notice a person in a gorilla suit walking through the scene.

Awareness

Subliminal perception refers to the circumstance in which a stimulus is presented below the perception threshold but still influences behavior. This is one of the topics of consciousness research, although there is still no unambiguous definition of consciousness. An important reason for this is that consciousness is primarily a subjective, first-person experience of one's 'own existence'.

Functions of consciousness

There are two general positions on the function of consciousness. Conscious inessentialism states that consciousness is not essential for any of the actions we perform. Epiphenomenalism does not deny that consciousness exists, but states that it has no causal function: conscious experience is a by-product of brain activity.

At first glance, there seems to be a logical link between consciousness and free will. However, research shows that unconscious preparation for an action precedes conscious awareness of the intention to act. This seems to contradict that assumption, leading some to conclude that our sense of free will is merely an illusion.

A proposed function of consciousness is that it provides us with a summary of our current situation that integrates all incoming information. The global workspace theory suggests something similar, namely that consciousness facilitates flexible, context-dependent behavior. It has also been suggested that consciousness is an important mechanism for understanding the mental state of others, since it gives us an insight into our own reasoning and decision making.

Attention and awareness

Attention and awareness have many similarities. How can we distinguish between the two, if such a distinction exists at all? Lamme (2003) suggested that attention does not determine whether an input reaches consciousness, but rather whether a conscious report on the input is possible. Important here is the distinction between phenomenal awareness (experience in itself) and access awareness (what we intuitively think of as consciousness, and what is available for reporting). According to this model, we are phenomenally aware of many things, but in the absence of attention these experiences quickly evaporate.

The link between consciousness and brain activity

Nowadays there is growing interest in relating aspects of consciousness to brain functioning. Research on split-brain patients, in whom the two hemispheres have been surgically separated, suggested that consciousness is in a certain way divided over the hemispheres. Research on patients with blindsight (in whom brain activity shows that they register an object, yet they cannot report it and therefore do not perceive it 'consciously') has led to questions about the nature of consciousness.

The neural correlates of consciousness (NCC) approach investigates how brain activity changes when a stimulus is or is not consciously perceived. The goal of this approach is to find the minimal neuronal mechanisms that are sufficient for conscious perception. For example, research on binocular rivalry (the presentation of a different image to each eye, where only one of the images is perceived at a time) shows that activity in the primary visual cortex is necessary but not sufficient for consciousness.

What different parts and functions does the memory have? - Chapter 4

Memory has various functions: encoding (getting information into a storable form), storage and retrieval. Traditionally, a distinction is made between long-term memory, which stores long-lasting memories and information about how to perform skills, and short-term memory, which can store a small amount of information briefly. The term working memory overlaps considerably with short-term memory; this is the part of memory that enables us, for example, to manipulate active information or perform a calculation.

Sensory memory

According to the traditional approach, sensory memory is a sensory register that briefly prolongs the input of the senses so that relevant parts of it can be processed. Sensory memory consists of different parts.

Iconic memory

Iconic memory is the sensory store for visual stimuli: things we have just seen are briefly prolonged and remain accessible to us. Research shows that such images are preserved for about half a second.

Echoic memory

Echoic memory is the auditory equivalent of iconic memory. Here too, a large amount of auditory information can be stored for a very short time. Research uses, among other methods, the shadowing technique, in which participants repeat a message presented to one ear as they hear it. Performance suggests that echoic information is lost after about four seconds. Various factors can also impair performance. In backward masking, a masking stimulus is presented close to or immediately after the target stimulus, which sharply reduces performance.

The haptic memory has not been adequately studied yet, but there is evidence that a sensory memory for sense of touch exists.

Short-term memory

Short-term memory (STM) keeps active information in consciousness for a short time. The information is very fragile and is quickly lost. According to the Atkinson-Shiffrin model, information is first held in sensory storage and then transferred to the STM. Whether the information is ultimately stored in long-term memory (LTM) depends on a number of factors. Rehearsal, and especially elaborative rehearsal (in which the information is actively processed), promotes transfer to the LTM. Decay (loss of information from the STM over time) and displacement (loss of information from the STM because new information comes in) impede transfer to the LTM.

The model thus states that the STM has a limited capacity. The digit span is a method of measuring the capacity of the STM by having people reproduce ever longer series of digits. Most people have a capacity of around seven items (plus or minus two). Chunking is a strategy to increase capacity by grouping small units of information together: for example, by remembering 147 as one number instead of as 1, 4 and 7 separately.

Free recall experiments (in which items may be recalled in any order, without cues) show that people remember items at the beginning (primacy effect) and at the end (recency effect) of a list better than items in the middle. This is probably because early items receive more rehearsal and are therefore more often stored in the LTM, there is less rehearsal time for items in the middle, and the last items are still held in the STM. Tasks with multiple lists also show the negative recency effect, in which items from the end of each list are later remembered worse, because they were held in the STM but never stored in the LTM. The experiments mentioned above all provide evidence for the existence of separate short-term and long-term memories. Moreover, this idea is supported by cases of double dissociation of function, in which patients with different types of brain damage show deficits in either the LTM or the STM. It does appear, however, that the LTM depends on processes in the STM.

Working memory

Research shows that other subsystems may underlie performance on tasks such as digit span and the recency effect. The working memory (WM) can be seen as the 'workplace' of the human brain. Precise definitions of the WM and of its relationship to the STM and the LTM differ between researchers.

In Cowan's embedded-processes model, the WM is seen as a limited-capacity focus of attention together with a temporarily activated subset of the LTM. This approach emphasizes the interaction between attention and memory and considers WM in the light of the LTM. Multiple-component models, on the other hand, suggest that WM can be subdivided into components whose primary function is coordinating resources, and research in this tradition aims to identify and examine the structures that serve this function.

Baddeley's working memory model

According to Baddeley's working memory model, WM is not just a repository for retaining information in consciousness, but also plays an important role in processing. The model distinguishes four components of WM, which are described below.

Phonological loop

The phonological loop is the component of WM that provides temporary storage and manipulation of phonological information. It has two subcomponents: the phonological store, where speech-based information is held for 2-3 seconds, and the articulatory control process, which refreshes information by subvocal rehearsal. The number of words that can be stored depends on the articulation time of the words (and not on the number of syllables), so that speakers of some languages can store more words than speakers of others. If someone is asked to repeat something other than the relevant information (articulatory suppression), the ability to rehearse subvocally is disrupted. Retaining the relevant information is also harder if irrelevant speech is present during learning or if the words to be remembered sound very similar.

Visuo-spatial sketchpad

In Baddeley's model, the visuo-spatial sketchpad is responsible for maintaining and manipulating visual and spatial information. This component in turn has two parts: the visual cache (the store for visual information) and the inner scribe (which allows spatial processing). Research indeed supports the claim that the components for spatial and visual information, although strongly connected, are separate.

Central executive

The central executive in Baddeley's model is the most important part of WM; it is a coordinating system that regulates the functions and components described above. According to the supervisory attentional system (SAS) model, there are two types of cognitive control: automatic processes (for routine and well-trained tasks) and a process that can interrupt automatic processing and select an alternative schema. Research indeed supports the existence of two such separate control systems. For example, people often make capture errors: they fail to deviate from routine actions when they intend to. In addition, people with the dysexecutive syndrome are able to perform well-trained routine tasks but fail to learn new things or to deviate from the established order. Cases of utilization behavior involve patients who exhibit spontaneous, uncontrollable actions or compulsive interaction with objects.

Episodic buffer

The episodic buffer is a later addition to Baddeley's model. In the earlier model, the central executive had no storage capacity of its own beyond its interaction with the subsystems, so an explanation was needed for the apparent interaction of the WM with the LTM and for the occasions on which the WM exhibits a much larger storage capacity. The episodic buffer is accessible to the central executive and the subsystems and is connected to the LTM. It is a temporary storage structure with limited capacity that allows the integration of modality-specific information.

What are the functions and structure of the long-term memory? - Chapter 5

The long-term memory acts as a repository for all the memories that we possess. It consists of two components, the non-declarative and the declarative memory.

Memory and amnesia

The amnestic syndrome is a permanent and pervasive memory disorder that affects many memory functions. It involves both anterograde amnesia, i.e. impaired memory for events after the onset of the disorder, and retrograde amnesia, loss of memories from before the onset of the disorder. Possible causes of the amnestic syndrome are brain surgery, infections, head injury, or disorders such as Korsakoff's syndrome. In many patients with amnesia, linguistic ability and knowledge of concepts are intact. Because this knowledge is often retained while other types of memories are lost, a number of models have been developed to explain this.

The structure of the long-term memory

The long-term memory (LTM) is a repository for all the memories we have. The multiple memory systems model states that the LTM consists of several components that are responsible for different types of memories. The non-declarative or implicit memory refers to memories that we do not consciously call up, such as how to drive a car. The declarative or explicit memory refers to conscious memories of events, facts, people and places. Memory tests that use methods such as free recall (e.g. 'What is the capital of France?'), cued recall ('Which word starts with a P and is the capital of France?') and recognition ('Is Paris the capital of France?') mainly appeal to the declarative memory. A disturbance of the declarative memory often occurs in patients with amnesia.

Tulving (1972) proposed a three-part model of the LTM. Within declarative memory he drew a further distinction between episodic memory, or memory for events and experiences, and semantic memory, or memory for facts and knowledge about the world. Not everyone agrees with this distinction. It is not always clear whether a memory belongs to episodic or to semantic memory, for example in the case of autobiographical memories.

Non-declarative (or implicit) memory

Skill learning

The non-declarative memory plays a role in many different tasks, such as classical conditioning, motor skills and priming. Procedural memory, such as knowing how to drive a car, tie your shoelaces or write your signature, is an example of non-declarative memory. Such knowledge is acquired through practice and after a while often becomes automatic and unconscious.

Habit learning

The learning of habits takes place over time by means of repeated associations between stimuli and responses. As it is often difficult to examine learned habits without the influence of declarative memories, this type of research often uses learning through probabilistic classification. Participants learn associations that are not obvious and cannot readily be 'remembered'; learning is instead based on experience accumulated over many trials.

Repetition priming

Priming refers to an implicit memory effect in which exposure to a stimulus influences a later response. In a typical priming experiment, participants see a list of uncommon words. Later, they see words with missing letters, which they have to complete to form existing words. This kind of research shows that participants are primed by the previously displayed word list: they complete those words more easily than words that were not on the list. Similar experiments have been used to show, for example, that repetition priming is intact in amnesia patients despite the absence of declarative memory. These findings support the assumption that there is a distinction between declarative and non-declarative memory.

Declarative (or explicit) memory

Episodic memory

The episodic memory within the LTM is the system that enables us to remember previous experiences and consciously re-experience them. There are three important features of episodic memory. Firstly, it is a form of mental time travel; secondly, it has a connection to the self; and thirdly, mental time travel is associated with autonoetic consciousness. This type of consciousness allows us to imagine ourselves in the future, plan ahead and set goals. Episodic memory is severely limited in people with amnesia.

It is important to remember that memories in episodic memory are not an exact replica of the actual event; memories are constructive and often supplemented by us with other information. Bartlett (1932) emphasized the role of schemas in remembering events. Those are organized knowledge structures that allow us to apply experience to new situations. The schemas create expectations and can be used (unconsciously) to fill in missing information in memories. For example, research shows that participants who have to retell stories they have heard supplement them with their own knowledge of familiar events and stories. In this way memories can become distorted.

Prospective memory

An important function of episodic memory is the ability to use memories to influence future behavior. Memory that allows us to keep track of plans and perform intended actions, or 'remembering to remember', is called prospective memory. Failures often occur when a routine must be interrupted, for example when you drive straight home (as always) instead of making a detour to post a letter (breaking the routine). Ellis (1988) distinguished between pulses, intentions that are bound to a specific time, and steps, intentions that can be carried out within a larger time frame.

Autobiographical memory

Autobiographical memories are episodic memories of events that one has personally experienced during one's life. They are strongly associated with the self and can be seen as our conscious life history. These kinds of memories are susceptible to distortion by later events and by one's self-image. Much research has been done on such false memories (inaccurate memories of events that happened differently or did not happen at all). It turns out that false memories persist, sometimes even after someone has been told that the memory is not right. They can be fostered by imagination inflation: the strengthening of a false memory through repeatedly imagining or recalling it.

A déjà vu is an illusion of autobiographical memory and can be described as 'knowing that a situation cannot have been experienced before, combined with the feeling that it has been'. It is a very common experience for which no established explanation yet exists.

Semantic memory

Semantic memory is the store of general knowledge about the world, the people in it and facts about ourselves. People who live in the same culture often share a large part of their semantic memory. Meta-memory refers to the ability to inspect and control the contents of our memory, or to 'know whether we know something'. In amnesia, much of semantic memory, such as language and concepts, is retained. Whether other knowledge (such as things learned at school) is also lost in amnesia is under discussion. Research provides some evidence for the existence of a permastore: the very long-term preservation of content that has been acquired and relearned over and over again, even if it is rarely used afterwards. This may also hold for personal semantic memory.

It is clear that a general distinction can be made between declarative and non-declarative memory. Within the declarative memory one can distinguish between semantic and episodic memory. However, the categories show a lot of overlap in certain cases. The degree of separation or overlap is subject to further research.

How does one learn and forget? - Chapter 6

There are different ways and methods for learning and different causes for forgetting.

Learning

The first step in learning new information is to encode that information into an internal representation in the working memory. That representation must then be processed into a memory trace: a mental representation of stored information.

Levels-of-processing theory

According to the levels-of-processing theory, superficial encoding leads to weak retention and deep encoding to improved retention and memory. Learning does not have to be intentional; incidental learning (without the intention to learn) can also work well. The difficulty with this theory, however, is that there is no objective measure of depth of processing.

Memory strategies

There are different memory strategies that improve memory performance. Categorization is a strategy in which items are sorted into known categories, leading to better memory. The method of loci is a strategy in which a familiar route is imagined and images of the items to be remembered are linked to known places along the route. Interacting images is a strategy in which vivid and bizarre images of the items to be remembered are formed and made to interact in some way.

The effectiveness of this type of method can be explained on the basis of the dual coding hypothesis. This theory states that the meaning of concrete words can be represented both verbally and visually. Since abstract words can only be represented verbally, they are more difficult to remember.

Encoding specificity

The principle of encoding specificity states that if the context of retrieval is similar to the context of encoding, the memory will work better. For example, if words are presented in capitals during learning, they are better recognized when retrieving if they are presented in upper case than in lower case.

Context-dependent retrieval

Research into context effects shows that memory works better if the external environment during testing is the same as the environment during learning. For example, words are remembered better when people learn them under water and are also tested under water than when the words are tested on land. Similar results have been found for matching internal physiological states or moods at learning and retrieval.

The spacing effect

The spacing effect refers to the phenomenon that material learned in sessions spread out over time is remembered better than material learned in one continuous session. 'Cramming' the night before an exam is therefore less effective than learning the material in smaller portions over a longer period. There are several possible explanations for this effect. For example, there may be more variability in how the material is presented across spread-out sessions, so that more retrieval cues become linked to the learned material.

Forgetting

We speak of forgetting when someone cannot retrieve information that was previously available to memory. Forgetting has been studied systematically since Ebbinghaus, who devised a method using nonsense syllables (e.g. FEC, DUV). The results were expressed in terms of savings: the reduction in the learning effort needed when material is learned again in repeated sessions.
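A small Python sketch of the savings measure as it is usually computed; the trial counts are invented.

    def savings(original_trials: int, relearning_trials: int) -> float:
        # Percentage of learning effort saved on relearning
        return 100 * (original_trials - relearning_trials) / original_trials

    # A list that took 20 trials to learn and 15 trials to relearn a day later
    print(savings(20, 15))  # -> 25.0 (percent saved)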

Interference

Interference is a major cause of forgetting. We speak of proactive interference when previously learned material disturbs later learning, and of retroactive interference when later learning disrupts memory for previously learned material. Research shows that the more the interfering material resembles the original material, the greater the degree of forgetting. A common method in research on forgetting is the paired-associate paradigm. This is a memory task in which participants see a list of word pairs on learning trials and see one word of each pair during the test. They must then produce the other word of the pair.

Decay and consolidation

If interference does not play a role, would forgetting still occur? Unfortunately, it is impossible to investigate the effects of decay over time while excluding all possible interference.

One approach states that memories decay unless they are consolidated (strengthened). Research shows, for example, that memory is better if learning is followed by a period of sleep than by a period of normal daily activity. This positive effect of sleep or inactivity is also called retrograde facilitation.

In neuroscientific research on sleep, the emphasis lies on long-term potentiation (LTP): the long-lasting improvement in signal transmission between two neurons resulting from their simultaneous stimulation. LTP is considered an important mechanism in learning and remembering. LTP cannot be generated during non-REM sleep; thus, during non-REM sleep, recent memories that are beginning to consolidate are protected against interference from new memories.

Research into retrograde amnesia and into the effects of alcohol and benzodiazepines also provides strong evidence for the idea that memories consolidate over time. The hippocampus plays an important role in this process. Alcohol and benzodiazepines have roughly the same effect on memory as sleep: if mental activity, and thus the formation of new memories, is reduced, previously formed memories are protected against the effects of retroactive interference. This is consistent with the idea that forgetting is a retroactive effect of the memory formation associated with normal mental activity.

Functional approaches to forgetting

Although forgetting is often seen as something negative, it would not be practical to remember everything you have ever learned. In dramatic cases, people may suffer from intrusive memories of traumatic events that they would rather not remember.

Retrieval-induced forgetting and directed forgetting

The retrieval-induced forgetting (RIF) paradigm refers to the impaired ability to retrieve items as a result of the earlier retrieval of related items. For example, if you repeatedly retrieve the pleasant memories of a holiday, the less pleasant memories become harder to retrieve. In the directed forgetting (DF) paradigm, memory impairment is brought about by the instruction to forget certain items. If people later have to retrieve these items, this is much more difficult.

The think / no think paradigm

This paradigm is a memory manipulation in which participants are instructed not to recall a memory, even when a strong retrieval cue is present. Research with this paradigm shows that people can consciously regulate the activation of the hippocampus that is involved in recalling memories.

Everyday / real world memory

An important problem with the memory studies discussed above is their ecological validity: the extent to which the results of laboratory experiments apply to everyday situations. Laboratory research on memory is not always representative of, or generalizable to, the 'real' world. Below, a number of findings are presented that are closer to everyday life.

Flashbulb memories

Flashbulb memories are vivid memories of a dramatic event and of the circumstances in which that event was experienced. Although it was first thought that these memories were extremely vivid and accurate, research shows that large inaccuracies in flashbulb memories are common. It is striking that people have great confidence in these memories, even though they often turn out to be wrong.

Eyewitness testimonies

A number of factors suggest that eyewitness testimony should be treated with caution. The stress and anxiety associated with the witnessed events often prove to reduce memory performance. The formulation of the questions and the gestures and body language of the interrogator also appear to have a great influence on the testimony; these influences are examples of retroactive interference.

Effective studying

Research among students shows that three learning styles can be distinguished. In superficial learning, students try to memorize texts without seeking to understand them. In deep learning, students make an effort to understand the material and make it meaningful to themselves. In strategic learning, students try to find out which questions will be asked in the exam and devise strategies to learn the minimum required. Deep learning appears to be the most effective method, especially when combined with strategic learning. Testing yourself on what you have learned also works well.

Which representations of knowledge are there? - Chapter 7

We use concepts (mental representations of categories of items) to represent all objects belonging to a category. Our long-term knowledge of the world is therefore based on concepts and on the relationships between concepts.

Theories of conceptual representation

There are different approaches to concepts, which are discussed below.

Definitional approach

Some concepts, such as 'bachelor', can easily be defined (an unmarried man). Most concepts, however, are harder to capture in a definition. A lot of research has therefore been done into alternative ways to represent and use poorly defined concepts.

Prototype approaches

Although people often think in categories, not all concepts can easily be sorted into a particular category. Typicality is the extent to which an object is representative of its category. People appear to be very good at making judgments based on typicality. Rosch and Mervis suggested that members of a category share a family resemblance and can be scored for the extent to which they resemble each other. Judgments about typicality are then made on the basis of how similar an item is to other category members. The item that has the greatest family resemblance, and therefore best represents the category, is called the prototype.

Categories and concepts usually form hierarchies, such as animal, dog, Pekingese. The middle level of the hierarchy is the most fundamental and is called the basic level of categorization. At this level, members of a category are very similar to each other, while the category concepts are clearly distinguishable from one another. For example, hammers and saws are very distinct from each other, while types of saw are very similar; 'hammer' and 'saw' are then basic-level categories, with 'tool' as the superordinate level.

Although the prototype approach has many advantages, it also has a number of limitations. Abstract and ad hoc concepts, for instance, are difficult to fit into the approach. It is also difficult to accommodate knowledge about the variability of members' characteristics, and the usefulness of those characteristics as cues, within the idea of prototypes.

Exemplar-based approaches

A popular alternative to the prototype approach is exemplar theory. According to this theory, categories are represented by stored exemplars, each of which is linked to the name of the category; there is thus no single prototype. An advantage of this theory is that it captures variability within a category. For example, if people have to say whether an item is a pizza or a ruler when they only know that the item is 30 cm long, many people choose the 'pizza' option. This is because they can think of many types of pizza that are about 30 cm across, whereas rulers come to mind in far less variable forms.
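
Exemplar models are often formalized by summing an item's similarity to each stored exemplar and assigning the item to the best-scoring category (in the spirit of Nosofsky's generalized context model). A minimal Python sketch; the exemplars, the one-dimensional 'length' values and the similarity parameter are invented for illustration:

    import math

    def similarity(x, y, c=0.5):
        # similarity decays exponentially with distance (a common assumption)
        return math.exp(-c * abs(x - y))

    # invented exemplars, reduced to a single dimension (length in cm)
    stored = {
        "pizza": [20, 26, 30, 34, 40],   # pizzas vary widely in size
        "ruler": [30, 30, 31, 30],       # rulers barely vary
    }

    item = 34                            # an item known only to be 34 cm long
    scores = {category: sum(similarity(item, e) for e in exemplars)
              for category, exemplars in stored.items()}
    print(max(scores, key=scores.get))   # the category whose stored
                                         # exemplars the item resembles most

Because the pizza exemplars spread out over many sizes, an atypical length still finds similar pizza exemplars, which mirrors the variability argument in the text.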

Theory / knowledge-based approaches

Not all categories are based on superficial similarities or shared characteristics; some, for example, are based on goals (things you would save from a burning house). Others are very diverse, such as 'drunken actions'. Such categorizations are driven by knowledge rather than similarity. The theory-based approach therefore assumes that concepts contain information about their relationships with other concepts and about the relationships between their characteristics.

Essentialism

Essentialism assumes that all members of a particular category share an essential quality. Barton and Komatsu (1989) state that there are three different types of concepts. Nominal concepts have clear definitions, such as a triangle. Natural kind concepts are seen as occurring naturally, such as cats, dogs and rainy days; the essential characteristic of this type is their underlying (for example genetic) structure rather than their mere appearance. Artifact concepts relate to human-designed objects that are defined in terms of their function, such as televisions and cars; the essential feature of this type is their function.

Grounded representations versus amodal representations

In many information-processing approaches, conceptual knowledge (such as motor characteristics: touchable, rough, etc.) is represented by amodal, abstract representations. Barsalou, however, states that representation is grounded: the brain represents an object (for example a chair) in terms of what it looks like, what it feels like to sit on it, and so on. Simulation, the re-enactment of a previous experience, plays an important role in this. Research provides considerable support for these grounded, modality-specific aspects of conceptual representation. Whether abstract concepts can also be explained through simulation, however, remains controversial.

Imagery and concepts

Imagery and visuo-spatial processing: overlaps?

Visual imagery is often studied in terms of its overlap with visuo-spatial processing: the mental manipulation of visual or spatial information. As visuo-spatial tasks and imagery tasks often interfere with each other, it is reasonable to assume that both processes draw on the same mental and neural resources.

Image scanning and comparing

We often scan and compare mental images for practical purposes: would that cabinet fit through the door? Is this cabinet larger than the one in the store? Research supports the idea that scanning, comparing and rotating mental images is equivalent to operating on 'pictures' in the head. However, Pylyshyn (1973, 1981, 2002) believes that imagery is only a by-product of underlying cognitive processes and therefore has no functional role of its own. He is thus convinced that amodal representations underlie the experience of imagery.

Ambiguity of images

The famous duck-rabbit figure and the Necker cube are good examples of ambiguous figures that generate alternative, alternating interpretations. Research into these types of figures shows that people tend to hold on to one fixed interpretation of a mental image, whereas a real picture can be reinterpreted. It is therefore plausible that mental images do not work exactly the way real pictures do.

Neuropsychology / neuroscience of imagery

If imagery is a form of internally generated perception, one would expect the same brain areas to be involved in both processes. Indeed, research shows that the occipital lobe and early visual cortex are activated in both. However, some people with brain damage have intact visual perception but impaired imagery, and vice versa. It seems that although the brain areas for perception and imagery overlap, they are not identical.

What is the motor system? - Chapter 8

The motor system includes the components of the central and peripheral nervous systems along with the muscles, joints and bones that enable movement.

Motor control

Woodworth (1899) was the first to propose distinct phases for planning and controlling movement. In the twentieth century, motor control was approached mainly from a physiological perspective. Bernstein (1967) was the first to introduce the degrees-of-freedom problem: because the muscles and joints can move in countless different ways, the question is how a particular movement is selected to achieve a particular goal. This resembles the inverse problem in the study of vision: there are likewise countless ways to interpret the 2-D image that falls on the retina as a 3-D scene. Different approaches offer an explanation for how movements are planned and executed.

Equilibrium point hypothesis

This theory emphasizes the special relationship between the brain and the muscles. It reflects the important intuition that our muscles, like springs, exert different forces depending on how much they are stretched. Every movement is thus a transition from one stable posture to another. However, this theory does not apply well to more complex movements.

Dynamical systems theory

This theory describes motor control as a process of self-organization between an animal and its environment. Special mathematical techniques are then used to describe how the behavior of a system (in this case the human body) changes over time.

Optimal control theory

This theory regards motor control as the evolutionary or developmental outcome of a nervous system that optimizes its organizational principles. It is in effect an advanced form of simple feedback mechanisms: movements are optimized on the basis of feedback (whether a certain goal is achieved). To overcome delays in feedback, a forward model is used: it generates predictions about the relationships between actions and their sensory consequences.
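
The role of a forward model can be illustrated with a toy simulation; the dynamics, gain and target below are invented for illustration, not taken from the book. Instead of waiting for delayed sensory feedback, the controller predicts the consequence of each of its own commands and corrects immediately:

    # Toy reaching task: move a state toward a target with proportional
    # corrections. A forward model predicts the outcome of each command,
    # so the controller need not wait for delayed feedback.
    def forward_model(state, command):
        return state + command                # assumed (toy) body dynamics

    target, state = 10.0, 0.0
    estimate = state                          # the controller's own estimate
    for step in range(6):
        command = 0.5 * (target - estimate)   # correct toward the target
        state = state + command               # actual consequence
        estimate = forward_model(estimate, command)   # predicted consequence
        print(step, round(state, 2))
    # because prediction stands in for delayed feedback, the movement
    # converges smoothly on the target instead of oscillating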

The three theories described above all contribute significantly to explaining how motor control works. The equilibrium point hypothesis shows that the complexity of a motor plan can be simplified on the basis of muscle characteristics. Dynamical systems theory shows that transitions between different action states can be explained by the development of a system over time. Optimal control theory shows how optimal organizational principles can be integrated into the planning, production and observation of our actions. However, each theory captures only part of motor behavior.

Production of complex actions

Explaining how we achieve goals through a sequence of movements requires more interaction with other cognitive processes. The following theories focus on explaining more complex actions.

Action sequences

The associative chain theory states that the end of a particular action is associated with triggering the start of the next action in the sequence. Lashley (1951) examined this idea in language production: words in a sentence would, for example, be linked to each other by associative links. The slip-of-the-tongue phenomenon offers evidence for such links: you accidentally say a word that is associated with the word you actually wanted to say. However, the theory does not explain which mechanisms and overarching constraints guide the associative process.

Hierarchical models of action production

Since the different mechanisms in Lashley's account work simultaneously to create sequences, it was important to find out how these mechanisms are organized. Miller et al. (1960) and Estes (1972), among others, proposed a hierarchical arrangement of schemas. The temporal aspect of an action sequence (e.g. making coffee), that is, the order in which the individual steps are triggered, can be explained by recurrent networks: artificial neural networks with connections between the units that create a loop of activation. Patterns of activation and inhibition within the hierarchy operate through interactive activation, which means that the activation of a particular unit inhibits the competing units.
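
A toy Python sketch of interactive activation among competing units; the action steps and all numeric parameters are invented for illustration. Each unit excites itself while inhibiting its competitors, so the most active step temporarily suppresses the others:

    # three competing action units in a coffee-making schema
    acts = {"grind beans": 0.6, "boil water": 0.5, "pour water": 0.3}
    for _ in range(10):
        new = {}
        for unit, a in acts.items():
            inhibition = sum(b for other, b in acts.items() if other != unit)
            # self-excitation minus inhibition from the other units
            new[unit] = max(0.0, a + 0.2 * a - 0.15 * inhibition)
        acts = new
    print({u: round(a, 2) for u, a in acts.items()})
    # the initially strongest unit ends up dominating; once it has been
    # executed and its activation decays, the next step can win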

Brain damage and action production

Damage to the frontal cortex is often spread over different areas and can lead to various syndromes in which the patient makes mistakes in producing action sequences. The action disorganization syndrome is an example. This syndrome belongs to a broader family of movement disorders called apraxia, in which the patient loses the ability to perform certain motor actions while the sensory and motor systems themselves are still intact.

Action representation and perception

Theories of action representation

The idea of the cognitive sandwich is that cognition is sandwiched between perception and action. The theories above fit within this idea. Below, however, theories are discussed in which cognitive representations of action merge with representations of both perception and action.

Ideomotor theory

The ideomotor theory is a long-standing theory that sees action and perception as closely connected: a certain action is, for example, associated with the sensory outcomes of that action. In the 1990s this idea was elaborated in the common coding framework, a theory which states that production and perception share certain representations of actions. Instead of a translation from sensory codes to motor codes and vice versa (as in the cognitive sandwich), this theory proposes a layer of representation in which event codes and action codes overlap. Research supports this approach by showing that interference occurs when the observation and the production of an action are demanded at the same time.

Mirror mechanisms and action observation

Mirror neurons represent both the sensory aspects of observing an action and the motor aspects of producing that action. Neurons that are normally involved in performing an action are therefore also sensitive to observing that action. Research suggests that this is a general information-processing strategy rather than a limited, special-purpose mechanism. There is nevertheless much controversy about the role of mirror neurons: they may be a way to discover the goal of an observed action, or a way to learn through imitation.

Embodied cognition

The embodied approach to cognition states that perception and action are very closely related. The idea that perceptual representations of the world are connected to representations of actions is illustrated by common coding and mirror neurons. Embodied cognition emphasizes the importance of our body, as well as of the environment, in cognition. Metaphorical gestures are an example: we often use gestures to express or clarify abstract as well as concrete concepts. This possibly provides evidence for the view that our ideas are embodied in physical actions.

In what ways are problems solved? - Chapter 9

Problems can be solved through restructuring (changing the way the problem is seen) or through creative solutions.

Problems and problem types

A problem can be defined as a situation in which you have a goal but do not know how to achieve it. A problem can be described as either ill-defined or well-defined, depending on the amount of information you have about the initial situation, the possible actions and the goal. Knowledge-rich problems require specialist knowledge, while knowledge-lean problems do not. Adversary problems are those in which a thinking opponent tries to defeat you (e.g. chess); in non-adversary problems (e.g. a puzzle) there is no such opponent.

History and background

The Gestalt approach

The Gestalt approach sees problem solving as perceiving new patterns. The key process is restructuring, in which insight and understanding play a major role; a restructuring that leads to a sudden solution is called an insight. There are two main barriers to insight: set, the tendency to persist in a particular approach to a problem, and functional fixedness, the difficulty of coming up with a new function for a familiar object. A set can be caused by intensive experience or training with certain problems, while functional fixedness occurs more often in adults, or when an object is presented in such a way that its usual function is easily brought to mind.

The information processing approach

Within the information-processing approach, human problem solving is compared with the strategies computer programs use. A number of concepts have emerged from this. The problem space is an abstract representation of the possible states of a problem. It has two subtypes: the state-action space, a representation of how a problem can be transformed from the starting state through intermediate states to the goal state, and the goal-subgoal space, a representation of how an overall problem goal can be broken down into subgoals and sub-subgoals.

State-action spaces

With a larger problem, it is more difficult to find the target state. There are three main search strategies. In depth-first search, only one successor state is generated from each intermediate state (for example, always choosing the right-hand branch in a decision tree); this strategy guarantees neither that the goal is found nor that the best solution is reached. In breadth-first search, every possible move is considered at each level; this is a very demanding strategy, but it guarantees that the goal is found. In progressive deepening, depth-first search is carried out to a limited depth; when the depth limit is reached, the search starts again from the beginning with a greater depth limit. The goal is guaranteed to be found, and it may be found faster than with a complete depth-first search if a solution happens to lie at a shallow depth. A minimal sketch of these strategies follows below.
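
The sketch below illustrates depth-limited search and progressive deepening in Python on a toy state space; the two-move successor function and the goal state are invented for illustration:

    def successors(state):
        # toy state space: every state has two moves, "L" and "R"
        return [state + "L", state + "R"]

    def depth_limited(state, goal, limit):
        # depth-first search that gives up below a fixed depth limit
        if state == goal:
            return state
        if limit == 0:
            return None
        for nxt in successors(state):
            found = depth_limited(nxt, goal, limit - 1)
            if found is not None:
                return found
        return None

    def progressive_deepening(start, goal, max_depth=10):
        # repeat depth-first search with a growing depth limit;
        # breadth-first search would instead expand every state level
        # by level, which requires far more memory
        for limit in range(max_depth + 1):
            found = depth_limited(start, goal, limit)
            if found is not None:
                return found, limit
        return None, None

    print(progressive_deepening("", "RL"))   # ('RL', 2): found at depth 2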

Goal-subgoal spaces

In this type of problem solving, the problem is subdivided into subgoals and sub-subgoals. These strategies are useful when there is a large number of possible alternative actions, because they reduce the size of the problem.

Insight

The above strategies are applicable to problems that can be solved by searching within a given representation. For problems that require a change of representation, however, these approaches are less applicable. Although the Gestalt approach states that solving insight problems requires the special process of restructuring, some scientists believe that it requires only normal search and problem-analysis processes. Neurological research shows that different neural processes are active when solving insight and non-insight problems; it therefore seems that there is indeed a fundamental distinction between these two types of problems.

Recent theories of insight

There are two main approaches to problem-solving through insight.

The representational change theory distinguishes several phases: problem perception, problem solving (heuristic search processes), impasse (the initial representation leads to a dead end), restructuring (re-encoding of the representation), partial insight and full insight. During restructuring, constraint relaxation is required: loosening the restrictions on what should or should not be done to achieve the goal. This theory appears to be correct for certain algebra problems, but whether it also applies to other problem domains is questionable.

The progress monitoring theory states that the major source of difficulty in insight tasks is the use of inappropriate heuristics. It is assumed that people use both a maximization heuristic, which aims to approach the goal as quickly as possible, and progress monitoring, which keeps track of whether progress is fast and efficient enough. If it is not, criterion failure occurs. According to this theory, insight is most often achieved when criterion failure is followed by constraint relaxation. Although there is a lot of evidence for the theory, it does not clearly explain how new strategies are actually found.

Knowledge-rich (or expert) problem solving

Achieving expertise seems to require approximately ten years of intensive training, including deliberate practice, focused training and coaching.

Research shows that experts often have an extensive memory for familiar patterns that trigger the right actions. However, this benefit is specific to the domain of expertise: chess experts, for example, have no advantage over laymen in non-chess-related memory tasks. It also appears that experts represent or 'see' problems differently from laymen, because they can draw on a more extensive set of schemas.

Creative problem solving

A creative solution is one that is novel and valued or useful in some way. Approaches to research into creative thinking and problem solving fall into personal accounts on the one hand and theories or laboratory tests on the other. Personal accounts were mainly used as a basis for models of creative problem solving.

Wallas's four-stage analysis

This analysis consists of four phases: preparation (becoming familiar with the problem; does not yet lead to a solution), incubation (the problem is set aside for a while), illumination (inspiration or insight; does not always lead to a solution) and verification (the solution is reached by consciously testing the ideas from the illumination). According to Wallas, the incubation phase is crucial for solving the problem, which is supported by research. There are several possible explanations for this effect. One might think that conscious work on the problem is done during incubation, but research shows that this is not the case; the results indicate instead that unconscious work plays an important role during incubation. Another explanation is that the break is simply an opportunity to rest and to return to the problem with more energy. A final possibility is that misleading strategies, wrong assumptions and the associated mental sets are forgotten during the incubation phase.

Information processing theory of creative processes

According to the geneplore model, creative work involves two important phases: in the generation phase, pre-inventive structures are produced, and in the exploration phase these structures are interpreted.

Increasing idea production

Is it possible to take conscious steps to increase the flow of creative ideas? Research shows that small cues can have large unconscious effects on our thinking. People appear to become more creative when they first spend a few minutes thinking about creative subjects than when they think about non-creative subjects. A creative environment can also unconsciously generate more creativity. The brainstorming method, which encourages the production of as many unusual ideas as possible, also has a positive influence on creative thinking.

How does one make decisions? - Chapter 10

Making a decision is a cognitive process in which a choice is made between alternative possible actions. Decisions can be risky, when there is a chance that one of the options leads to negative consequences, or risk-free, when the outcomes of the options are certain. Decision problems with one attribute have alternatives that differ on a single dimension. More often, however, decision problems have multiple attributes, with alternatives that differ on several dimensions.

Expected value theory

A number of seventeenth-century mathematicians argued that the expected value should be maximized when making choices. The expected value is the average value in the long run, determined by the probability and the size of each outcome. In reality, however, people are usually not guided by maximizing expected value: they gamble, buy lottery tickets, take out insurance and make other unprofitable choices. A possible explanation is risk aversion, the tendency to avoid risky choices even if they offer a higher expected value; another is risk seeking, the tendency to prefer risky choices even if the risk-free alternatives offer a higher expected value. It is plausible that people do not base their choices on objective monetary values or probabilities, but rather on subjective probabilities: how likely someone thinks a particular outcome is, independent of its objective probability.

Utility and prospect theory

The idea of utility, the subjective value of a choice, is emphasized in utility theory. In the case of money, this theory states, for example, that the utility of an additional amount decreases as you have more money. Prospect theory explains decisions in terms of relative gains and losses. Loss aversion plays an important role in this: the loss of 10 euros, for example, has a more negative utility than the gain of 10 euros has a positive one. Related to this is the endowment effect, the tendency to overvalue an item you own and to require more money to sell it than you would pay to buy it. The status quo bias is the strong preference for maintaining the current state of affairs and avoiding change.

Subjective probability and prospect theory

Prospect theory thus states that perceived probabilities and values differ systematically from objective probabilities and values. Because loss aversion plays a major role, the way the alternatives of a choice problem are presented has a big influence (framing). If people are not affected by framing, they show invariance.

Making probability assessments

Tversky and Kahneman (1974) state that heuristics, such as the availability heuristic and the representativeness heuristic, play a major role in making probability judgments.

Availability

With the availability heuristic, the probability or frequency of an event is estimated by how easily examples of that event come to mind. Because availability depends not only on frequency but also on how recently an event occurred and on its emotional impact, this heuristic can lead to false probability judgments.

Representativeness

With the representativeness heuristic, the frequency or probability of an event or object is estimated on the basis of how representative or typical it is of its category. The conjunction fallacy plays a role here: the erroneous judgment that the conjunction of two events (A and B) is more likely than A or B alone.

The base rate of an event is the overall probability of the event in a population. For example, the base rate of engineers in the Netherlands is the probability that a randomly selected person in the Netherlands is an engineer. Research shows that people often ignore the base rate. This is especially the case when information is presented in terms of percentages; when the same information is formulated in terms of frequencies, the base rate fallacy is often reduced or even eliminated.
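
A worked example (all numbers invented) of why frequency formats help:

    # Out of 1,000 people, 10 have a condition (base rate 1%).
    # A test detects 9 of the 10 cases, but also gives a false alarm
    # for 99 of the 990 healthy people (a 10% false-alarm rate).
    hits, false_alarms = 9, 99
    p_condition_given_positive = hits / (hits + false_alarms)
    print(round(p_condition_given_positive, 2))
    # about 0.08: despite the 90% hit rate, most positive results come
    # from the much larger healthy group; ignoring the base rate is
    # what makes the intuitive answer far too high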

The affect heuristic

With the affect heuristic, target attributes are replaced by readily available feelings or affective judgments. For example, if people hear about the risks of nuclear energy, their assessment of its potential benefits drops sharply. Risks and benefits are thus not assessed independently of each other; they influence each other strongly.

Decision processes for alternatives with multiple attributes

Multi-attribute utility theory

Even when there are no risks, it is demanding to make a decision between options that differ on many attributes. Multi-attribute utility theory states that the decision maker must (1) identify the relevant dimensions, (2) assign relative weights to the attributes, (3) calculate a total utility for each option by adding the weighted attribute values, and (4) choose the option with the highest weighted total. Difficulties with this approach include that the relevant dimensions are not always known and that it is usually impossible to assign an objective value to an attribute.

Elimination by aspects

A somewhat less demanding strategy is elimination by aspects: you select an attribute and eliminate all options that do not meet the criterion level for that attribute, and you repeat this with further attributes until one option is left. However, the order in which the attributes are considered can affect which option survives.

Satisficing

The fundamental idea of satisficing is that people often do not choose to spend time and effort on maximizing utility, but are satisfied with a choice that meets a minimum acceptable level.

Testing multiple-attribute choice models

Research shows that when choosing between alternatives with multiple attributes, people often do not use just one decision strategy. Instead, they use strategies that compromise between minimizing cognitive workload and maximizing the utility of the outcome.

Two-system approaches to decision making

Two-system approaches to decision making state that there are two distinct cognitive systems: System 1 supports fast, intuitive thinking and System 2 supports slow, deliberate thinking. Which system is used for a decision depends, among other things, on the importance of the decision.

Fast and frugal heuristics: the adaptive toolbox

According to Gigerenzer et al. (1999), many simple heuristics have considerable validity in daily life and are sometimes just as effective as, or more effective than, complex methods. Together these heuristics form an 'adaptive toolbox', since they are generally valid for the real-life situations in which they were developed. Even in important situations, for example when a doctor has to make a diagnosis, heuristics such as a fast decision tree appear to be used.

Heuristics and consequentialism

The approaches above are based on consequentialism: the view that decisions are made on the basis of the consequences expected to follow from the different choices. However, people often turn out to make non-consequentialist decisions. This is often the result of simple heuristics that usually work well but sometimes fail.

The omission bias is the tendency to downplay the negative consequences of omissions compared with commissions that have the same effects (for example, not vaccinating versus vaccinating your child).

According to the consequentialist approach, punishment is only valuable if it has a deterrent effect and changes behavior. Nevertheless, in practice people appear to see retribution as the main function of punishment; they thus give little weight to the consequences of punishment.

Finally, when judging new laws (e.g. higher taxes on CO2 emissions), people often indicate that they agree with the consequences of the law (better for the environment), but would still not vote for it.

Naturalistic decision making

Naturalistic decision making means making real-life decisions in the field. In the critical incident analysis method, people are asked to describe a recent situation in which they had to make an important decision. A common strategy among professionals turned out to be recognition-primed decision making, in which decisions are based on expertise and the recognition of cues in the environment.

The question is whether theories such as multi-attribute utility theory can be applied to naturalistic decisions. Research shows that under time pressure, people do not decide by deliberately weighing all options, but often choose the first option that comes to mind. For important decisions without time pressure, the decision process comes closer to the theory.

Neuroeconomics: neuroscience approaches to decision making

Neuroeconomics is the study of the neural processes underlying economic decisions. This research shows that the utility or pleasure of a range of options is represented in the brain's reward systems: an option with higher utility stimulates the reward systems more than an option with lower utility. Research also shows that System 1 activity, as discussed earlier, is driven by the limbic system, whereas System 2 activity is reflected in the lateral prefrontal cortex.

The ageing brain and financial decision making

Research shows that older people make mistakes in financial decisions more often, because they put too much emphasis on potential benefits and give too little weight to disadvantages.

The psychology of financial decision making and economic crises

Taking risks is an important part of financial decision making. Research shows that in uncertain situations (such as a financial crisis) people base their decisions on perceived risk rather than on objective risk. It also appears that in other financial decisions, such as buying or selling shares and buying on credit, people are often prone to cognitive biases such as overconfidence.

What is inductive and deductive reasoning? - Chapter 11

Reasoning refers to the cognitive process of deriving new information from old information. Inductive and deductive reasoning do this in different ways.

Deductive reasoning

Deductive reasoning is drawing logically necessary conclusions from given information. The premises are the statements that are assumed to be true and from which the conclusion is drawn. Valid arguments are arguments in which the conclusion must be true if the premises are true. There are two types of deductive reasoning. In propositional reasoning, statements are linked by logical relations such as 'and', 'or', 'not' and 'if' (for example: if it is Tuesday, we have a statistics exam; we do not have a statistics exam, so it is not Tuesday). In syllogistic reasoning, statements are linked by quantifiers such as 'some', 'none' and 'all' (for example: all apples are red; some apples are sweet; therefore some red things are sweet).

Propositional reasoning

Logicians have developed a number of inference rules that can be used to draw correct conclusions from patterns of propositions. Some examples are:

• Modus ponens: from 'if P then Q' and 'P', conclude 'Q'. For example: if it is Saturday, I go to the cinema; it is Saturday, so I am going to the cinema.
• Modus tollens: from 'if P then Q' and 'not Q', conclude 'not P'. For example: if it is Saturday, I go to the cinema; I am not going to the cinema, so it is not Saturday.
• Double negation: from 'not not-P', conclude 'P'. For example: it is not not-Saturday, so it is Saturday.

Two common fallacies, checked in the small sketch after this list, are the following:

• Affirming the consequent: from 'if P then Q' and 'Q', erroneously concluding 'P'. For example: if it is Saturday, Tom goes to the cinema; Tom goes to the cinema, so it is Saturday.
• Denying the antecedent: from 'if P then Q' and 'not P', erroneously concluding 'not Q'. For example: if it is Saturday, Tom goes to the cinema; it is not Saturday, so Tom is not going to the cinema today.

Research shows that people are better at recognizing modus ponens inferences as correct than modus tollens inferences. It also appears that incorrect reasoning may result from misinterpretation of the premises. Take the premise 'If there is a dog in the box, then there is an orange in the box': you might think this means 'If there is no dog in the box, then there is no orange in the box'. Whether that follows depends on whether the 'if ... then' relationship is read as an equivalence ('if and only if'), in which case the inference is correct, or as a material implication (plain 'if ... then'), in which case it is not.

Mental logic approaches state that people possess a limited number of mental inference rules (schemas) that license direct inferences when the conditions of the schema are met. According to this model, there are 16 basic schemas with which people make few mistakes. Another approach is the mental models approach, which assumes that people solve logical reasoning problems by forming mental representations of possible states of the world and drawing inferences from those representations. By explicitly representing only what is true in these models, the burden on working memory is minimized. This latter approach also applies to syllogistic reasoning.

Syllogistic reasoning

As explained earlier, the task in a syllogism is to see which conclusion follows from a number of premises about categories of things. If the conclusion can be false while the premises are true, the argument is invalid.

Research shows that people have far more difficulty with syllogisms if the terms in them are abstract rather than concrete. Another source of trouble is the atmosphere effect: the tendency to draw conclusions that are influenced more by the form of the premises than by the logic of the argument. If, for example, both premises contain 'all', people are inclined to accept a conclusion containing 'all'. Another explanation for false conclusions in syllogisms is conversion errors: for example, assuming from 'all X are Y' that 'all Y are X'. Yet another explanation is probabilistic inference: for example, reasoning that 'some cloudy days are wet' and 'some wet days are uncomfortable', so 'some cloudy days are uncomfortable'.

Henle (1962) argued, against accounts such as the atmosphere hypothesis and conversion errors, that people do in fact reason rationally. She stated that when people reach invalid conclusions, this often happens because they interpret the material differently than intended or perform a different task than the one that was asked of them.

Research shows that people from collectivist cultures, in which practical and contextual knowledge is more important than formal and abstract knowledge (as in individualistic cultures), often interpret logical questions as genuine requests for information about the real world. People from individualistic cultures, on the other hand, tend to see such questions as decontextualized logical puzzles.

The figural bias refers to the effect of the layout of a syllogism (e.g. A-B, B-C versus B-A, C-B) on the preferred conclusion. For example, given 'Some parents are scientists; some scientists are drivers; so ...?', many people conclude 'some parents are drivers' rather than the equally plausible conclusion 'some drivers are parents'. Although this effect is not predicted by the atmosphere hypothesis, conversion errors or probabilistic inference, it is explained by the previously discussed mental models approach.

The belief bias is the tendency to accept invalid but believable conclusions and to reject valid but unbelievable conclusions.

Inductive reasoning: testing and generating hypotheses

Inductive reasoning is the process of deriving probable conclusions from given information. There are two types of inductive tasks. In hypothesis testing, hypotheses are assessed for truth in the light of the available data. In hypothesis generation, possible hypotheses are derived from data for later testing. In both cases a hypothesis can never be definitively proven, only refuted.

In hypothetico-deductive reasoning, a hypothesis is tested by deducing necessary consequences of the hypothesis and determining whether these consequences are true or false.

Hypothesis testing

A well-known task for examining hypothetico-deductive reasoning is the four-card selection task. People are asked to test a rule (for example 'if P then Q') using four cards showing P, Q, not-P and not-Q; each card has the other attribute (P or not-P, Q or not-Q) on its reverse side. People have to indicate which card or cards must be turned over to test the rule.

If a rule of the form 'if P then Q' is tested, there are four possible combinations: P and Q, P and not-Q, not-P and Q, and not-P and not-Q. Only the second combination is inconsistent with the rule. Research shows that people tend towards verification and confirmation when testing such a hypothesis: they turn over the cards showing P or Q, and not the card that could falsify the hypothesis (not-Q). However, when the information on the cards is concrete rather than abstract, people perform better.
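
The logic of card selection can be made explicit with a small enumeration (Python; the encoding of the cards is an illustrative assumption): only cards whose hidden side could complete a P-and-not-Q combination are informative.

    def can_falsify(visible):
        # a card showing P or not-P hides a Q-value, and vice versa
        if visible in ("P", "not-P"):
            p = visible == "P"
            return any(p and not q for q in (True, False))
        q = visible == "Q"
        return any(p and not q for p in (True, False))

    for card in ("P", "not-P", "Q", "not-Q"):
        print(card, can_falsify(card))
    # only the P card and the not-Q card can reveal a violation,
    # yet people typically choose P and Q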

A possible explanation for the poor performance on the four-card selection task is that people misinterpret the task but then reason correctly with their incorrect interpretation. For example, there may be ambiguity about whether 'if P then Q' also means 'if Q then P'. Another explanation is the matching bias: the tendency simply to select the cards that show the symbols mentioned in the rule. It is also possible that people do better at the task if the situation reflects real rules that they know and for which they can easily think of examples.

Social contract theory suggests that rules expressing the payment of costs for privileges are easily solved in four-card tasks, because the correct choices would uncover cheating. This would stem from an evolved mechanism for detecting cheaters (for example, someone who shared in the spoils of the hunt but did not hunt). Research on this theory shows that deontic rules (rules that concern duties, using terms such as 'must' and 'should') facilitate performance in the four-card selection task.

Generating and testing hypotheses

Compared to the four-card selection task, in daily life you more often have to generate and test your own hypothesis rather than test a given rule. This is investigated, for example, with Wason's 'reversed 20 questions' task, in which people are given three numbers and are asked to discover the rule by which the numbers were generated. They then have to produce further number series, and the experimenter indicates for each series whether or not it fits the rule. The results show that people often generate far too restrictive hypotheses instead of the simple rule that actually holds. They also make very little use of a falsification strategy: participants are mainly busy generating series of numbers that fit the hypothesis they have come up with. Other types of tasks also show evidence for this confirmation bias.

What is language production? - Chapter 12

Language production refers to the set of processes by which we convert thoughts into language output in the form of speech, gesture or writing. Language production is important for many skills, such as social cognition (the ways in which people make sense of themselves and others in order to function effectively in a social world), mental representation and thinking. Language production is conceptually driven: it is a top-down process influenced by cognitive processes such as thoughts, beliefs and expectations.

Language and communication

Language is our primary means of communication and forms the basis of the majority of social interactions.

Language universals

There are about 6,000 spoken languages in the world, varying in aspects such as the number and type of sounds, word order and vocabulary size. According to Aitchison (1996), there are a number of absolute universals that apply to all languages: for example, all languages have vowels and consonants, can express nouns, verbs, negatives and questions, and are structure-dependent. However, such universals quickly become problematic: sign languages, for example, do not use vowels and consonants, and tonal languages use changes in tone to alter the meaning of a word.

Hockett (1960) proposed 16 characteristics of human language that distinguish it from animal communication systems:

1. Vocal-auditory communication channel: there is a speaker and a listener.
2. Broadcast transmission and directional reception: speech is emitted from the source (the mouth of the speaker) and localized by the listener.
3. Rapid fading: the spoken message expires after production.
4. Interchangeability: the speaker can also be a listener and vice versa.
5. Feedback: the speaker has access to the message and can check its contents.
6. Specialization: whether we whisper or shout, the message remains the same.
7. Semanticity: sounds within speech refer to objects and entities in the world; they have meaning.
8. Arbitrariness: the relationship between the spoken word and what it refers to is arbitrary.
9. Discreteness: the speech signal is composed of discrete units.
10. Displacement: we can use language to refer to things that are not in the current time or location.
11. Productivity: language allows us to create new expressions.
12. Cultural transmission: language is learned through interaction with more experienced language users within a community.
13. Duality (of patterning): meaningful elements are created by combining a small set of meaningless units.
14. Prevarication: language can be used to lie or deceive.
15. Reflexiveness: we can use language to communicate about language.
16. Learnability: a language can be learned by a speaker of another language.

Components of language

A phoneme is the smallest sound unit that can distinguish meaning within a language. Phonetics is the study of the raw sounds that can be used to make words (phones). There are about 100 phones, but no language uses all of them. Allophones are different phones (such as the 't' in 'trumpet' and in 'tender') that are perceived as the same phoneme; a phoneme is thus a relatively subjective category. The tendency to perceive differences between allophones decreases with age. Phonotactic rules describe which combinations of sounds are allowed in a language.

Morphemes are the meaning units of a language. They are the building blocks of words, and a single word can contain several morphemes. For example, the word 'fathers' has two morphemes: the free morpheme 'father' (which can occur independently) and the bound morpheme 's' (which has no meaning unless attached to a free morpheme). Function words, such as prepositions, provide the grammatical structure that indicates how content words relate to each other within a sentence.

A word is the smallest unit of grammar that can be produced meaningfully on its own; it consists of one or more morphemes. Semantics is the study of meaning.

The productivity of language refers to the possibility of generating new utterances. Two aspects of the language system enable us to use language productively: syntax and recursion. Syntax comprises the rules that determine the construction of phrases and sentences, including word order and the way phrases are embedded in sentences. Recursion refers to the possibility of extending sentences indefinitely by embedding phrases within phrases.

Discourse refers to multi-sentence speech and includes dialogue, conversation and narrative. Pragmatics refers to the understanding of the communicative functions of language and the conventions that govern language use. Effective discourse is based on shared understanding between conversation partners, such as knowing the rules of turn-taking and cooperation. Grice (1957, 1975) identified four conversational rules or maxims of effective conversation: the maxim of quantity (the speaker must provide enough information to be understood, but not too much), the maxim of quality (the speaker must provide accurate information), the maxim of relevance (the speaker must provide relevant information) and the maxim of manner (ambiguity and vagueness must be avoided). If one of these maxims is broken, more cognitive processing is required to understand the conversation or to respond.

Speech errors

Many theories of speech production arise from the analysis of speech errors, for example errors generated in the laboratory or arising from brain damage.

Hesitations and pauses

Disfluencies are hesitations or interruptions of normal fluent speech, such as silences or saying 'uhm'. Occasionally a disfluency facilitates understanding: saying 'uhm' seems to increase the listener's attention to the following words.

Slips of the tongue

Fromkin (1971) was the first to make a systematic description of error types. She discovered that errors are not arbitrary but systematic, and therefore informative about the nature of the underlying processing. The majority of speech errors are sound-based, and errors usually occur at a single linguistic level (for example phonemes or morphemes).

The lexical bias refers to the tendency of phonological speech errors to result in real words. This may be because non-words are detected and repaired earlier, while errors that form real words tend to slip past this 'control'. In addition, content words are exchanged with other content words, while function words are exchanged with other function words. Furthermore, errors are consistent with the stress pattern of the utterance.

Tip-of-the-tongue state (TOT)

If something is 'on the tip of your tongue', there is a temporary inability to gain access to a known word. Research shows that this state is universal, occurs about once a week, occurs more often in old age, and often involves proper names. We may still have access to some information about the word, such as its first letter.

Theories of speech production

It is generally agreed that speech production has a number of stages. First comes conceptualization (a thought is formed and prepared to be conveyed through language), then the formulation of a linguistic plan, then the articulation of that plan, and finally the monitoring of the output.

Modular theories of speech production

1. Garrett's model

Modular theories state that speech production progresses through a series of phases or levels, each with a different type of processing. According to Garrett's model, speech is produced in a top-down manner through a number of phases: the conceptual level (a meaning is selected), the functional level (content words are selected), the positional level (content words are placed in order and function words are selected), the phonological level (speech sounds are selected) and the articulation level (sounds are prepared for speech). The idea that content and function words are treated differently is supported by research. Nevertheless, the model does not explain non-plan-internal errors, which occur when the intrusion comes from outside the planned content of an utterance. For example: you stand in front of the library and want to say 'let's get coffee', but instead you say 'let's get a book'.

2. Levelt's model

Levelt et al. (1999) developed a sequential model called WEAVER++, which focuses on the production of single words. The first two phases of the model concern lexical selection, followed by three phases of form encoding, ending in articulation. The model gives an important role to self-monitoring at various levels throughout processing, which allows errors to be detected and repaired; this process is partly driven by speech comprehension. However, the model does not explain errors that result from interference from lower to higher levels.

Interactive theories of speech production

The Dell model

Dell's spreading activation approach is based on connectionist principles (see Chapter 1) and uses the concept of spreading activation in a lexical network. Processing is interactive: activation at one level can affect processing at other levels. The model has four levels: the semantic, the syntactic, the morphological and the phonological level. A word unit can influence phonological units (top-down spread), but also semantic units (bottom-up spread). The model explains many patterns in speech errors, as well as some errors made by people with aphasia. However, it pays little attention to the semantic level. An optimal model of speech production may combine modular and interactive approaches.

Neuroscience of language production

Neurolinguistics is the study of the relationship between brain areas and language processing.

Lateralization of function

Sensory information arriving at one side of the body is processed by the contralateral (opposite) side of the brain. Various functions are also associated predominantly with the left or the right cortical hemisphere. If a cognitive function is lateralized, one cortical hemisphere is dominant for that function.

The left hemisphere

In most people, speech is lateralized in the left hemisphere, which is dominant for most language functions. The degree of lateralization, however, differs across the population.

Evidence from the typical population

In the dichotic listening task, different stimuli are presented simultaneously to each ear. The results show an advantage for verbal stimuli presented to the right ear. Research with event-related potentials shows that different areas within the left hemisphere process information about meaning and syntax. Research with transcranial stimulation (a non-invasive method in which cortical areas are temporarily activated or inhibited) shows that Broca's area plays a crucial role in the processing of grammar. The right hemisphere plays a role in the emotional aspects of speech and in aspects of non-literal language.

Evidence from aphasia

The Wernicke-Geschwind model is a simplified model of language function that is used as a basis for classifying aphasic disorders (for a schematic overview, see p. 390 of the book). Aphasia is a language deficit caused by brain damage. In crossed aphasia, language dysfunction results from damage to the right hemisphere in a right-handed individual. Broca's aphasia involves non-fluent, effortful speech and problems with grammatical processing. Global aphasia involves an extreme restriction of language function. Wernicke's aphasia is a fluent aphasia, characterized by fluent but meaningless output and repetition errors.

Writing

The Hayes and Flower model of writing proposes a cognitive model focused on three domains: the task environment (the subject of the writing, the intended audience, etc.), long-term memory (the availability and accessibility of knowledge) and the immediate cognitive demands of the writing process itself. The model also distinguishes three general stages of writing: planning, translating and reviewing.

Which processes of language comprehension are there? - Chapter 13

Speech perception refers to the process of converting a stream of speech into individual words and sentences.

Understanding speech

Prosody refers to all aspects of an utterance that are not specific to the words themselves, such as rhythm, intonation and stress patterns. The speech signal is not produced as discrete units: there are few clear boundaries between words, and successive sounds blend into each other. In addition, factors such as age, gender and speaking rate influence the sounds a speaker produces. The continuous, smooth nature of the speech signal thus makes speech perception a complex process.

The invariance problem

The invariance problem reflects the variation in the production of speech sounds across speech contexts: phonemes are pronounced differently in different situations. Co-articulation, the fact that a speech sound is influenced by the sounds before and after it, contributes to this problem.

The segmentation problem

The segmentation problem refers to the detection of distinct words in a continuous stream of speech sounds. An important source of information for segmenting the speech signal is the sound pattern of a language, such as stress and prosody. In English, for example, people appear to use a stress-based strategy to distinguish words.

Cues to word boundaries

The stress-based strategy is already present at a very young age (7.5 months): children of this age can already distinguish words that follow the dominant stress patterns of English. Phonotactic constraints describe the language-specific sound groupings that occur in a language; these provide cues to word boundaries.

Slips of the ear

A slip of the ear occurs when we misperceive a word or phrase. Such errors are almost always caused by mistakes in recognizing word boundaries. They are more common, for example, when listening to song lyrics, because the prosodic information that guides segmentation is reduced and the context sometimes gives fewer clues for word selection. Moreover, people tend to segment speech based on cues from their mother tongue; errors in recognizing word boundaries are therefore more common when listening to speech in another language.

Categorical perception

Categorical perception refers to the perception of stimuli on a sensory continuum as falling into distinct categories. As a result, we are often unaware of the variation in how sounds are pronounced and can still perceive a certain sound in different situations as the same sound. Categorical perception is observed in infants from four months of age.

The right ear advantage for speech sounds

The right-ear advantage refers to the finding that language sounds are processed more efficiently when presented to the right ear than to the left ear. Most likely this results from the superior processing of language stimuli by the left hemisphere.

Top-down influences: more on context

The phoneme restoration effect refers to the tendency to hear a complete word even though a phoneme has been removed from the input. On the basis of the context, people still perceive the whole word, for example 'heel' in the sentence 'The *eel of my shoe is broken'. Whether this is due to top-down effects on perception itself, or whether the restoration takes place after perception, remains an open question.

Visual cues: the McGurk effect

Sight also plays an important role in accurate speech comprehension. This is demonstrated by the McGurk effect, a perceptual illusion that illustrates the interplay of visual and auditory processing in speech perception. Participants hear the sound 'ba', for example, but see someone uttering the sound 'ga'; many people then perceive a blend of the two, in this case 'da'.

    Models of speech perception

    Models of speech perception try to explain how information from the continuous speech stream we hear makes contact with our stored knowledge about words. The models fall into two categories: the first assumes that processes of speech perception are modular (i.e. that knowledge of words has no influence on the processing of speech at low levels), the second assumes that they are interactive.

    The cohort model

    This model assumes that speech perception is sequential and that incoming speech sounds have direct and parallel access to the stored words in the mental lexicon. As soon as we hear the first phoneme of a word, we can already form expectations about the likely intended word. The set of words consistent with the sounds heard so far is called the initial cohort. At the uniqueness point, enough phonemes have been heard to recognize the intended word. However, this model does not explain how the beginning of a word is identified, and it says nothing about the role of the size of the cohort.
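    As an illustration (not from the book), the following minimal Python sketch narrows an initial cohort phoneme by phoneme until the uniqueness point; the toy lexicon and phoneme coding are assumptions made for the example.

        # Minimal sketch of cohort-style word recognition over a toy lexicon.
        # The phoneme codes are rough ARPAbet-like labels, chosen for clarity.
        LEXICON = {
            "captain": ["k", "ae", "p", "t", "ih", "n"],
            "captive": ["k", "ae", "p", "t", "ih", "v"],
            "capital": ["k", "ae", "p", "ih", "t", "ah", "l"],
            "dog":     ["d", "ao", "g"],
        }

        def recognize(phonemes):
            """Narrow the cohort phoneme by phoneme; report the uniqueness point."""
            cohort = set(LEXICON)                       # initial cohort: every word
            for i, ph in enumerate(phonemes, start=1):
                cohort = {w for w in cohort
                          if len(LEXICON[w]) >= i and LEXICON[w][i - 1] == ph}
                print(f"after {i} phoneme(s): cohort = {sorted(cohort)}")
                if len(cohort) == 1:                    # uniqueness point reached
                    return cohort.pop(), i
            return None, len(phonemes)

        word, point = recognize(["k", "ae", "p", "t", "ih", "v"])
        print(f"recognized '{word}' at phoneme {point}")

    The sketch shows both ideas at once: all words consistent with the input stay active in parallel, and recognition happens as soon as only one candidate remains.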

    TRACE

    The TRACE model of speech perception presents an alternative to the modular approach, which holds that phonemic processing at lower levels is not influenced by higher-level processes. In TRACE, by contrast, top-down effects play an important role in speech perception. It is a connectionist model, in which activation from spoken input spreads across different processing levels. Multiple sources of information, such as acoustic information, cues from other phonemes and the semantic context, influence speech perception. There are three levels of processing units, dealing respectively with features, phonemes and words. A common criticism, however, is that the model overestimates the role of top-down effects.
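    The following toy sketch (an illustration in the spirit of TRACE, not the published model) shows the interactive idea with just two levels, phonemes and words: bottom-up input, top-down feedback from words to their phonemes, and within-level competition. All weights and the mini-lexicon are illustrative assumptions.

        # Toy interactive-activation loop: two levels (phonemes, words),
        # bottom-up and top-down excitation, and between-word competition.
        WORDS = {"cat": ["k", "a", "t"], "cap": ["k", "a", "p"]}
        PHONEMES = ["k", "a", "t", "p"]

        phon_act = {p: 0.0 for p in PHONEMES}
        word_act = {w: 0.0 for w in WORDS}

        # Ambiguous input: clear /k/ and /a/, but a degraded final segment
        # that only weakly supports /t/.
        bottom_up = {"k": 1.0, "a": 1.0, "t": 0.3, "p": 0.0}

        for step in range(10):
            # Phonemes: driven by the input plus top-down support from words.
            for p in PHONEMES:
                top_down = sum(word_act[w] for w, phs in WORDS.items() if p in phs)
                phon_act[p] = 0.5 * phon_act[p] + 0.4 * bottom_up[p] + 0.2 * top_down
            # Words: driven by their phonemes, minus competition from rivals.
            for w, phs in WORDS.items():
                support = sum(phon_act[p] for p in phs)
                rivals = sum(word_act[v] for v in WORDS if v != w)
                word_act[w] = max(0.0, 0.5 * word_act[w]
                                  + 0.3 * support - 0.2 * rivals)

        print(word_act)   # 'cat' wins the word-level competition
        print(phon_act)   # its win feeds back, so /t/ ends up beating /p/

    The key interactive property is visible in the output: the word level "repairs" the degraded phoneme, which is exactly the kind of top-down influence the modular view denies.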

    Understanding words and sentences

    Lexical access

    Lexical access is the process through which we gain access to stored knowledge about words. Much research has been done on this process. Word naming tasks ask the participant to name a word aloud while response time is measured, which is taken as an index of access speed. Sentence verification tasks present a sentence frame with a target word, and the participant has to decide whether the word fits the frame.

    There are a number of factors that affect lexical access. Frequency effects refer to the finding that the more frequently a word occurs, the more easily it is processed. However, this only applies to open-class words, such as nouns, verbs and adjectives, and not to closed-class words such as articles and prepositions. Priming effects show that lexical access to primed words is faster and easier. The syntactic context also influences lexical decision time: people recognize words faster when they appear in an appropriate grammatical context of a sentence than when they do not. Finally, lexical access is affected by lexical ambiguity: for ambiguous words, multiple meanings are activated, and the decision time for the next phoneme is longer than for unambiguous words.
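    As a rough illustration of the frequency effect (not from the book), the sketch below assumes the common idealization that response time falls linearly with the logarithm of word frequency; the frequency counts and timing constants are made up for the example.

        # Toy model of the frequency effect: predicted response time falls
        # with log frequency. Counts and constants are illustrative only.
        import math

        counts_per_million = {"table": 300, "dog": 100, "gourd": 2}

        def predicted_rt(freq, base_ms=700, slope_ms=40):
            return base_ms - slope_ms * math.log10(freq)

        for word, freq in counts_per_million.items():
            print(f"{word}: ~{predicted_rt(freq):.0f} ms")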

    Syntax and semantics

    Parsing is the process by which we assign a syntactic structure to a sentence. It is investigated in psycholinguistics: the study of the mental processes underlying language comprehension and production. Cognitive psychology was strongly influenced by the work of Noam Chomsky; research inspired by his grammatical theories showed that the grammatical structure of a sentence influences how long it takes to process. Frazier (1987) then described two main strategies for parsing, i.e. assigning the correct roles to words within a sentence. Minimal attachment introduces new items into the phrase structure using as few syntactic nodes as possible. Late closure attaches incoming material to the phrase currently being processed, as long as this is grammatically permitted; in the garden-path sentence 'Since Jay always jogs a mile seems like a short distance', for example, readers initially attach 'a mile' to 'jogs'. This type of model assumes that parsing is incremental: we assign a syntactic role to each word as soon as it is perceived.

    Reading

    Writing systems

    Different languages have different scripts, which differ in the degree and manner in which they represent spoken words. Logographic scripts, such as Chinese, represent morphemes, the units of word meaning. Syllabic scripts use a symbol for each syllable. Consonantal scripts represent the consonants of the language. Alphabetic scripts represent the phonemes, or sounds; this type is the most common among the world's languages. A grapheme is the written representation of a phoneme. In a transparent (or orthographically shallow) script there is a one-to-one correspondence between letters and sounds. In an opaque (or orthographically deep) script there is no such one-to-one correspondence: the same sound can be written in different ways and a letter can be pronounced in different ways (e.g. homophones like the English 'reign' and 'rain').

    Context effects on visual word recognition

    The word superiority effect refers to the finding that a target letter within a letter string is detected more readily when the string forms a word. This shows that context has a major influence on visual word recognition.

    Eye movements

    Saccades are fast eye movements made when scanning an image or reading. Between saccades are fixations, during which the eye lingers briefly on an area of interest in a visual scene. Research shows that fixation time on a word is reduced if it has been seen before and if the word is easy to recognize. Some words are fixated longer than others.

    Dual route model of reading

    Despite its name, this model distinguishes three routes for reading. Route 1, the grapheme-to-phoneme conversion (GPC) route, converts writing into sounds. Route 2, the lexical route, allows reading via word recognition and the semantic system. Route 3 bypasses the semantic system and accounts for cases in which a word is read aloud correctly even though its meaning is not recognized.
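    A minimal sketch (not from the book) of how the GPC and lexical routes divide the work might look as follows; the toy lexicon and letter-to-sound rules are assumptions made for the example.

        # Minimal sketch of dual-route reading aloud with a toy lexicon and
        # toy grapheme-to-phoneme (GPC) rules; a sketch, not the full model.
        LEXICON = {"yacht": "/jɒt/", "cat": "/kat/"}   # whole-word pronunciations
        GPC_RULES = {"y": "j", "a": "a", "c": "k", "t": "t", "h": "h"}

        def read_aloud(letter_string):
            if letter_string in LEXICON:           # lexical route: lookup succeeds,
                return LEXICON[letter_string]      # handling irregular words like 'yacht'
            # GPC route: assemble a pronunciation letter by letter. This is what
            # lets us read non-words, but applied to 'yacht' it would wrongly
            # regularize it to /jakht/.
            return "/" + "".join(GPC_RULES.get(ch, "?") for ch in letter_string) + "/"

        print(read_aloud("yacht"))   # lexical route: /jɒt/
        print(read_aloud("cat"))     # lexical route: /kat/
        print(read_aloud("tac"))     # GPC route for a non-word: /tak/

    The division of labour in the sketch anticipates the dyslexia evidence below: losing the lexical route leaves only regularized readings (surface dyslexia), while losing the GPC route leaves non-words unreadable (phonological dyslexia).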

    The brain and language comprehension

    Neuropsychology of speech comprehension

    The brain area most associated with language comprehension is Wernicke's area. Broca's area also plays an important role, especially for sentences with a more complex structure. Pure word deafness refers to a disorder in which the recognition of speech sounds is impaired while the recognition of non-speech sounds is not. In pure word meaning deafness, the patient can repeat a word but cannot understand it. The existence of these disorders suggests that there are three routes for processing spoken words: one giving direct access to the phoneme level, and two that run via auditory analysis of familiar words and give access to lexical information.

    Neuropsychology of reading

    Evidence for the dual route model comes from research into acquired dyslexia. A distinction is made between surface dyslexia, in which reading irregular words is impaired but reading regular words is not, and phonological dyslexia, in which only the reading of non-words is impaired.

    Electrophysiological data

    Electrophysiological research with event-related potentials shows the involvement of different brain areas in reading, such as the inferior frontal and premotor cortex. The activation of brain areas differs between deep and transparent scripts.

    What role does emotion play in cognition? - Chapter 14

    Emotion plays an important role in cognition. Face recognition, for example, is severely impaired if the emotional connection is lost.

    What is an emotion?

    Emotion refers to a number of mental states, including anger, joy and disgust. These are short-lived states related to a particular mental or real event. Emotions provide us with important information, for example about how the execution of our plans relates to our goals (whether they are being achieved, for instance), and they help to reduce discrepancies between actual and expected outcomes. Because emotion was long seen as irrational and difficult to investigate, there was little research on emotion in cognitive psychology for a long time. Brain areas that play an important role in emotion are the amygdala (fear, anger, disgust, joy and sadness) and the insula (disgust, among others).

    Core emotions

    Emotions are associated with distinct facial expressions and gestures. Each culture has display rules: social conventions that determine how, when and with whom emotions may be expressed. Nevertheless, there is evidence for a basic set of emotional expressions shared across cultures, although the degree of universality of facial expressions is still under debate. Facial expressions in babies and blind people suggest that emotions are partly innate.

    Ekman identified six basic emotions: anger, disgust, fear, joy, sadness and surprise. Later, a number of emotions were added to this set, such as pride, contentment and hatred. Languages differ in how they name emotions; in English, for example, there is no single word for 'bad luck'. The identified basic emotions might therefore have been different if research in this area had been dominated by a language other than English.

    The 'core' of emotions

    There is more to an emotion than a particular facial expression. Physiological changes, behaviors, beliefs and thoughts all characterize emotions. According to Clore and Ortony (2000), emotions involve a cognitive component (the appraisal of the emotion-eliciting event), a motivational-behavioral component (our actions in response to an emotion), a somatic component (the bodily reaction) and a subjective-experiential component.

    Theories of emotion and cognition

    An important issue in theories about the relationship between cognition and emotion revolves around the question of what comes first: cognition or emotion.

    Early theories and their influence

    The James-Lange theory of emotion

    This theory states that the experience of an emotion follows the physiological changes associated with that state. Although this seems counterintuitive, there is evidence for the facial-feedback hypothesis: the assumption that feedback from the facial muscles can influence the emotional state. For example, if people are asked to adopt a smiling facial expression, they feel happier afterwards. Similarly, people with damage to the spinal cord report experiencing less intense emotions. However, the conscious experience of an emotion sometimes precedes the physical change: if you realize you have said something embarrassing, you blush only afterwards.

    The Cannon-Bard theory

    Cannon's criticism of the above theory was that the same physiological state can be associated with different emotions. An accelerated heart rate, for example, can accompany both anger and fear. The same physiological state can also occur without any emotion (for example during physical exertion).

    Finally, the conscious experience of an emotion arises quickly, whereas visceral changes, for example, are slower. Cannon therefore proposed that the experience of emotion and the physical response to an event arise independently of each other. However, this theory neglects the role of cognition.

    The two-factor theory

    This theory states that emotion arises from two factors: physiological arousal and our interpretation of it. If you notice that your heart is beating faster while you are about to take an exam, you interpret this arousal as fear; if you are having an argument, you interpret the same palpitations as anger. This theory has had a lasting influence on later theories of emotion.

    Affective-primacy: Zajonc's theory

    This theory states that cognition is not necessary for emotion and that the two systems can function independently. Although cognition can influence emotion at a later processing stage, the initial emotional response is unaffected. Evidence for this approach comes from research into the mere exposure effect: the tendency to develop a preference for a stimulus to which one is repeatedly exposed. Emotion may therefore occur without cognition, although the debate about whether cognition or emotion comes first is far from settled.

    Cognitive primacy: Lazarus's theory

    This was the first appraisal theory: it assumes that emotions result from our interpretation of events. Cognitive appraisal is thus fundamental to emotional experience and cannot be separated from it. Appraisal depends on whether an event is seen as positive or negative, on the resources we have at our disposal to deal with it, and on how we monitor the situation. Indeed, research shows that how we think about a stimulus influences our emotional experience. Multi-level theories, however, state that both pre-attentive and conscious processes are involved in emotion, rather than just one of them as in the two theories above.

    Effects of emotion on cognition

    Emotion and attention

    Attentional bias refers to the tendency of emotional stimuli to attract or hold our attention. In the emotional Stroop task, participants are asked to name the color in which a word is printed; if the word has emotional value, it holds attention longer and performance on the task suffers. The visual search task is also used to examine the effects of emotion on attention.

    Emotion and perception

    Emotion also appears to have an effect on early stages of perception. For example, the presence of an emotional stimulus increases sensitivity to contrast. Emotions also play a role in other senses. For example, the perception of loudness of a sound is influenced by the emotional value of the sound.

    Emotion and memory

    Extreme emotion can have a negative effect on memory, as we saw earlier with flashbulb memories. Memories of emotional events are less detailed, more often incorrect and sensitive to bias, and false memories are more easily induced for emotionally charged events. Research shows that the timing of retrieval is crucial: the more time between the event and its recall, the greater the chance of error. Memory for facts, however, seems to be better when learning is associated with emotion. Tunnel memory refers to the enhancing effect of negative emotion on memory for the central details of an event and its impairing effect on memory for peripheral details.

    There is much evidence for mood-congruency: the tendency to remember events that are consistent with one's current mood state. This effect is often explained by network models, in which memories are treated as nodes in a network that influence each other through spreading activation. State-dependent memory refers to the facilitation of memory when the mental or physiological state at encoding matches that at retrieval. Some findings, however, are inconsistent with an associative network model: people sometimes retrieve more positive memories when they are in a negative mood, for example.
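    As an illustration of such a network account (in the spirit of Bower's 1981 model, but with made-up nodes and weights), the sketch below lets a mood node spread activation to associated memories, making mood-congruent memories the most accessible.

        # Toy spreading-activation sketch of mood-congruent memory. The
        # nodes, links and weights are illustrative assumptions, not data.
        LINKS = {
            "sad mood":   {"funeral": 0.8, "failed exam": 0.7, "birthday": 0.1},
            "happy mood": {"birthday": 0.8, "holiday": 0.7, "funeral": 0.1},
        }

        def memory_activation(current_mood, baseline=0.2):
            """Each memory's activation = baseline + spread from the mood node."""
            spread = LINKS[current_mood]
            memories = {m for links in LINKS.values() for m in links}
            return {m: baseline + spread.get(m, 0.0) for m in memories}

        for mood in LINKS:
            acts = memory_activation(mood)
            best = max(acts, key=acts.get)
            print(f"{mood}: most accessible memory = {best!r} ({acts[best]:.1f})")

    The mood-incongruent findings mentioned above are a problem for exactly this mechanism: a purely associative spread predicts that a negative mood should always favor negative memories.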
