Summary of the 6th edition of Sensation and Perception by Coren & Ward


Why study perception? - Chapter 1

All the things you know about the world, you know because you have senses. Without vision, hearing, touch, taste and smell, people would live in a dark and silent world; the world as we know it would not exist. One might argue that the important things we know about the world do not come from our senses: we use telescopes to look into space, sonar to detect objects at sea and CAT scans to look inside the human body. Although this is true, it is still the eye of the scientist that looks through the telescope or at the outcome of a CAT scan, and that eye can misread what it sees. Our knowledge of the world thus depends on our senses, and it is therefore important to know how they function. It is also important to know how well the picture of the outside world created by our senses corresponds to external reality, because our senses can mislead us. Sometimes you see a drawing in which two lines seem to differ in length when in fact they do not. Perception is a sophisticated activity: classifications and comparisons are needed before we become consciously aware of the data in our senses, and sometimes our perception misleads us. When there is a disagreement between percept and reality, an illusion has occurred. One such illusion is built into the Parthenon in Greece. It is perceived as regular and square-edged even though its roof is curved outwards and its columns lean inward. The columns also differ from one another: the outer columns are somewhat thicker than the inner ones. All these subtle features make the shape of the Parthenon appear square-edged; if the Parthenon had actually been built geometrically square, people would not perceive it as such.

Researchers in the field of perceptual research study sensation, perception, cognition, and the underlying information-processing mechanisms. There are many theoretical approaches to perceptual problems. One is biological reductionism, which suggests that for every sensation there is a corresponding physiological event. The most recent version of this approach includes the idea of perceptual modularity: the mind is viewed as a set of distinct, complete and specific modules. The idea is that perception works fast and that no controlled intervention is needed to relay the information to higher brain centres. Another well-known approach is the direct perception approach. According to it, no top-down processing is needed for perception, because the stimulus itself possesses all the necessary information. This information consists of invariants: fixed properties of the stimuli.

The computational theories of perception relate features of objects and aspects of the environment to aspects of the stimuli. This approach closely resembles the direct perception approach; however, the two differ in that the computational approach assumes that some kind of mathematical processing involving the object and its environment is needed before the object is perceived.

Constructive theories of perception, or theories of intelligent perception, hold that a number of different factors are combined to form the final percept. More precisely, previous experience and expectations may be involved in the evaluation of the environment or of an object, and thus in the process of perception. It is important to keep in mind that no single theory or approach can explain perception on its own; it is likely that each approach contributes something to the field.

How can perception be measured? - Chapter 2

Measuring perception can be tricky, because it is difficult to be objective about perceptions. The first issue scientists have to deal with is human error. Scientists must rely on what people tell them they perceive; they obviously cannot see through the eyes of their participants. We can measure the intensity of a certain colour, but we do not know whether a participant perceives that intensity the same way we do. Perception researchers have developed a varied set of procedures to test the perception of participants, and this chapter is about these procedures. Another, closely related issue is that only one person has direct access to a perceptual experience. You may feel that you know how somebody must feel in a certain situation, but you cannot actually know. We can place ourselves in somebody else's shoes, but we can never verify whether the two experiences are the same. This kind of knowledge is called first person data; other names for it are phenomenology and introspection. In contrast, the data perception researchers work with are third person data: the data are objective, so people who follow the same procedures (during an experiment) will obtain similar results.

The study of the relationship between experienced sensation and physical stimuli is called psychophysics. The term was coined by Fechner. According to Fechner, the mind-body problem could be solved if three measurement problems were solved. The first was to find a way to measure the minimum intensity at which a stimulus is perceived; this is called detection. The second was to find a way to measure how different two stimuli must be before they can be told apart; this is called discrimination. The last was to find a way to measure sensation intensity, in order to determine the relationship between the intensity of a stimulus and the intensity of our sensation; this is called scaling.

Detection

Energy changes have an impact on our sensory systems. These energy changes can be electromagnetic (light), chemical (taste, smell), thermal (cold, heat) or mechanical (sound, touch) stimulation. Detection refers to how much of an energy change is needed for an individual to hear, feel or see it. Below a certain level, called the absolute threshold, you cannot detect a stimulus. The relationship between stimulus intensity and the proportion of presentations that are detected can be described with a graph called a psychometric function.

The absolute threshold can be measured with the method of constant stimuli. For example, the experimenter gives the participant headphones and seats the participant in a quiet room. The experimenter then presents the participant with tones of different intensities. The tones are presented one at a time, and each is presented many times in an irregular order. The participant must indicate whether he or she heard the stimulus.

There is, however, no sharp transition from not hearing to hearing; it is a gradual process. That is why researchers have to make a somewhat arbitrary decision as to what the absolute threshold is. By convention, the absolute threshold is the stimulus intensity that observers detect exactly 50% of the time. A drawback of the method of constant stimuli is that it is time consuming.

There is a way to avoid the time-consuming aspects of the method of constant stimuli by focusing only on stimuli near the absolute threshold. This is called the method of limits. The researcher presents the participant with a stimulus at an intensity that can be heard and then decreases the intensity in small steps until the participant reports that he or she can no longer hear the stimulus; this is called a descending series. The researcher can also start with an intensity that cannot be heard and increase it over successive trials until the participant hears it; this is called an ascending series. Researchers can use adaptive testing to get more information about the absolute threshold. One adaptive procedure is the staircase method. Every time a participant says they hear the stimulus, the researcher decreases the intensity by one step. When the participant no longer hears the stimulus, the researcher reverses direction and increases the intensity by one step until the participant hears it again. The researcher does this many times and averages the stimulus values at which the reversals occurred; this average is the person's threshold estimate. The staircase method is both efficient and reliable.
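
To make the staircase procedure concrete, here is a minimal simulation sketch in Python. It assumes a hypothetical observer whose sensitivity is corrupted by Gaussian internal noise; the true threshold, noise level, starting intensity, step size and number of reversals are all invented values for illustration.

```python
import random

def simulated_observer(intensity, true_threshold=50.0, noise_sd=5.0):
    """Hypothetical observer: reports 'heard' when the noisy
    effective intensity exceeds their true threshold."""
    return intensity + random.gauss(0, noise_sd) > true_threshold

def staircase(start=80.0, step=2.0, n_reversals=8):
    """Simple up-down staircase: decrease intensity after 'yes',
    increase after 'no'; average the reversal points."""
    intensity = start
    reversals = []
    last_response = None
    while len(reversals) < n_reversals:
        response = simulated_observer(intensity)
        if last_response is not None and response != last_response:
            reversals.append(intensity)  # the direction flips here
        intensity += -step if response else +step
        last_response = response
    return sum(reversals) / len(reversals)

print(f"Estimated threshold: {staircase():.1f}")
```

Averaging the reversal points converges on the intensity detected about half the time, which matches the 50% convention described above.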

Of course, there are participants who lie about hearing the stimulus, perhaps because they think it looks better if they can hear well. Researchers can adopt strategies for dealing with this issue, such as catch trials: trials on which no stimulus is presented, so the participant cannot possibly hear one. If the participant says that he or she heard something on such a trial, the researcher knows this is not true. If a participant says 'yes' too often on catch trials, his or her threshold estimate is adjusted.

The signal detection theory

Signal detection theory holds that any stimulus must be detected against a background of ongoing noise: internal noise in our sensory systems as well as noise in the environment. On any trial, the observer must therefore decide whether the sensory activity was produced by a signal plus this background noise or by the noise alone (which may be why somebody misses a signal, or thinks one was presented when it was not). In this theory there is no absolute threshold; there are only observations, on trials in which the signal is either present or absent. The theory is used to determine how sensitive somebody is to a certain signal. To measure this sensitivity, and the observer's response bias, the experimenter must use two types of stimulus presentation: signal-absent trials, which are essentially catch trials on which no stimulus is presented, and signal-present trials, on which the experimenter presents a signal.

Based on this, there are four types of outcome. When a signal is presented and the participant indicates that he or she heard it, this is called a 'hit'. When a signal is presented but the participant indicates that he or she did not hear it, this is called a 'miss'. When no signal is present but the participant indicates that he or she heard something, this is called a 'false alarm', and when he or she correctly indicates that nothing was heard, it is called a 'correct negative'. An outcome matrix is a table showing how often each of these outcomes occurred.

Why would somebody respond 'yes' when the researcher did not send a signal? This does not mean that the person intentionally lied. The participant may have expected a signal to be presented and interpreted the faintest sensation as one. Research has found that when the signal is presented more often, participants are also more likely to say they heard it even when it was not presented. One can draw a probability distribution of the sensory activity level for signal-absent and signal-present trials. It is assumed that above a certain criterion the participant says he heard the sound and below that criterion he says he did not. This criterion is called beta, or β. Sensitivity is called d' and is measured as the distance between the centres of the signal-present and signal-absent distributions. The outcome matrix shows the proportion of trials on which each of the four possible results occurred.
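
Given the proportions in the outcome matrix, d' and β can be computed with the standard equal-variance Gaussian model. A small sketch, assuming the hit and false-alarm rates (80% and 20%) are just example numbers:

```python
from scipy.stats import norm

def signal_detection_indices(hit_rate, false_alarm_rate):
    """Compute sensitivity (d') and response criterion (beta)
    from hit and false-alarm proportions."""
    z_hit = norm.ppf(hit_rate)         # z-score of the hit rate
    z_fa = norm.ppf(false_alarm_rate)  # z-score of the false-alarm rate
    d_prime = z_hit - z_fa             # distance between distribution centres
    # beta: likelihood ratio of the two distributions at the criterion
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)
    return d_prime, beta

# Example outcome matrix: 80% hits, 20% false alarms
d, b = signal_detection_indices(0.80, 0.20)
print(f"d' = {d:.2f}, beta = {b:.2f}")
```

Here β comes out at 1.0, meaning this example observer is unbiased; values above or below 1 would indicate a conservative or liberal criterion.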

The time between the onset of a stimulus and the beginning of a response to it is called the simple reaction time. It reflects the time needed for the stimulus to reach the sensory system and for the motor system of the brain to be activated. This also means that we do not experience events in the world at the moment they happen; we experience them at least some 160 milliseconds later. The more intense a stimulus is, the faster the reaction time will be.

Discrimination

Discrimination in this context means differentiating between two stimuli: one must judge how the stimuli differ from each other. The standard is the stimulus with which the other stimuli are compared. Researchers use the standard in every trial and alter the other stimuli on one dimension; together, these altered stimuli are called comparison stimuli. The difference between the standard and the other stimuli can be measured with the difference threshold. The point at which somebody says half the time that a stimulus (say, the intensity of a light) is brighter than the standard and half the time that it is fainter is called the point of subjective equality. Stimuli in the so-called 'interval of uncertainty' are perceived as very similar to the standard. The smallest difference that can be detected is called the just noticeable difference, or jnd. When the experimenter presents the comparison stimulus some time after the standard, the standard will tend to be judged as less intense than it actually is. This error results from the fact that the comparison stimulus is being judged against a memory of the standard, and that memory is not as sharp as the newly presented stimulus. This is called the negative time error.

Research has shown a pattern between the size of the just noticeable difference and the size of the standard stimulus: when the weight of the standard stimulus is increased, the difference threshold grows as well. This relation is expressed in Weber's law, ΔI = kI, where ΔI is the difference threshold, I is the intensity of the standard stimulus and k is a constant. The constant k is also called the Weber fraction and is equal to ΔI/I. Whether k is large or small depends on what you are measuring: for light or sound k can be relatively large, but for electric shock k is small.
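
Because Weber's law is a simple proportionality, the predicted difference threshold is a single multiplication. In the sketch below the Weber fractions are illustrative placeholders only, since measured fractions vary across studies and conditions:

```python
def weber_jnd(standard_intensity, k):
    """Difference threshold predicted by Weber's law: delta-I = k * I."""
    return k * standard_intensity

# Illustrative Weber fractions (placeholder values, not measured constants)
for modality, k in [("brightness", 0.08), ("loudness", 0.05), ("electric shock", 0.01)]:
    print(f"{modality}: jnd for a standard of 100 = {weber_jnd(100.0, k)}")
```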

The previous part was about difficulties in discriminating stimuli. Sometimes it is obvious that the stimuli are different, but it still takes time to make the discrimination, and some discriminations are easier to make than others. The time it takes to make the discrimination is called the choice response time. If participants are asked to decide which of two lines is longer, they will take more time when the difference between the lines is smaller: the more similar things are, the longer it takes to distinguish between them. If people are given little time, they make more mistakes. The relationship between time and errors made is called the speed-accuracy trading relationship (an S-shaped curve).

Scaling

Scaling concerns how much of a given quality there is in a sensation. Numbers can be assigned to objects, but not all psychological qualities can be measured with numbers; shape, for example, cannot. When you measure how much or how intense something is, you measure on a prothetic continuum: when the physical stimulus changes (for example, it gets brighter), the perceived quantity also changes. When you measure what kind of thing something is, you measure on a metathetic continuum: a light can be red, yellow, green or another colour, and this cannot be measured quantitatively.

There are different types of scaling you can use to quantify the intensity of sensations. Direct scaling means assigning a number to the magnitude of a sensation; it is easy and straightforward, but not necessarily trusted. Fechner derived a law, Fechner's law: S = (1/k)ln(I/I0), where S is the magnitude of sensation a stimulus elicits, I/I0 is the physical magnitude of the stimulus relative to the absolute threshold, 1/k is the inverse of the Weber fraction and ln is the natural logarithm. Fechner's law is a form of indirect scaling, which is often seen as unnecessary. Category scaling is a direct scaling method and one of the most used: sensations are placed into a limited number of categories, for example 1 to 10. Category scaling is somewhat indirect, because the available responses are limited to a few category labels; stimuli that are similar end up in the same category simply because there are not enough categories. Stevens found a solution to this: magnitude estimation experiments, in which participants are free to assign any number to the sensation elicited by each stimulus. This is summarised in Stevens' law (a direct scaling method): S = aI^m, where S is the measure of sensation intensity, a is a constant, I is the stimulus intensity and m is an exponent that differs for different sensory continua.
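
The contrast between the two laws is easy to see numerically. A sketch comparing them, where the constants k, a and m are arbitrary illustrative choices (an exponent around 0.33 is often cited for brightness):

```python
import math

def fechner(I, I0=1.0, k=0.1):
    """Fechner's law: S = (1/k) * ln(I / I0)."""
    return (1.0 / k) * math.log(I / I0)

def stevens(I, a=1.0, m=0.33):
    """Stevens' power law: S = a * I**m."""
    return a * I ** m

# Sensation grows logarithmically under Fechner's law
# but as a power function under Stevens' law.
for I in [10, 100, 1000]:
    print(f"I={I}: Fechner S={fechner(I):.1f}, Stevens S={stevens(I):.1f}")
```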

Some of these scaling measures seem to tell us more about numbers than about sensations and stimuli. Stevens therefore invented a scaling procedure that does not use numbers at all: the participant adjusts the intensity of one sensation until it seems equal to a sensation elicited by a stimulus from a different modality. For example, the participant squeezes a hand grip until the pressure feels as strong as a certain light is bright. This is called cross-modality matching.

Identification

Identifying stimuli can be easy or difficult, depending on the number of alternatives a person is asked to distinguish among. With two alternatives it is easier to find the right answer than with twenty. When the observer identifies the stimulus without being affected by distortion, so that the identification corresponds to the actual stimulus, the perception is called veridical. Information theory measures the performance of a communication channel. Information refers to the reduction of uncertainty; uncertainty means that, lacking sufficient data, we have to guess. When a stimulus is presented and the observer's response matches the label of the stimulus, information transmission has occurred. When there are eight equally likely alternatives, e.g. the answer can be 1, 2, 3, 4, 5, 6, 7 or 8, then 3 bits of information are transmitted to the observer, since one bit eliminates exactly half of the alternatives. You calculate this with the logarithm to base 2: 2^3 = 8, so log2(8) = 3 bits. In other words, an observer who received no information about the stimulus could identify it in three yes/no guesses, each halving the set of alternatives. When you tabulate the responses of observers (how often each response matches each stimulus), you get a so-called confusion matrix.
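
The bit count is just a base-2 logarithm, as a short sketch shows:

```python
import math

def bits_of_information(n_alternatives):
    """Bits needed to identify one stimulus among N equally likely
    alternatives: log2(N). Each bit eliminates half of the alternatives."""
    return math.log2(n_alternatives)

print(bits_of_information(8))  # 3.0 bits, because 2**3 = 8
print(bits_of_information(2))  # 1.0 bit
```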

Researchers have found that people can identify on average about 7 different stimuli that vary along a single physical dimension. This is called the observer's channel capacity. Seven seems very small, since we know many different songs and can distinguish among hundreds of faces; but most things vary along many dimensions simultaneously, not just one.

The number of alternatives also influences identification time: there is a lawful relationship between identification response time and the number of bits of information in a stimulus. This is called Hick's law. Identification time is also influenced by other things. It can be influenced by internal factors, like motivation and attention, and by familiarity: when you have seen the stimulus recently, you will identify it faster. Because identification time is sensitive to so many factors, it is often used as a dependent measure in studies: the other factors are held constant so that the factor the researcher wants to study is the only one influencing the identification time.
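
Hick's law is usually written as RT = a + b·log2(N). A short sketch; the intercept a and slope b are empirical constants that differ across tasks and observers, so the values here are placeholders:

```python
import math

def hicks_law_rt(n_alternatives, a=0.2, b=0.15):
    """Hick's law: identification time grows with the information in
    the stimulus set, RT = a + b * log2(N) (a, b in seconds, illustrative)."""
    return a + b * math.log2(n_alternatives)

# Each doubling of the alternatives adds a constant amount of time
for n in [2, 4, 8, 16]:
    print(f"{n} alternatives: {hicks_law_rt(n):.2f} s")
```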

How does the visual (nervous) system work? - Chapter 3

A neuron is a single cell and forms the building block of our nervous system. A neuron can communicate with its neighbours with electrical and chemical signals. The three most common types of neuron are the sensory neuron, the interneuron and the motor neuron. A sensory neuron passes information from the outside world to other neurons, interneurons share information between neurons, and a motor neuron sends nerve impulses from the central nervous system to the muscles. All these neurons are composed of three parts: axon, cell body and dendrites. The cell body contains the nucleus, which is important because it holds the genetic material that governs the function of the neuron. The dendrites receive information from incoming nerve fibres of other neurons. The long fibres that conduct nerve impulses toward other neurons are the axons. Axons are usually covered by protective cells, the glial cells, which protect and nourish the neurons. White matter consists of neurons whose axons are covered by myelin sheaths; these myelin sheaths improve the speed of communication between neurons. Grey matter, on the other hand, contains clusters of many cell bodies. A nerve is a pathway, formed by many axons together, that carries information from one part of the body to another. The central nervous system (CNS) receives sensory information carried by nerves; within the central nervous system such a pathway is no longer called a nerve but a tract.

Neurons communicate with each other by means of electrical charges. A neuron at rest is electrically negative on the inside with respect to the outside; this is called the resting potential. The negative inside is caused by negatively charged proteins inside the cell. Sodium and potassium ions also contribute to the resting potential: the sodium-potassium pump ejects three sodium ions from the cell for every two potassium ions it lets in, and because of this continual flow the resting potential is maintained. A change of the electrical potential from the resting potential to a less negative potential is called depolarization; a change to an even more negative potential is called hyperpolarization. A brief, abrupt change in the cell's state is called an action potential or neural spike, and the period in which the cell becomes negative again after an action potential is called the refractory period. The more strongly sensory neurons are stimulated, the greater their change in electrical potential; this is called a graded potential. Neurons that use action potentials rather than graded potentials fire more spikes when stimulation is increased, so the overall activity level of the neuron is higher. The activity level of a neuron is called its firing rate. When axons are thicker and myelinated, spikes are communicated more quickly from one neuron to another.

There are two ways to study neural functioning in sensory systems and in the brain. The first is at the microscopic level: researchers look at the behaviour of individual neurons in brain regions and known sensory pathways. The second is at the macroscopic level: scientists look at larger brain regions and neural systems. To record from a single neuron, scientists use microelectrodes in animals. They insert an electrode into the cell body, positioning it precisely with a stereotaxic instrument, and look at the difference between the electrical activity at the test electrode and at a comparison electrode; the amplified neural spikes can be converted into a series of audible clicks. Of course, there are ethical guidelines for this research, to ensure that the animals are treated humanely.

To study regional brain function, one can look at what happens when the neurons of a part of the brain are destroyed (lesion) or removed (ablation), though the results are sometimes hard to interpret. Scientists also use electroencephalograms (EEG), with which they can see the activity of cells in the outer layer of the brain. An EEG measured in response to a stimulus is called an evoked potential. Its temporal resolution is very good, but its spatial resolution is poor, because the skull blurs the localization data. Magnetoencephalography (MEG) can compensate for this limitation of the EEG: both its spatial and its temporal resolution are good. However, this method has its own disadvantages, one of which is cost; MEG is very expensive. Another technique is positron emission tomography (PET), in which a radioactive form of glucose is injected into a person. The person performs a certain task (like watching a sad movie) and the activity of the brain regions is studied, so scientists can see which parts of the brain are active during a particular task. PET originated from a method called regional cerebral blood flow (rCBF), which registered changes in the pattern of blood flow in the brain by means of a radioactive substance that enters the blood without being absorbed.

The method of magnetic resonance imaging (MRI) is based on the same idea, namely measuring blood flow activity in the blood vessels. An advanced version of MRI is functional magnetic resonance imaging (fMRI), which localises brain activity and thereby provides information about the shapes and structures of the active regions. Haemoglobin binds with oxygen but shows a slight change in magnetic properties when it releases it. Blood flow, and the oxygen used by neurons, increases during cognitive, emotional, perceptual and information-processing tasks, and this change in activity can be seen in the magnetic properties of the blood. fMRI provides maps with a temporal resolution of less than a second and a spatial resolution of 1 or 2 millimetres. Lastly, the book discusses transcranial magnetic stimulation (TMS), in which a selected region of the brain is briefly knocked out by a powerful magnetic discharge. The participant is instructed to do a cognitively demanding task during the knock-out, and the researchers record the consequences for thinking and perceiving. TMS can induce short-lived blind spots and amnesia in human participants, and it is most informative when combined with EEG and MEG.

The eye

Many animals have a similar basic eye structure. The outer white covering of the eye is called the sclera, and interior fluids maintain the shape of the eye. The front of the eye, which looks like a window, is called the cornea; it functions like a lens. Between the cornea and the lens is a small chamber filled with watery fluid, called the aqueous humor. A larger part of the eye is filled with a clear substance called the vitreous humor. The ring of colour is the iris, whose function is to control the amount of light entering the eye. The hole in the iris is called the pupil. The pupil is controlled by a reflex: if there is much light, the pupil will be smaller than when there is little light. The size of the pupil can also change with our emotional and attentional state; when we are interested in something, our pupils become bigger.

The lens can change its focus, and this is called accommodation. The lens changes its focus by changing its shape. Muscles control the lens: when these muscles relax, the pressure of the fluids in the eyeball and the tension on the lens cause the lens to flatten, so that distant objects are in focus. Contraction of these muscles removes some tension from the lens, which then becomes more spherical (rounder), making it possible to focus on near objects.

Around the age of 40, the ability to accommodate diminishes because the inner layers of the lens die and the lens loses some of its elasticity. Problems with the bending or focusing of light are known as refractive errors. In presbyopia, the near point distance is increased; the near point is the shortest distance at which an object can still be seen without blur. As a result, older people hold reading material at a fairly wide distance to avoid blur. This can be corrected optically. An eye with normal accommodative ability is called emmetropic. If the eye is too short, or the light rays are not bent sharply enough by the cornea, distant objects are seen clearly but near objects are not really in focus; this is called farsightedness or hypermetropia. If the eye is too long, or the light rays are bent too sharply by the cornea, near objects are in focus and distant objects are a blur; this is called nearsightedness or myopia. People's lenses can also become yellow when they spend much of their lives in bright sunlight. This is called phototoxic lens brunescence, or browning of the lens.

The back of the eye, onto which the image formed by the cornea and lens falls, is called the retina. In the retina, light is transformed into a neural response; the transformation of a physical entity (light) into a neural signal is called transduction. The retina contains three major layers. Layer one consists of the photoreceptors, of which the human eye has two types, cones and rods. Layer two, the bipolar cells of the retina, make synapses with the photoreceptors and the ganglion cells. The third layer, the ganglion cells, forwards the information to the brain. Horizontal cells have short dendrites and lie close to the photoreceptors; the amacrine cells lie between the bipolar and ganglion cells. More than 30 different types of amacrine cells exist. Horizontal and amacrine cells allow communication and interaction between adjacent cells, and they modify the visual signal.

The optic axis is an imaginary line between the centre of the retina and the pupil. The most important part of this central region is a yellow patch called the macula. In the centre of the macula is a very small region called the fovea centralis. The fovea is important for vision: if you look directly at an object, the image of that object falls directly on the fovea, which contains many cones. The duplex retina theory of vision holds that there are two types of photoreceptor with different visual functions: the rods (scotopic) are used for vision under dim light conditions, and the cones (photopic) for vision under daylight or bright conditions. Individuals whose retinas are equipped with few or no rods have normally functioning vision under daylight conditions but become functionally blind in the dark; these people have night blindness. People with few or no cones lack colour vision, have poor visual acuity and find normal levels of daylight painful, but when the light is dim they can see normally; they are said to suffer from day blindness. When a substance absorbs a lot of light, it looks darkly pigmented. Chemical reactions in rods and cones reassemble the pigments from their parts. The rod pigment is called rhodopsin; it regenerates in the dark with the help of vitamin A. The pigments in the cones are called iodopsins.

The neural pathway that transmits visual information to the brain is called the optic nerve. The blood vessels lie at the centre of the optic nerve, and where this bundle exits the retina there are no photoreceptors, so light falling on this spot cannot be detected; it is therefore called the blind spot. Ganglion cell axons combine the activity of rods and cones. The region of the retina where light alters the firing rate of a given ganglion cell is called the cell's receptive field. One type of response is the on response, in which neural impulses follow the onset of the stimulus. The off response is a response in which the cell fires beginning at the termination of a stimulus. There can also be a hybrid, the on-off response, in which neural responses occur both when the stimulus appears and when it disappears. It is assumed that the lateral connections of off-centre cells lie closer to the photoreceptors. On-centre cells are more responsive to brightness, while off-centre cells are more responsive to relative darkness; thus the processing of darkness and the processing of brightness can be distinguished as two fundamentally different processes. For instance, if aminophosphonobutyrate (APB) is administered, the on-centre cells stop responding while the off-centre cells continue to respond.

There are ganglion cells with big cell bodies and ganglion cells with small cell bodies. The big ones are called magno cells and the small ones parvo cells. Magno cells have a much broader range over which to communicate with neighbouring cells in the peripheral region of the retina. The magno cells are very sensitive to high contrast, motion and certain depth perception cues; their activity is brief, whereas the activity of a parvo cell continues until the stimulus disappears. The parvo cells are more concerned with colour, form and spatial analysis.

The visual brain

When we look at something, the visual image of the world is represented upside-down and left-right reversed on the retina. The two optic nerves that leave the eyes come together at a point that looks like an X; this point is called the optic chiasm. The tectopulvinar system is an important pathway of visual information: it is important for the perception of motion and for the control of eye movements, and it runs to the superior colliculi in the brain stem. It carries input from magno cells, which, as explained earlier, are rapid and very responsive to sudden changes in illumination; no parvo cells are found in the tectopulvinar system. The superior colliculus, like the magno cells, is very sensitive to motion and location, and it also receives auditory and tactile stimulation; it combines and integrates visual and other sensory information. When an image has been received, there are returning signals called back projections, which give feedback based on information that has already been sent to the brain. These signals are mainly found for the primary visual cortex (Area V1) and the centre of visual motion processing (V5). From the superior colliculi the information is transmitted further to the pulvinar and the lateral posterior nuclei, which lie near the thalamus. The fibres then connect to V2.

The most important pathway for humans is the geniculostriate system. This system terminates in the lateral geniculate nucleus, which is part of the thalamus. The lateral geniculate nucleus has six distinct layers of cells. These cells become stimulated when the stimulus is in their receptive field, and they have either an on-centre and off-surround or an off-centre and on-surround organisation. The upper four cell layers hold parvo cells, whereas the lower two layers contain magno cells. The optic radiations are the large fan of fibres that emerges once the axons leave the lateral geniculate nucleus and connects to the posterior part of the brain, known as the occipital lobe. The occipital lobe is divided into different visual areas, 36 of which are known to researchers. A few of these regions, such as the primary visual cortex and V2, will be discussed in the next paragraphs.

Visual cortex areas

The most important cortical visual area is the primary visual cortex, Area V1. This is the first cortical step: many signals received by other cortical regions pass through it, and it receives the back projections. The centre of the visual field is represented at the occipital pole, the rearmost part of the occipital cortex. The region below the calcarine fissure represents the upper half of the visual field; the lower half of the visual field is represented in the region above the calcarine fissure. The left occipital cortex represents the right visual field and the right occipital cortex the left visual field. So, if the upper-right occipital cortex were damaged, the person would not be able to see the lower-left visual field. Note how the visual field is reversed in the occipital cortex. The primary visual cortex can therefore be divided into four parts (representing the upper-left, lower-left, upper-right and lower-right visual fields).

These parts were discovered by studying patients who had injuries in them. When a part of this map is damaged, the person is blind in the corresponding part of the visual field; such a blind region is called a scotoma. If the damage covers one of the four quadrants, it is called a quadrantanopia, and if it covers an entire half of the visual field, it is called a hemianopia. The complete loss of vision is called cortical blindness.

Area V1 contains multiple types of cells. One type is the simple cell. These cells respond weakly to small spots of light, have little spontaneous activity, and do not respond to illumination covering the whole screen. They respond to light or dark bars, but only if these bars are located properly and have a particular orientation; this orientation specificity makes them sensitive only to edges of a particular angle. A second type of V1 cell is the complex cell. Complex cells have larger receptive fields than simple cells and, while still selective for orientation, they are not sensitive to the exact position of the stimulus within the receptive field; they show a preference for moving edges and moving bars. The third type is the endstopped cell. These cells respond to bars of specific lengths and are inhibited by other bars; they respond to edges of a particular orientation, moving in a certain direction, provided these fit the preferred length.

The darkly staining regions of V1 are called blobs and the paler regions between them interblobs. Cells preferring a particular orientation tend to be grouped into slabs and columns. A hypercolumn is a small part of the visual cortex in which this grouping of orientations takes place; it receives inputs from both eyes and covers all visual orientations. Multiple parallel pathways send information about form, colour and motion to different lobes. For instance, in the geniculostriate visual pathway the lateral geniculate nucleus sends information from parvo and/or magno cells, and V1 receives this information next. If the information concerns colour or form, it comes from the parvo cells; if it concerns motion, the signal comes from the magno cells. V2 then receives the signals from V1. If it receives input from the blobs (colour), its thin stripes become activated; if it receives input from the interblobs (form), V2 relays it to its interstripes, which deal with form; and if the information from V1 was sent from layer 4B (motion), V2 activates its thick stripes. Next, the prestriate cortex, also called the extrastriate cortex, receives information from V2. If V4 gets information about colour, it forwards it to the temporal lobe (the 'what' pathway); the same happens when V3 receives form and/or local-movement information from V2. If V2 sends information about global movement to V5, V5 in turn activates the 'where' pathway in the parietal lobe.

There are also other visual cortex areas. Area V2 looks like Area V1 and has almost the same functions; together they are called the V1-V2 complex. Area V3 provides information about how forms are moving, rotating and changing, and helps with depth perception; it is the map for form and local movement. Area V4 helps us see colours. Area V5 detects the speed and direction of motion. Somebody with deficits in Area V4 may have problems seeing colours: a person whose Area V4 is damaged on one side can, for example, see colours in the left visual field while everything in the right visual field is seen in shades of grey. This is called cerebral achromatopsia. People with damage to Area V5 can suffer from cerebral akinetopsia (literally 'no motion'), which means that they cannot, for instance, judge the speed of approaching cars, because they do not perceive fluent motion.

There are also other brain areas that serve visual purposes. The parietal lobe is concerned with 'where' an object is. People with damage to the parietal lobe can identify objects but cannot process information about their location; they may neglect objects in one half of their visual field and have difficulty reaching for objects appropriately with their hands, a deficit called optic ataxia. The temporal lobe is used to identify objects and is known as the 'what' pathway. People with damage to the temporal lobe can reach for and pick up objects, but they cannot identify them by sight. This is called psychic blindness (in animals) or visual agnosia (in humans).

The problem of visual unity addresses the question of how we perceive the visual field as a complete picture when the brain works with various fragments (e.g. colour, form, motion) in multiple lobes at the same time (parallel processing). Neuroscientists have started to view the behaviour of single cells in a more dynamic way: neurons seem to be more dynamic in their responses than was originally thought.

How can colour and brightness be perceived? - Chapter 4

Vision depends on light. Light is electromagnetic energy, and it can vary along three dimensions: intensity, wavelength and duration. These three dimensions influence the perception of colour and of brightness or lightness: colour depends more on wavelength, brightness more on intensity. Light is measured in photometric units, which quantify this energy. Light can reach the eye directly from a light source like a lamp, or indirectly by reflection from things that have radiant energy falling on them. Each aspect of light has its own name and its own measurement unit.

  • Radiance is the amount of energy coming from a light source. The unit of this light energy is the lumen.

  • Illuminance is the amount of light falling on a surface, like a screen. The unit of illuminance is lux.

  • Luminance is the amount of light reflected from a surface. This is measured in Candelas per square meter.

  • The percentage of light falling on a surface that is reflected is called the reflectance.

  • The retinal illuminance is the amount of light reaching the retina. This is measured in Trolands.

Impressions of light intensity can be described by brightness and lightness. Lightness is what an observer judges when choosing between white, grey and black; it correlates with the physical measure of reflectance. Brightness corresponds roughly to the physical measures of illuminance and luminance. The perception of brightness depends on the state of sensitivity of a person's eyes, and this sensitivity depends on a number of things. One of these is adaptation to the prevailing illumination. When you walk from a dark room into the sunlight, everything seems very bright and your eyes need a moment before they can distinguish among objects; this is called light adaptation. When you walk from a lit room or the bright outside into a dark room, everything seems very dark and objects are hard to distinguish; this is dark adaptation. Adaptation to darkness takes longer than adaptation to brightness. Researchers can use such situations to measure the absolute threshold for the detection of light: they can put participants in a bright room and afterwards in a dark room, and after a couple of minutes the sensitivity of the eyes increases and the absolute threshold drops. These two kinds of adaptation are associated with the two types of photoreceptors in the retina, cones and rods. Animals that are active during the day have more cones, and animals that are active during the night have more rods. Cones serve daylight and colour vision (also called photopic vision) and rods serve twilight vision (also called scotopic vision). Human beings have both cones and rods. The central fovea contains only cones and is therefore not very sensitive under dim light. The periphery of the retina contains mostly rods, which are much more sensitive to light; for this reason, a dim target is perceived more clearly if its image falls on the periphery of the retina.

Our perception of brightness is also affected by the wavelength of the light. Yellow light, which has a medium wavelength, appears brighter than blue light, which has a short wavelength. A curve that plots sensitivity against wavelength is called the luminosity curve. The peak sensitivity of this curve lies at wavelengths around 555 nm (yellow-green), and sensitivity is lower for short wavelengths (blue) and long ones (red). The change in the relative brightness of lights of different wavelengths when the overall intensity changes is called the Purkinje shift. When you look at red flowers in your garden during daylight, they look brighter than blue and green flowers; but when the sun goes down, the green and blue flowers turn grey while the red flowers turn black, and now the blue and green flowers seem brighter. Rods are not sensitive to red light in a dark setting: while the rods are adapting to the dark, the cones continue to function for long wavelengths, and this is a reason why red light can be used well in the dark.

Our perception of brightness also depends on time and area. Somebody taking a picture under dim illumination has to lengthen the exposure time to gather enough light for a good picture, whereas a short exposure is enough in sunlight; this trade-off is the Bunsen-Roscoe law. Applied to the absolute threshold, it means that a weak stimulus must be presented for a long time to be detected, while an intense stimulus can be presented for a short time; this is known as Bloch's law. The exact trade-off does, however, depend on the wavelength of the stimulus. Also, a big stimulus will activate more photoreceptors than a small one, even if the stimulus intensity does not change. Ricco's law states that if the area covered by a stimulus increases, the intensity can be decreased and the stimulus will still be detected.
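
Both laws are reciprocity relations: at threshold, intensity × time (Bloch) and intensity × area (Ricco) are approximately constant. A sketch with made-up numbers:

```python
def bloch_required_duration(threshold_energy, intensity):
    """Bloch's law (I * t = constant): a weaker stimulus must be
    presented longer to reach the same threshold energy."""
    return threshold_energy / intensity

def ricco_required_intensity(threshold_constant, area):
    """Ricco's law (I * A = constant): a larger stimulus area allows
    a lower intensity while remaining detectable."""
    return threshold_constant / area

# Halving the intensity doubles the required duration (all numbers invented)
print(bloch_required_duration(100.0, 50.0))  # 2.0
print(bloch_required_duration(100.0, 25.0))  # 4.0
# Quadrupling the area cuts the required intensity to a quarter
print(ricco_required_intensity(100.0, 4.0))  # 25.0
```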

Spatial context

Our perception of the brightness of a target often depends more on the luminance of adjacent objects, the context, than on the luminance of the target itself; this is called brightness contrast. When you have two grey squares of the same shade, one surrounded by a black square and the other surrounded by a dark grey square, the grey square surrounded by black will look brighter than the one surrounded by dark grey. Objects look brighter printed on dark backgrounds than on light backgrounds; this is called simultaneous brightness contrast. It does not follow, however, that our perception of brightness simply increases when the amount of light increases. There must be a form of inhibition in which an actively stimulated part of the retina suppresses nearby retinal activity. When a visual neuron is inhibited by the activity of nearby neurons, this is called lateral inhibition. The amount of inhibition depends on how close the neurons are to each other and on how strongly they respond: a neuron is inhibited more when the other neuron is closer to it and when the other neuron is stimulated more strongly. This explains the example of the squares: the lighter the surround, the more strongly it inhibits the neurons responding to the grey square, so the square looks less bright, whereas a black surround produces little inhibition and the square looks brighter. Mach bands are seen where a uniform dark area and a uniform light area are joined by an intermediate zone that gradually changes from dark to light: an extra-dark band appears on the dark side of the transition and an extra-bright band on the light side, because the amount of inhibition received from neighbouring regions differs at the borders.
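
A toy demonstration of how lateral inhibition can produce the Mach-band over- and undershoots at a luminance ramp; each unit's response is its own input minus a fraction of its neighbours' inputs (the luminance profile and inhibition weight are invented):

```python
# Luminance profile: uniform dark area, gradual ramp, uniform light area
luminance = [10, 10, 10, 10, 30, 50, 70, 90, 90, 90, 90]

def lateral_inhibition(inputs, inhibition=0.2):
    """Each unit's output = its input minus a fraction of its neighbours'."""
    responses = []
    for i, centre in enumerate(inputs):
        left = inputs[max(i - 1, 0)]
        right = inputs[min(i + 1, len(inputs) - 1)]
        responses.append(centre - inhibition * (left + right))
    return responses

# Note the dip just before the ramp and the peak just after it:
# these correspond to the dark and bright Mach bands.
print(lateral_inhibition(luminance))
```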

Brightness anchoring means that the highest luminance in a scene is used as a standard against which all other luminances are perceived. When the highest luminance increases, the standard against which the other luminances are judged changes, and the other luminances seem darker. The opposite of lateral inhibition is brightness assimilation: an adjacent white region lightens a colour rather than darkening it. This only happens when the lights fall within a certain receptive field of the eyes; more research on it is still needed.

Colours

Most animals could not survive without colour vision, which gives colour a functional, evolutionary purpose; animals probably developed colour vision to detect certain foods. The human eye registers wavelengths between about 360 and 760 nm. Newton discovered that light can be bent and that the amount of bending depends on wavelength. If you put all visible wavelengths together, you get white light. When we see an object, we see the wavelengths that object reflects: objects reflect certain wavelengths, which reach your eye, and absorb others, which you therefore do not see. When the surface of an object does not selectively absorb any visible wavelengths but reflects them all uniformly, the object looks grey or white. The colour wheel or colour circle is a circle of colours ordered and arranged according to their hue, the variation in wavelength known to lay persons as colour. Monochromatic stimuli contain only one wavelength and are therefore pure; they are called spectral colours. Monochromatic stimuli do not include every hue on the colour wheel: there is no single wavelength that produces the sensation of purple, which is made out of blue and red wavelengths. A colour becomes less saturated when white light, or light of other wavelengths, is added to the monochromatic colour. The third sensory quality, brightness, also has to be considered when describing colours. This third property demands a solid (three-dimensional) representation of colour instead of a flat one: the colour spindle, or colour solid, is such a representation of hue, brightness and saturation.

In the 1850s, Hermann von Helmholtz and James Clerk Maxwell discovered that two colours can appear identical even though they are made up of different wavelengths of light; such colours are called metameric colours. Primaries are three monochromatic wavelengths that can be mixed to match any other colour; red, blue and green are three wavelengths that allow such matches. Combining lights of different wavelengths to produce new colours is called additive colour mixing; mixing paints to produce new colours is called subtractive colour mixing. When you see a red object, the surface of that object absorbs short and medium wavelengths and reflects only long wavelengths (red) to your eyes. To produce the full range of colours from only three pigments you have to begin with primaries that are not pure in wavelength. This is why the brightest colours of the subtractive colour mixing diagram are found at the periphery and the darkest colour in the centre.

Scientists have created many colour diagrams from which you can read the wavelength of a colour and which colour mixtures become: there is a colour circle, a colour solid, a colour triangle and colour mixture systems. When you mix colours that lie exactly opposite each other (like yellow and purple), you get an achromatic grey; such pairs are known as complementary colours. Scientists have even found ways to specify the colour of a stimulus exactly: they created a triangle in which every colour has its own coordinates. This is called the CIE chromaticity space; in this triangle y represents the proportion of green in the mixture, x the proportion of red, and z the brightness of the stimulus. The CIE colour system thus allows us to specify any colour stimulus by its tristimulus values: the x- and y-coordinates for the hue of the stimulus and a z-coordinate for its brightness. Another kind of colour space that is becoming increasingly important in everyday life is the one used in television and computer screens. In an RGB display, the screen is divided into columns and rows, and at the intersection of each column and row lies a pixel (picture element). Each pixel is illuminated by light from three different electron guns (red, green and blue). A very large number of colours can be produced at each pixel location by varying the relative contribution of each of the three wavelengths: if all are turned off the pixel appears black, whereas if all are fully turned on the pixel appears white.
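
On such a screen, additive mixing amounts to summing the light contributed per channel. A minimal sketch using the conventional 0-255 range per channel:

```python
def additive_mix(*lights):
    """Additive colour mixing: each light adds its R, G, B energy;
    channels saturate at 255, as on a typical display."""
    return tuple(min(sum(light[ch] for light in lights), 255) for ch in range(3))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(additive_mix(red, green))        # (255, 255, 0) -> yellow
print(additive_mix(red, green, blue))  # (255, 255, 255) -> white
```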

Trichromatic colour theory

For centuries, people have been fascinated by how we discriminate colours. Helmholtz showed that normal observers need a mixture of only three primaries to match any colour stimulus, and he thought that there were accordingly three different types of cones in the retina: one for long wavelengths (the L cone), one for medium wavelengths (the M cone) and one for short wavelengths (the S cone). This is the trichromatic theory. Support for the theory is found in studies of people with colour vision defects, who cannot see every colour that people without the defect can see. There are five different varieties of colour abnormality. People who have no functional cones whatsoever are completely colour blind: they have no colour-discriminating ability at all, find daylight uncomfortable, but have normal night vision. Then there are people with only one functioning cone type, called monochromats; they also have no colour-discriminating ability and see colours only as gradations in intensity. There are also individuals who lack one type of cone; these people have some colour perception and are called dichromats. When the L cones are not functioning, the person is insensitive to long wavelengths; this type of colour defect is called protanopia. Individuals with deuteranopia are insensitive to medium wavelengths and cannot distinguish green from certain combinations of red and blue. Tritanopia is a malfunction of the S cones, for short wavelengths; this type of colour defect is rare.

Cones are distributed unevenly across the retina, so the colour response differs over different portions of the eye. There are no S cones in the central fovea, so people cannot see small blue things in central vision. As distance from the fovea increases, sensitivity to green light diminishes. The same is true for red light: the farther an object's image falls from the fovea, the lower the sensitivity to red light. More men than women have colour defects, because the genes for the L and M cones both lie on the X chromosome. If a woman has a defective X chromosome and passes it on to her son, he will certainly be colour defective, because he possesses only the one X chromosome he received from his mother.

Opponent-process theory

Not all scientists agree with the trichromatic theory of colour. When people from all over the world are asked to pick out the pure colours from a sample of many colours, they usually pick not three colours but four: red, blue, green and yellow. With these four colours people can describe all the colours of the world. Researchers have found that 4-month-old infants already seem to see the spectrum as if it were divided into four colour categories. Hering, who identified yellow as the fourth primary and founded the opponent-process theory, believed that the four primaries are arranged in opposing pairs: one opponent process can signal either red or green, and the other either blue or yellow. There is also a third opponent process concerned with brightness perception, the black-white opponent process. Support for this theory comes from studies of single cells. There are cells organised similarly to the centre-surround receptive fields described earlier; the difference for colour-opponent cells is that a stimulus of the right colour must fall on the centre or the surround of the receptive field, and must match it in size, rather than merely falling adjacent to the field or moving across it. For instance, there are L-versus-M (red-green) colour-opponent ganglion cells and blue-versus-L+M (blue-yellow) opponent ganglion cells. The opponent-colour neurons seem to be concentrated in the four parvocellular layers and the koniocellular layers, while the magnocellular channel appears to be specialised in carrying brightness information. Subjective colours (colours perceived in the absence of their appropriate wavelengths) remain a puzzle that researchers have not yet solved.

Perception of colour may be affected by the intensity of the stimulus. Rods become active when intensity levels are low, and then no colour is visible. If the intensity of a red or yellow-green stimulus is increased, these colours look brighter and also take on a more yellow hue; when the intensity of a violet or blue-green object is increased, these objects appear bluer. This is called the Bezold-Brucke effect. When you look at a red object for a long time and then look at a white wall, you will see a green object on the wall, because the red-responsive cells have become fatigued. These phenomena are called chromatic adaptation and afterimages. Colours can also be inhibited by surrounding colours, which is called simultaneous colour contrast. When you have three grey squares of exactly the same grey, each presented on a patch of a different colour, the squares will not seem the same colour: the square on the blue patch will appear yellowish, the square on the red patch greenish and the square on the yellow patch bluish.

Colour and memory interact in certain ways. We remember the colours of familiar objects as brighter than they actually were: tomatoes as redder and bananas as yellower than they were in fact. We also remember coloured scenes better than black-and-white scenes, but if a scene is oddly coloured, we will not remember it accurately. Colours can also affect our mood. Blue is experienced as cool and red as hot; research shows that people will set the heater higher in a blue room than in a yellow room, even when the temperature in both rooms is the same. People who are depressed perceive the colours of things as darker than they actually are.

How does the auditory nervous system work? - Chapter 5

What is sound?

Physicists see sound as a series of pressure changes in air, water or another medium. When music or another sound is very intense, you can feel these pulsations being transmitted through the air: something in your ear vibrates, and sometimes even the rest of your body does. You can see the same thing happening when you pluck a guitar string. The string vibrates and collides with the air molecules around it, these molecules collide with other air molecules, and the resulting compression is called a wave. Such waves can be picked up by a microphone and sent to speakers. Because air molecules keep colliding with the molecules next to them, the pressure wave moves through space, and in this way sound waves can be transmitted over distances. Each collision loses a little energy, so the pressure becomes less intense as the sound wave moves away from the source. Sound can't pass through a vacuum, because there is no medium there through which it can travel.

A pure tone is the simplest sound wave, also called a sine wave. The wavelength, represented by lambda (λ), is the distance from one peak of the wave to the next; one wavelength is also called a cycle. The number of cycles completed during one second is called the frequency and is expressed in Hertz (Hz); one Hertz is one cycle per second. Frequency has a big impact on the pitch of a sound: pitch refers to the psychological experience of perceiving notes as high or low. Another important physical property of a sound wave is its pressure amplitude, the change in pressure produced by the sound wave, which relates most closely to loudness. Pressure amplitude is usually measured on a logarithmic scale, with sound pressure levels expressed in decibels (dB). The formula for sound pressure level is dB = 20·log10(P/P0), where P is the sound pressure amplitude and P0 a reference pressure.
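
Because the decibel scale is logarithmic, each tenfold increase in pressure amplitude adds 20 dB. A minimal sketch of the formula above in Python; the reference pressure of 20 micropascals is the conventional value for sound in air, assumed here rather than taken from the text:

```python
import math

def sound_pressure_level(p, p0=20e-6):
    """Sound pressure level in dB: 20 * log10(P / P0).

    p0 = 20 micropascals is the conventional reference pressure for
    sound in air (an assumed value; the text does not specify P0).
    """
    return 20 * math.log10(p / p0)

print(sound_pressure_level(10 * 20e-6))  # tenfold the reference -> 20.0 dB
print(sound_pressure_level(2 * 20e-6))   # doubling the pressure -> ~6.0 dB
```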

Another important aspect of sound waves is the phase angle: the point in its cycle that a wave has reached, specified in degrees from 0˚ to 360˚. If the peaks and valleys of two pure tones coincide, they are in phase; if they do not coincide, they are out of phase, and the relative phase is the size of the difference. Tones that are completely out of phase and have the same pressure cancel each other out, which is used in active noise suppression to eliminate unwanted sounds. Complex sounds differ in timbre: different musical instruments produce different waveforms even when they play the same note. Complex sounds can be described by analyzing them into sets of simpler waves, a method invented by Fourier. Fourier found that any continuous, periodic waveform can be represented as the sum of a set of simple sine waves (Fourier components) if the wavelengths, phases and amplitudes are chosen appropriately. It is assumed that the ear performs a mechanical Fourier analysis of complex sounds, making it possible to hear the various simple sounds that went into the complex sound; this is known as Ohm's acoustical law.
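
The cancellation behind active noise suppression follows directly from the phase idea: a tone shifted by 180 degrees is the exact negative of the original, so the two sum to silence. A small illustrative sketch; the 440-Hz frequency and the sample times are arbitrary choices, not values from the text:

```python
import math

freq = 440.0  # an arbitrary pure tone, in Hz
for t in [0.0, 0.001, 0.002, 0.003]:
    tone = math.sin(2 * math.pi * freq * t)
    # The same tone shifted by 180 degrees (pi radians) is its negative:
    anti = math.sin(2 * math.pi * freq * t + math.pi)
    print(round(tone + anti, 12))  # 0.0 (within floating-point error)
```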

The ear

Ears seem to have evolved from the sense of touch: they contain hairs that respond to mechanical stimulation. The ear can be divided into three parts: outer ear, middle ear and inner ear. Starting with the outer ear, the pinna is the visible fleshy part outside the head; it is what most people mean when they say 'ear'. Sound waves move along the ear canal to the eardrum. The ear canal increases the amplitude of certain sound frequencies, and the eardrum vibrates in phase with the incoming sound waves, moving faster for high-frequency sounds and slower for low-frequency sounds. The vibrations of the eardrum enter the middle ear, where they pass along three tiny bones (ossicles): the malleus (hammer), the incus (anvil) and the stapes (stirrup). The middle ear increases the pressure applied to the oval window. It can also decrease the pressure at the oval window, which protects the ear from high sound pressure levels that could damage it. The middle ear is filled with air, and its pressure is kept equal to that of the surrounding atmosphere by means of the Eustachian tube; if there were a pressure difference, the eardrum would stiffen. When we have a cold, the Eustachian tubes are blocked and the pressure in the middle ear can't be equalized with the outside air, which may cause pain or temporary hearing loss. The Eustachian tube can also be a route by which bacteria travel to the middle ear, causing an infection in which fluid builds up, known as otitis media. From the middle ear the vibrations are transmitted to the inner ear.

The vibrations of the stapes are passed to the inner ear via the oval window, the boundary between the middle ear and the inner ear. The oval window connects with one of the three tubes in the cochlea, the vestibular canal. This canal ends at the apex, where it connects to the second tube of the cochlea, the tympanic canal; the short bent section of corridor connecting the two canals at the apex is known as the helicotrema. The tympanic canal has its own membrane-covered opening at its base, the round window, which separates it from the airspace of the middle ear. These canals are filled with a watery, salty fluid called perilymph. Because this fluid is incompressible, inward movements of the stapes at the oval window make the round window bulge out, and vice versa. The third tube is the cochlear duct. It is bounded by two main membranes, the basilar membrane and Reissner's membrane; the organ of Corti rests on the basilar membrane, with the tectorial membrane lying over its hair cells. The organ of Corti contains the hair cells that convert mechanical action in the cochlea into the neural signals that are sent to the brain.

When the oval window vibrates, it sets the cochlear fluid in motion, and this motion travels down the canals as a wave. The elasticity and width of the basilar membrane direct these travelling waves and determine their speed. In the ear, mechanical vibrations are converted into electrochemical fluctuations; this is called transduction. When the hair cells bend, thin filaments called tip links pull ion channels open, and positively charged potassium ions flow into the cells. This results in a depolarization, and neurotransmitters are released, in turn stimulating the dendrites of the ganglion cells whose axons run in the auditory nerve toward the brain.

The hair cells are connected to over 30,000 nerve fibres in the cochlea and the spiral ganglion. There are two types of fibres: 95% are so-called Type 1 fibres, which make connections with inner hair cells; the remaining 5%, known as Type 2 fibres, connect to outer hair cells. The two types differ in axon size and in function. Type 1 fibres have large-diameter axons covered with a myelin sheath, which allows them to conduct neural impulses faster than the small-diameter, unmyelinated Type 2 fibres. Type 1 fibres transfer the actual information about sounds, whereas Type 2 fibres seem to be involved in a feedback loop that modifies the responsiveness of the inner hair cells to sounds of particular frequencies.

The hair cells not only send sensory (afferent) information to the central nervous system through the spiral ganglion, but also receive outgoing (efferent) signals from nuclei in the superior olive. Input from the superior olive on the opposite (contralateral) side of the brain is referred to as the crossed pathway. This pathway controls the protective processes for high-level binaural sounds (sounds reaching both ears) and is relatively unaffected by background noise. Input from the superior olive on the same (ipsilateral) side of the head is referred to as the uncrossed pathway; it is activated only by high-level monaural sounds (sound reaching one ear) in a noisy background. When intense sounds occur against a noisy background, both pathways become activated to maximize the protection of the auditory system.

The passive process consists of variations in the elasticity and width of the basilar membrane. At the base of the cochlea the basilar membrane is stiffer and narrower; toward the apex it is less stiff and wider. These properties determine the direction and speed of the travelling wave: the higher the frequency, the nearer to the oval window the wave reaches its maximum. This mechanical analysis of sound by the basilar membrane is the basis for Ohm's acoustical law. The active process provides further tuning, and thus amplification, by modifying the mechanical vibrations before they reach the inner hair cells.

To study the neural processing of auditory information, scientists insert electrodes into neurons in the auditory pathway of non-human animals and record their electrical activity in response to different sound stimuli.

Auditory Neurons

There are different types of auditory neurons. A specific pressure level is required to make a neuron fire; this is the neural equivalent of an absolute threshold, and the required pressure level varies with frequency. Tuned neurons, so called by analogy with tuning a radio until a particular broadcast comes in clearly, respond only when a sound of a particular frequency reaches a sufficient pressure level. Two-tone suppression refers to the drop in neural activation caused by the presence of a second tone of a different (but still moderately similar) frequency. This phenomenon disappears when the outer hair cells are damaged, for instance by drugs, which leads to the conclusion that two-tone suppression results from mechanical actions of the outer hair cells stimulated by the second tone. Neural adaptation means that the onset of a tone produces vigorous firing, but if the tone continues the firing rate drops progressively.

Even though there may be many neurons tuned to the same frequency, the response thresholds for these neurons can vary over a range of 20 dB.

Other neurons exist besides tuned neurons. Neurons in the central nuclei of the auditory pathway register specific aspects of sound stimuli; several such types have been identified in adult cats. Onset neurons respond after the onset of a tone and then cease responding. Pauser neurons also give a burst of responses after the onset of a tone, but this is followed by a pause and then a weaker sustained response until the tone turns off. Chopper neurons give bursts followed by short pauses. Primary-like neurons give an initial vigorous burst if the tone is audible, after which the firing rate diminishes to a level that is sustained until the tone stops. Offset neurons reduce their response rate below their spontaneous activity level at the onset of the tone and then give a burst of activity at its offset.

The basilar membrane is said to respond tonotopically, because different points along the membrane vibrate most strongly at different frequencies.

About 60% of neurons respond to pure tones; others respond to the offset of a sound (off response), to the onset of a sound (on response), or to both (on-off responses). These responses are either inhibitory, meaning that firing decreases relative to the resting level, or excitatory, meaning that firing increases over the resting level. In this respect they resemble the neurons of the visual system (chapters 3 and 4). In the primary auditory area, A1, there are more ordinary tuned neurons, while in the other areas more neurons respond to complex sounds. Some of these neurons respond only to a limited range of sound amplitudes, and some only to moving sound sources.

Other animals have even more specific neurons. Cats have frequency sweep detectors: neurons that respond to sounds whose frequency changes in a particular direction and range. Some respond only to frequency increases, others only to decreases, and there are even more specific neurons that respond only to frequency increases within low-frequency sounds.

So there are many auditory neurons with different functions, and the auditory cortex can change with experience. If people are trained to discriminate between nearby frequencies of pure tones, their auditory cortex comes to respond more strongly to the trained frequencies. There seem to be many important similarities between the auditory and the visual system.

How can sound be perceived? - Chapter 6

Detecting that a sound is present is the simplest auditory experience. It depends on frequency, pressure and duration, all physical properties of the stimulus. There have been many experiments on the absolute threshold for sound, and these thresholds vary with sound frequency. Minimum audible fields (sounds presented in an open field) are lower than minimum audible pressures (sounds presented through earphones). When a noise is very intense, we experience pain; the difference between the threshold for hearing and the pain threshold is the dynamic range of the ear for a given frequency. Children hear better than adults, because humans lose sensitivity as they age, especially for higher frequencies. Frequency is not the only factor that determines whether we detect a sound. Another factor is its duration: when a sound lasts longer, the ear is stimulated sufficiently for us to hear it. According to Hughes's law, high pressure over a short time interval and low pressure over a long time interval can both deliver enough energy to hear a sound. Another factor is whether you hear with one ear (monaural) or two ears (binaural): the threshold for binaural hearing is about half that for monaural hearing, and this holds even when the sounds are not presented to the two ears simultaneously.

We often experience auditory masking: a sound we want to hear is obscured by sound we don't want to hear (noise). When the two sounds are presented at the same time, this is called simultaneous masking. Not every sound is fully masked by a masking sound: if the level of the target tone is high enough, the masking tone will not completely mask it. The greatest masking is found for tones with frequencies similar to those of the masking tone, while tones of a lower frequency than the masker are relatively unaffected. Sounds can also be masked when they are not presented at the same time as the masker. Forward masking occurs when the masker is presented before the target; backward masking is the opposite, with the masker presented after the target. Forward masking is weaker when the time interval between mask and target is larger. Central masking occurs when target and mask are presented simultaneously to different ears.

Sound discrimination and localization

Human beings discriminate sound pressure differences best for middle-range frequencies; it is more difficult for higher or lower frequencies. A difference is also heard better when the sound is presented to both ears rather than one, and when the sound lasts longer. The Weber fraction, the smallest detectable difference expressed as a proportion of the starting level, varies across frequencies and sound pressure levels. Sounds can come from everywhere: behind you, above you, in front of you, to your left or right, from far away or nearby. Sometimes one ear receives a sound along a direct path while the other ear receives only the sound that bends around the head; that ear lies in the sound shadow. There will then be a sound level difference between the ears, and a large difference indicates that the sound source is off to one side.

Sound takes time to travel through space, so there is a time difference in the arrival of a sound at the two ears. Rayleigh suggested that we localize low-frequency sounds by using the time differences at the two ears caused by their different distances from the source, and that high-frequency sounds are localized by using the sound intensity differences at the two ears caused by the sound shadow.
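
To get a feel for the size of these time differences, a simple spherical-head approximation (often attributed to Woodworth) can be used. This is a standard textbook model rather than a formula from this summary, and the head radius and speed of sound below are assumed typical values:

```python
import math

def interaural_time_difference(angle_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source
    at angle_deg from straight ahead, using the spherical-head
    approximation ITD = (r / c) * (theta + sin(theta)).
    head_radius (m) and c (speed of sound, m/s) are assumed values.
    """
    theta = math.radians(angle_deg)
    return (head_radius / c) * (theta + math.sin(theta))

print(interaural_time_difference(0))   # straight ahead: 0.0 s
print(interaural_time_difference(90))  # directly to one side: ~0.00066 s
```

Even at its maximum the delay is well under a millisecond, which gives a sense of how finely the auditory system must resolve timing.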

When we are in a room, the sound from a source bounces around: it reflects off the walls and floor many times. Yet we do not experience overwhelming auditory confusion, because echoes arriving several milliseconds later are treated as part of the original sound. The first-arriving sound is the most important in determining the perceived direction of the source, though this does not hold when a high-frequency sound arrives first, followed by a low-frequency sound. The fusion of a sound with its echoes and the localization toward the earliest-arriving sound is called the precedence effect; the echoes must arrive within about 35 ms to be perceived as one sound. Echoes may also help people locate the source of a sound, and reverberation is a cue to the distance of a sound source from an observer. Another distance cue is that high frequencies appear nearer, whereas low frequencies are perceived as farther away, even when the different frequencies come from the same source; higher-frequency components are more easily blocked by obstructions than low frequencies. Furthermore, nearer sounds are perceived as higher in intensity than the same sound from a greater distance.

People turn their heads in the direction of a sound source in order to perceive the sound better. The illusion that a sound comes from a visual object can be so compelling that the person perceives the sound as louder than it actually is. Loudness constancy refers to the fact that a sound's loudness is perceived as roughly constant even though the sound pressure reaching the ears diminishes as the distance to the source increases. The head-related transfer function is a description of exactly how each frequency in a sound is amplified or damped by the body parts near the ears; this function differs from individual to individual.

Dimensions of sounds

People used to believe that the subjective experience of sound corresponded directly with its physical properties: loudness was thought to be a reflection of sound pressure and pitch a reflection of sound frequency. It turned out, however, that pitch and loudness are more complex and depend on the interaction between the physical characteristics of the stimulus and the physical and psychological state of the listener. There are also other experiences associated with sound stimuli: the perceived location of the sound (where it seems to come from), the perceived duration, the timbre (the characteristic that allows us to distinguish a note played on a clarinet from the same note played on an oboe), the volume, the density (the hardness of the sound) and the consonance or dissonance (whether sounds go together or not). Pitch and loudness seem to be the most important, because people can classify sounds faster on the basis of these two dimensions than on the others mentioned.

Loudness has a lot to do with sound pressure, but decibels of sound pressure are not themselves measures of loudness. The magnitude of loudness is described by a power function, L = a·P^0.6, where a is a constant. Stevens devised his own scale for measuring loudness: a standard 1000-Hz tone at a level of 40 dB has a loudness of 1 sone, and to go from 1 sone to 2 sones the level of the sound has to increase by 10 dB. For very weak sounds, below 30 dB, doubling the loudness requires smaller increases in level. The loudness of a tone is also affected by the frequency, duration and complexity of the sound. For example, a sound with a low pressure level has to last longer than a sound with a high pressure level in order to be detected. An equal-loudness contour shows the sound pressure levels at which tones of different frequencies sound equally loud as a standard tone. Spectral loudness summation is the summation of loudness across critical frequency bands; this summation is smaller when the tones being added last longer. Temporal loudness summation refers to the summation of loudness over time, and binaural loudness summation to the summation of loudness across the two ears. A complex sound includes all frequencies in some range (bandwidth) around a particular frequency. Increasing the bandwidth by itself produces no difference in loudness, but the number of added frequencies does change the perception of loudness: the more frequencies, the louder the tone is perceived to be.
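
Stevens's rule can be written as a small conversion: since every 10-dB step above the 40-dB standard doubles the loudness, a 1000-Hz tone at L dB has a loudness of roughly 2^((L-40)/10) sones. A minimal sketch of that rule, which, as the text notes, breaks down below about 30 dB:

```python
def sones_from_db(level_db):
    """Loudness in sones of a 1000-Hz tone, per Stevens's rule in the
    text: 40 dB = 1 sone, and each 10-dB increase doubles loudness.
    A rough rule only; it does not hold for very weak sounds (< ~30 dB).
    """
    return 2 ** ((level_db - 40) / 10)

print(sones_from_db(40))  # 1.0 sone: the standard tone
print(sones_from_db(50))  # 2.0 sones: 10 dB more sounds twice as loud
print(sones_from_db(70))  # 8.0 sones
```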

Pitch differs across contexts. Musical pitch is pitch in a musical context, whereas acoustical pitch refers to the pitch of isolated sounds in a non-musical context. The most important determinant of acoustical pitch is the frequency of the sound stimulus, and scientists usually use the mel scale to measure it. Another important determinant of pitch is sound pressure. Equal pitch contours are obtained by adjusting the pressure level of one of two sounds until the two match in pitch. In a complex sound made up of several pure tones, the lowest-frequency component is called the fundamental, and the frequencies higher than the fundamental are called the harmonics. The fundamental frequency is the greatest common denominator of all harmonics present in the complex sound: 200 Hz is the fundamental of 400 Hz, 600 Hz and 800 Hz. If you listen to a complex sound from which the fundamental is absent, its pitch will still sound like that of a sound containing both fundamental and harmonics; this is called the missing fundamental illusion, and it plays a role in the two big theories of pitch. The place principle suggests that some parts of the basilar membrane vibrate in sympathy with low-frequency tones while other parts vibrate in sympathy with high-frequency tones, so that pitches are encoded as different places of vibration along the basilar membrane. This theory has had a lot of support, but one critical point is that it can't explain the missing fundamental illusion. According to the other theory of pitch, the frequency principle, pitch is determined by the overall frequency of firing in the auditory nerve. This theory can help explain the missing fundamental illusion, because the combined firing pattern produced by the harmonics repeats at the rate of the fundamental, so its pitch is heard even when that frequency is physically absent. The volley principle explains how neurons can follow high frequencies by firing in groups or squads: while one neuron is 'reloading', its neighbours may be firing.
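
The text's rule that the fundamental is the greatest common denominator of the harmonics can be checked directly; a minimal sketch using the example frequencies above:

```python
from functools import reduce
from math import gcd

def fundamental(harmonics_hz):
    """Fundamental frequency as the greatest common divisor of the
    harmonics present in a complex sound (the rule given in the text)."""
    return reduce(gcd, harmonics_hz)

# Even when no 200-Hz component is physically present, the heard
# pitch corresponds to 200 Hz -- the missing fundamental illusion:
print(fundamental([400, 600, 800]))  # 200
```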

In everyday life we hear a great many sounds. How can our brain possibly deal with them all? What we experience are auditory scenes: all the sound-producing events around us, each varying in frequency, duration, timing and location. All these events together produce the acoustic energy that reaches the ear, and from this mixture humans must build separate mental representations of the individual events. To do this, people need to work out which sounds belong to which source; this is called auditory scene analysis. The resulting mental representations must then be integrated into a single coherent scene. The first mechanism of auditory scene analysis is auditory grouping: the sound reaching the ear is divided into separate groups by shared frequency, and sounds with similar patterns are grouped into separate auditory streams. Auditory grouping is a fast and involuntary process. The auditory scene can also be interpreted with the help of another mechanism, auditory schemas, which contain knowledge about regular patterns of sounds and their meanings. Sound that is selected by attention is processed better. Every person has different schemas, because everybody has different experiences.

Music

Music is a series of sounds that stand in a certain structural relationship to one another. Musical pitch is determined by tone height and by the musical note's position within the octave. The octave is an important aspect of music and concerns frequency: when two sounds are separated by an octave, the fundamental frequency of the higher one is exactly twice that of the lower. Musical notes that occupy the same relative position in an octave seem more similar to each other than notes at different relative positions.
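
The doubling rule makes octave relations easy to check numerically. A minimal sketch; the 220-Hz and 440-Hz values correspond to the notes A3 and A4 in standard concert tuning, an assumption not made in the text:

```python
def is_octave(f_low, f_high, tol=1e-9):
    """Two tones are an octave apart when the fundamental frequency of
    the higher is exactly twice that of the lower (as the text states)."""
    return abs(f_high - 2 * f_low) < tol

print(is_octave(220.0, 440.0))  # True: A3 and A4 lie an octave apart
print(is_octave(220.0, 660.0))  # False: not an octave
```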

It is striking that notes close in frequency can sound less similar than notes farther apart in frequency. Some people are good at identifying musical notes; these people have perfect pitch. Musicians are better at identifying pitches than non-musicians, but there are instances in which non-musicians show a good memory for pitch: mothers can sing the same song to their children at the identical pitch and tempo as they did the week before. Identifying pure tones is difficult, even for musicians. When they hear a note being played on an instrument, they can recognize it, but only because they use the harmonics to help in identification. The left-hemisphere auditory association cortex of musicians with perfect pitch is larger than that of musicians without perfect pitch and that of non-musicians.

Another important element of music is melody: the sequence of pitch changes in a series of notes. Musical notes are grouped together according to the law of proximity: notes that are close in musical pitch are grouped together, whereas notes that are far apart are grouped into different forms. When different instruments play at the same time, the sounds with similar timbres are grouped together, following the law of similarity. The law of good continuation groups together sequences of pitch changes that continue in the same direction. Other important elements of music are rhythm and tempo: tempo is the perceived speed and rhythm the perceived organization in time. The rhythmic structure helps people sing a song from memory at the same tempo at which the song was recorded.

Speech

Phonetics concerns how speech sounds are produced; phonemics concerns how specific sounds distinguish words in a language. The gestures of the speaker and the way somebody says something usually reveal the speaker's true intentions. There are two basic types of speech sounds, vowels and consonants, and letters can be grouped by the manner in which their sounds are produced. The vocal tract is the air passage through our throats, mouths and nasal areas. Closing movements of the articulators (the parts of the vocal tract used to shape speech sounds, e.g. teeth, tongue or lips) produce consonants, and opening movements produce vowels. Stops, like the letter p, are produced by stopping the airflow and then releasing it. Fricatives, like the f, are made by stopping the flow through the nose and leaving a small opening for air to flow through the mouth. Nasals, like the m, are produced by closing the mouth so that the air flows through the nose. We hear speech in segments that we interpret as words separated by pauses, yet natural speech occurs in a continuous stream without any breaks. Our brain has two special areas for speech processing: Wernicke's area, in the left temporal lobe, is responsible for speech comprehension, and Broca's area, in the left frontal lobe, is concerned with speech production; damage to these areas disrupts speech comprehension or production respectively. The McGurk effect is an illusion in which a listener who watches another person speak bases the judgment of the spoken word on the movements of the speaker's lips. Someone can be saying 'da', but if the mouth movements suggest 'ba', we will probably hear 'ba'. Note, though, that we do not need visual cues to understand speech: when we listen to the radio we do not see the people talking, yet we understand everything. Indeed, looking at somebody's lip movements can even make speech harder to understand.

One reason is the existence of homophones and homophenes. Homophones are words that sound alike when spoken but have different meanings. Homophenes are words that involve the same mouth movements but sound different: lip readers have trouble telling 'married' from 'buried', because these words use nearly identical movements of the mouth. The phonemic restoration effect refers to the fact that context supports the understanding of continuous speech: if the s in 'legislation' is replaced by a noise, 19 out of 20 listeners do not notice anything strange when asked later. Thus, the more cues are available and the more predictable the context makes a missing sound, the stronger this effect is.

There are many theories of speech perception, and none of them is completely correct; the trace theory and the cohort theory are the most important ones. Theories of speech can be grouped by their level of analysis, by whether they assume active or passive processing, and by whether they deal with the identification of phonemes or of words. Passive theories use feature detectors or template matching; feature detectors are neurons that detect specific aspects of speech. Active models take into account the context in which the speech occurs, the expectations of the listener and memory. One of these active theories is the cohort theory, which suggests that the beginning of a word activates a cohort of words in memory with similar phonemes, from which the word is then identified. Another important active theory is the trace theory, which suggests that activating one word leads to the activation of all words connected to it.

How can patterns be perceived visually? - Chapter 8

Contours are the building blocks of visual patterns. Four fundamental characteristics of light play a role in vision: intensity, wavelength and their distribution over time and space. Contours can be seen as sudden changes in light intensity across space. The visual system divides the visual field into regions of uniform brightness, called shapes, which are separated from the background and from other shapes by contours. The regions in a retinal image where the light intensity changes abruptly are called first-order contours; a real-life example is the silhouette of a person. There are also other types of contours. Some are caused by texture differences and are not physically present but constructed by the perceptual system from the first-order contours that are there; such a contour is called a second-order or subjective contour. The visual cortex is 'made' to detect first-order contours, because V1 cells are activated by luminance or colour differences. You can think of these cells as edge detectors that signal the presence of a contour, and the various contours are pieced together in higher regions of the cortex to form shapes and patterns. Without contours, we lose our ability to see. Scientists study this using a Ganzfeld: a visual field that contains no abrupt luminance changes and therefore no contours. Many people looking into such a field report seeing a shapeless fog, and some even experience a blank out, meaning that the person feels he or she can't see anymore.

Acuity

It can sometimes be hard to detect contours and shapes, namely when details are very small and the stimulus changes that define the contours are not very large. Visual acuity is the ability of the eye to resolve details. Recognition acuity is the most common type and refers to recognizing small things, as in an eye test at the doctor's, where you look at a chart with big letters on the first row and progressively smaller letters on the rows below. A visual angle is a measure of the size of the retinal image. Looking at letters from a distance, you may be unsure whether a letter is an O or a Q, while it remains easy to tell an L from a W. Some eye tests use circles with a gap in them; the gap can be oriented up, down, left or right, the observer must indicate its position, and the smallest detectable gap is the measure of acuity. Vernier or directional acuity requires an observer to distinguish a broken line from an unbroken line. Resolution acuity is the ability to detect a gap between two bars. Acuity does not depend entirely on our receptive fields: acuities can be much finer than the size of the receptive fields would suggest. Detecting such extremely fine differences is called hyperacuity.
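
Visual angle relates an object's physical size to its viewing distance. The geometric formula below and the eye-chart numbers are standard illustrations assumed here, not values from this summary:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle (in degrees) subtended by an object of a given
    size at a given distance, both in the same units:
    2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# A 9-mm-high letter viewed from 6 m subtends only about 0.086 degrees,
# roughly 5 minutes of arc:
print(visual_angle_deg(0.009, 6.0))
```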

Acuity depends on several factors, among them the colour and the intensity of the target. If you step out of the sunlight into a dim room, you will at first be unable to read the letters of an article; as your eyes adapt to the dim room, you become able to read them.

If you increase the difference in intensity between the target and the background, you are more likely to see the target. Likewise, if you increase the time the observer views the stimulus, the target is more likely to be seen. Acuity is also best in the central fovea and drops off in the periphery: the parts of the retina with the highest acuity contain mostly cones, which work best at high illumination levels. The relationship between acuity and illumination has certain implications. One of these is night myopia, the tendency of the eye to accommodate inappropriately near in the dark, even when the object of interest is far away.

Spatial frequency

The eyes have many receptors, and it would be far too much work to analyze every point of light and its intensity separately. Scientists have therefore looked for compact ways to describe the relationship between visual arrays and perceptual variables. One such attempt is a mathematical technique, Fourier's theorem. According to this theorem, any pattern of stimulation can be analyzed into a series of simpler sine wave patterns, each of which, seen alone, would appear as a regularly varying pattern of light and dark. The light is more intense where the sine function rises to its peak and less intense where it falls; such a distribution is called a sine wave grating. Following Fourier's theorem, a number of gratings can be added together to produce any specific light distribution.
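
A classic illustration of such adding-up is approximating a sharp-edged (square-wave) light distribution by summing odd-harmonic sine gratings with amplitudes proportional to 1/n. This Fourier-series identity is standard mathematics rather than something given in this summary; a minimal sketch:

```python
import math

def square_grating(x, n_terms=50):
    """Approximate a square wave by summing its odd sine harmonics:
    (4/pi) * sum over odd n of sin(n*x)/n. More terms give sharper
    'edges' between the light and dark bars of the grating."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1
        total += math.sin(n * x) / n
    return (4 / math.pi) * total

print(round(square_grating(math.pi / 2), 2))      # ~1.0: a 'light' bar
print(round(square_grating(3 * math.pi / 2), 2))  # ~-1.0: a 'dark' bar
```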

Visual scientists have also studied the resolution ability of the eye and found that an important factor is contrast: the difference between the highest and lowest luminance levels of a pattern. They usually use a contrast ratio, the difference between the highest and lowest luminance divided by their sum. To study how people resolve changes in light intensity over space, one can use contrast matching: people adjust the intensity of the light and dark regions of one pattern until it appears to have the same contrast as another. Another way to measure sensitivity to different spatial frequencies is to determine how much contrast is needed before an observer can detect that a grating is present; this is the contrast threshold. More contrast is needed to detect gratings of higher spatial frequencies, because the eye has a high-frequency cutoff. Younger individuals are better at detecting high spatial frequencies than older individuals. The neurons that respond selectively to such patterns are called neural filters: they fire most rapidly when a specific visual pattern excites them and fire less to other patterns.
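
The contrast ratio described above is usually known as Michelson contrast; the name and the 0-to-1 range are standard conventions assumed here. A minimal sketch:

```python
def michelson_contrast(l_max, l_min):
    """Contrast ratio of a pattern: (Lmax - Lmin) / (Lmax + Lmin).
    0 means a uniform field; 1 means maximal contrast."""
    return (l_max - l_min) / (l_max + l_min)

print(michelson_contrast(100.0, 100.0))  # 0.0: no contrast at all
print(michelson_contrast(100.0, 20.0))   # ~0.67: a high-contrast grating
print(michelson_contrast(100.0, 0.0))    # 1.0: maximal contrast
```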

Contour and interaction

The Hermann grid is a famous arrangement of black squares with white interspaces. Grey smudges seem to appear at the intersections between the black squares, but if you look directly at one of the smudges it disappears. This happens because of the centre-surround organization of the ganglion cells: the white intersections are signalled as containing less light than the white streets between the squares.

Visual masking is the reduction in the visibility of a contour caused by the presence of another stimulus close to the first in space and/or time. The target stimulus and the stimulus that interacts with it can be separated in time in a couple of ways. One can present the target briefly and then replace it with a masking stimulus that overlaps the same position in space; this is pattern masking. One can also show the target and then a masking stimulus that resembles the target but has no overlapping contours; this is called metacontrast.

What you see when target and mask are separated in time depends on the interval between their presentations, called the interstimulus interval or stimulus onset asynchrony. A highly visible stimulus that is presented only briefly does not stay long in your vision, and if the masking stimulus is presented shortly afterwards, the first stimulus can become completely invisible. This is called backward masking, because the effect works backward in time. If the mask occurs too early, the amount of masking is reduced.

Some contours are created by colours, others by edges; some by shadows, others by luminance. The visual system therefore needs to group together the contours that truly belong together and to assign each contour to a surface edge or a shadow edge. Scientists examine contours by computing the gradient of image intensity in a part of the image to see where the intensity changes most, and they represent these locations in a contour map. Contour grouping can be studied by asking observers to find shapes hidden in pictures. To differentiate shapes, features must be found: every object has certain features by which you can recognize it, such as size, texture and colour. Experimenters can ask observers to search for a target among distracting items; a feature that makes the target 'pop out' is a basic visual feature, and this is called the visual search task. Researchers may also use a texture segregation task, in which observers have to identify the presence of an odd region in a briefly flashed display. Simple features can combine to create emergent features.

Ground and figure

The Gestalt psychologists are interested in how our perception comes to be organized into shapes and patterns: how some visual elements come to belong to the same figure or grouping while others belong to other groups. They look especially at how figure and ground are differentiated. Shapes can be figures, but they can also be part of the background. A ball resting on a field of grass is a figure (a single round shape) resting on a ground that consists of many shapes (blades of grass). The smaller an area is, the more likely it is to be seen as figure. The lower part of a pattern is also usually seen as figure, though not always.

Gestalt researchers also study how elements in visual patterns become organized into object-like perceptions, and they formulated laws for some of these principles, the Gestalt laws of perceptual organization. The law of proximity states that elements close to each other tend to be perceived as a unit or figure. The law of similarity states that similar elements tend to be grouped together; similarity can be based on brightness, colour or shape. The law of good continuation states that elements that appear to follow the same direction (for example a straight line) tend to be grouped together. When elements move, those that move together tend to be grouped together; this is the law of common motion. The law of closure states that people ignore gaps between elements because they want to see a closed figure. The law of Pragnanz states that the visual array will be organized into the simplest and most stable form possible: people tend to select figures with equal corner angles and side lengths, the most regular and symmetrical interpretation. Sometimes it is hard to decide what is figure and what is ground, because some shapes we tend to see exist only perceptually, not physically.

Figure boundaries can also be defined by separating regions of the visual image based on differences in visual texture. Textures are groups of tiny contour elements or shapes that do not differ in average brightness or colour.

How does perception of space and depth work? - Chapter 9

The perception of depth has two different aspects. One aspect is the perception of the actual distance of an object, an estimate of absolute distance; this involves egocentric localization, locating objects relative to our own bodies. The other aspect of depth perception is relative distance: does the book lie nearer to the glass or to the laptop on the desk? Here the observer makes object-relative localizations, estimates of the distances between objects in the environment. Relative distance is also involved in perceiving whether something is a three-dimensional object or a two-dimensional picture. Our brain is able to convert two-dimensional retinal images into three-dimensional percepts, and three theoretical approaches try to explain how:

  1. Direct perception: this theory suggests that all the information we need to see three-dimensionally is present in the retinal image, and that the brain analyzes the visual scene in terms of whole objects rather than just edges or colours. The impression of depth is immediate; no further computation is needed.

  2. Computational theory of vision: this theory suggests that all the information we need for seeing three-dimensionally is present in the visual inputs. The interpretation of the three dimensions requires complex computations and different stages of analysis.

  3. Intelligent perception: this theory suggests that we do not use only the visual information of the moment, but also information based on our previous experiences and expectations, that is, on our past history.

When you look at a photograph, you see the spatial relationships among the various items. It is easy to see the relative distances between these objects with the help of certain cues, called pictorial depth cues. They are sometimes also called monocular cues, because they are available even when you look at a scene with one eye. One depth cue is interposition or occlusion: a nearer object tends to block an observer's view of a more distant object. This is a cue for relative depth; it does not tell us how far away the objects are, only which object is closer to us. If an object is partly hidden behind another one, our visual system fills in the occluded portion very rapidly; this is called amodal completion. Light can't pass through most objects and travels in straight lines, so surfaces facing the light source will be bright and surfaces turned away from it will be in shadow. The patterns of shadow give information about the shape of solid objects; for example, when the light comes from below, the lower part of an object catches more light and the upper part lies in relative shadow. The shading that defines the shape of an object itself is called the attached shadow. A second object lying in the path of the light source can also affect the shading pattern of the first; such an object gives rise to a cast shadow. Both attached and cast shadows fall away from the source of light, and the visual system uses different information from the attached shadow than from the cast shadow in analyzing the world.

Attached shadows give information about the shape of an object through its pattern of light and dark regions. Cast shadows give a distorted silhouette of the object casting them. Attached shadows can't tell us much about relative depth, whereas cast shadows can: the bigger the perceived distance between an object and its cast shadow, the bigger the perceived distance between the object and the shadowed surface.

Depth can also be seen through aerial perspective: the image of a distant object is less distinct than the image of a nearer object, even if both have the same colour. This is caused by the scattering of light: light from a distant object travels through more atmosphere and is therefore scattered and absorbed more. The retinal image of an object also shrinks as the object moves farther away, and we use retinal size as a cue for relative distance. Familiar size can play a role in depth perception as well. If you are used to playing cards, you are familiar with their size; when you see playing cards that look smaller than usual, you will probably judge them to be farther away, and when you see a card that looks bigger than a standard playing card, you will probably judge it to be close. Relative height refers to where an object stands relative to the horizon line: below the horizon, an object closer to the horizon line seems farther away than an object farther from it.

Most depth cues discussed so far can be found in the retinal image itself. Other cues for distance arise from the way the visual system and the visual stimulus interact; these are called structural and physiological cues. One of these is accommodation, the eye's ability to keep the retinal image in clear focus: the lens becomes flatter or more curved. When the lens is more curved, near objects are in focus; when it is flatter, distant objects are. Blur can also serve as a cue for relative distance: if no other depth cues are present, observers notice that blurred objects are not at the same distance as non-blurred objects. Another distance cue is binocular vision. An object is seen best when its image falls on the two foveas, and the eyes move to bring the image onto the fovea of each eye. Vergence movements are movements of the two eyes in opposite directions: a convergence movement is the inward rotation of the eyes, made when an object is close to you, and a divergence movement is the outward rotation the eyes make to look at objects that are far away. When you yourself are moving, further depth cues become available. If you sit in a train and fixate a certain point outside, things closer to you than the fixation point appear to move in the direction opposite to the train's, while things farther away than the fixation point appear to move in the same direction as the train. The nearer an object is to you, the faster its image moves across the retina relative to other objects. This is called motion parallax, and it gives us information about the relative distance of objects. The kinetic depth effect occurs when motion cues give information about relative depth.

Having two eyes helps us greatly in estimating relative depth. Our two eyes are horizontally separated but overlap in their view of the world, so each eye has a slightly different view of objects and thus a slightly different image of the world; this difference between the two images is called binocular disparity. Combining the two different images into a single unified percept is called fusion; when the two views can't be put together, double vision results, which is called diplopia. Although individual depth cues are sufficient by themselves to give some perception of depth, the cues work together to give us more information about three-dimensional arrangements in space, and adding depth cues increases the accuracy of depth estimates. When you see a car drive by and disappear behind a house, you know that the car is farther away than the house. This 'vanishing' of objects behind other objects is called surface deletion, and the reappearance of the object is surface accretion. Sometimes depth cues conflict, which can reduce the perception of depth. This often happens with pictures: pictures are flat and contain many cues to flatness, so less weight is given to the depth cues and the picture looks less three-dimensional.

Depth is one aspect of the perception of space; the location of an object is the other. The direction of an object relative to our bodies is called its egocentric direction, while allocentric direction is the location of an object relative to another object. Egocentric direction involves two types of judgments. The first is bodycentric direction, in which the body is used as the reference for left and right. The second is headcentric direction, which uses the head as the reference for left and right. Eye movements help us determine the visual direction of an object.

Research shows that simpler animals are born with the perception of direction and distance, while in higher animals (most mammals) experience plays a bigger role. Researchers investigate the factors that influence depth perception with controlled-rearing procedures: they rear animals in total darkness from birth until testing, so the animals have had no visual experience. Animals that need experience with visual depth cues to develop normal depth perception will then have problems responding to distance cues, whereas an animal born with depth perception will respond to distance cues even after being reared in the dark. Some researchers suggest that certain animals have an inborn ability for depth perception but still need experience to react normally to depth cues. Cats and rats reared in the dark show little depth discrimination when first tested, but as they gain experience in the lighted world their depth discrimination improves enormously, until they are indistinguishable from normally reared animals. However, there are sensitive periods in an animal's development: at certain ages, depriving an animal of a type of visual experience means the corresponding visual behaviours will never be learned at all.

How does the brain process perceiving objects and scenes? - Chapter 10

Our world is filled with objects, and the information vision gives us about them is the pattern of light reflected from them. The image is this pattern of light, a two-dimensional array, which can be further specified by the total intensity of the light (brightness) and its wavelength (hue). These two factors are determined by four general aspects of the environment. The first is the light source: the intensity and direction of the regions that produce light in the environment. The sun and light bulbs are primary light sources; things that reflect light (like the moon reflecting sunlight, or walls reflecting a light bulb) are secondary light sources. The second aspect is surface reflectance. Some surfaces absorb short wavelengths and others absorb long wavelengths, so we perceive different colours from them; likewise, some surfaces are highly reflective and look glossy, while others absorb much of the light and look matte. The third aspect is the orientation of the visible surfaces: the angle between the direction of the light source and the surface of an object, and the angle between the surface and the viewer. As these two angles become more unequal, we perceive the image as darker. The fourth aspect is the viewing position, the relationship between the scene and the viewer's eyes: when you move around, you see the scene from different points.

Basic Assumptions

The four aspects mentioned above form an interconnected web: if three of them are known, the fourth can be derived. The problem, however, is that usually not even three of these aspects are known with certainty, so the visual system has a lot of guessing to do, and its guesses must rely on cues. Many of these cues have been studied through visual illusions, which occur when the guess the visual system makes about a scene is inappropriate; by studying such situations, researchers can uncover the general rules the visual system uses. When the visual system interprets a scene, it makes several assumptions:

  1. Scenes are lit from above: part of an object may be shaded. This can occur because the object is curved, so that some parts reflect more light to the eye than others; this is also a cue for the perception of depth. Another explanation for the shading is that part of the surface may simply be darker in colour, in which case some regions of the surface really are darker than others and there is no difference in depth. The visual system tends to interpret shading on the assumption that the light shines from above the scene.

  2. Surfaces are generally convex: when pictures of surfaces contain few cues to three-dimensional shape, the visual system usually interprets them as convex (solid). Things are seen as bumps rather than hollows.

  3. Objects are attached to surfaces: our visual system presumes that objects are attached to surfaces, even when we can’t see those surfaces.

  4. Objects are generally viewed from above: our visual system interprets objects as though they are being viewed from above rather than from below.

  5. Generic viewpoint: humans tend to interpret the relation between two or more edges as though it would hold from other viewpoints as well. When you look at a wall and see lines that seem to meet in the middle, you will probably assume that they would still meet if you looked from a different point, but this often turns out not to be the case.

Constancy

We would see strange things if the only information we had about objects and scenes came from the momentary images in our eyes: people would grow larger as they approached us and shrink as they walked away. Different types of constancy prevent this. When somebody walks away from you, the distance between your eyes and that person increases and the retinal image shrinks; seeing an object as the same size despite changes in objective distance and retinal image size is called size constancy. The more cues there are to distance, the greater the sense that an object is farther away and the stronger the size constancy effect. Sometimes depth cues indicate that some objects are farther away than others. In one of the most popular visual illusions, two horizontal lines of equal length are drawn between two converging lines; the upper line seems a bit longer. This is called the Ponzo illusion, and it is caused by the converging lines being interpreted as cues for distance, an automatic and unconscious process.

An example of size constancy in everyday life is the moon illusion. The moon is always the same size, but on the horizon it appears larger than when it is high in the sky. Our brains treat the moon and stars as if they were painted on a large dome, and this dome is perceived as a flattened bowl. When you look toward the horizon you see many cues for depth, but when you look up into the sky there are no visual cues for depth, so the moon at the horizon looks bigger than the moon high in the sky. Shape constancy is the perception that the shape of an object stays the same despite variations in the shape that is projected from the object to the eyes. Size constancy and shape constancy are closely related, because they both depend on distance perception.

Perceptual constancy also affects light. Lightness constancy means that we perceive what proportion of light is reflected from an object's surface independent of the amount of light shining onto that surface. Seeing the lightness of an object depends on two things: the amount of light from all light sources that falls on the object, and the proportion of the light falling on the object that is reflected toward the observer's eye. The first is called external illuminance, the second surface reflectance. White surfaces reflect most of the light falling on them and black surfaces absorb most of it, so when the two are viewed under the same light, the white surface appears lighter than the black one. Colour constancy means that the perceived colour of an object stays similar under different lighting conditions. Human beings see a red apple as red under the bright sun and under fluorescent blue or yellow light. Position constancy means that even though objects are often in motion across the human retina, humans do not experience them as moving. There are two types. Object position constancy means that when somebody moves around a stationary scene, the positions of the objects relative to each other appear to remain the same despite the change in viewpoint. Egocentric direction constancy means that we keep perceiving the directions of objects relative to ourselves correctly, even though our own movement constantly changes the spatial relationship between us and those objects.
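
The two quantities combine in a one-line formula: the light reaching the eye is the product of external illuminance and surface reflectance. The snippet below uses made-up numbers to show why the visual system cannot read reflectance off raw luminance, and how comparing a surface with its surroundings (one common account of lightness constancy) recovers it.

```python
def luminance(illuminance, reflectance):
    # Light reaching the eye = external illuminance x surface reflectance.
    return illuminance * reflectance

# White paper (90% reflectance) in dim light vs. black paper (5%) in bright light:
dim_white = luminance(10.0, 0.90)       # 9.0 units reach the eye
bright_black = luminance(1000.0, 0.05)  # 50.0 units reach the eye
# The black paper actually sends MORE light to the eye, yet we still see it
# as black, because lightness is judged relative to the surroundings, which
# share the same illuminance and so cancel it out:
print(dim_white / luminance(10.0, 1.0))       # 0.9  -> seen as white
print(bright_black / luminance(1000.0, 1.0))  # 0.05 -> seen as black
```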

Visual objects

This whole chapter is about objects, and we seem to know exactly what an object is. It is, however, difficult to describe what an object is. No single approach can tell us exactly, and in this section a couple of different approaches are discussed. Humans can choose to view one object and not another. The mental selection process involved is called covert spatial orienting. Using this process, one can inspect someone or something in the visual field without letting other people know where one is looking. This may come in handy when you are playing a team sport (like basketball). Objects that are not the focus of covert orienting are seen less well. Figure and ground also play an important role in understanding objects. Some regions have a definite shape, while other regions are much less well defined. A popular image consists of a black and a white region: the white region resembles a vase and the black regions resemble the faces of two people seen from the side. When you look at the vase, the black regions are seen as a black background. The shape that has ownership of the edge at an image boundary is called the figure; the ground can be seen as the background of the image. When you see the vase, it is the figure and the two black faces are the ground; when you look at the black faces, they are the figures and the white vase is the ground.

Human beings have made many images in which one shape is seen as the figure and the other as the ground. Usually the shape that is more enclosed, has greater contrast with the background, is more symmetrical, is more familiar to the observer and is smaller in size will be seen as the figure. Sometimes when you look at an object you may see one thing, but if you look closer you will discover more details and even see more objects. Some visual objects have multiple levels: hierarchical stimuli contain different kinds of information at several levels. Theories of visual object identification distinguish between two types of psychological processes. One is data-driven, meaning that information arriving at the receptors is processed by a fixed set of rules. The second is conceptually driven: higher-level processes such as memory and past experience guide a search for particular patterns in the stimulus or image. The gist is important information for perceptual understanding; it is the meaning of a scene. The layout of a scene refers to the relative locations of the objects and surfaces in the scene.

How do we perceive time? - Chapter 11

Human brains respond to changes, and the basic unit of human perceptual experience is the event. Objects and actions interact together and form events. When you have objects and events, you can plan future actions. The meaning of language is not in the words themselves, but in the sequence or timing of those words. The sequence of words determines the actual meaning humans perceive: a man-eating shark is something quite different from a shark-eating man. Both phrases contain the same elements, but the sequence or timing of the elements determines the meaning we perceive. Time is needed to understand the actions and relations between objects, and perception is limited when the time dimension is lost. Even to perceive still pictures, moment-to-moment changes in stimulation are needed. When you look at a picture and don't move your eyes at all, the retinal image is still constantly in motion. This is because we have eye movements over which we have little control: the eye jiggles and shivers in its socket because of tiny microsaccades. These cause the retinal image to move from place to place on the retina, so that the contours in the retinal image move continuously over different retinal receptors. Researchers can give observers special contact lenses that glue the image in place, so that it stays on the same retinal receptors no matter how the eye moves. After a few seconds, the entire visual field fades from consciousness: there are no contours and no colours. When the image is flickered, the visual field reappears in consciousness. Time is thus a critical property for stimulating the individual receptors; if there is no change, receptors cease to respond.

Our perception of an event does not occur in real time, but with a delay. When somebody is stimulated by a flash of light (or another stimulus), the neural activity associated with the flash begins after a small delay. The neural activity, however, continues for longer than the original flash lasted, so the perceived duration of the light is a sort of illusion: it appears longer than it actually is. This overestimation of the length of a stimulus is called visible persistence. The delay between a visual stimulus and our ability to perceive it can cause problems in responding effectively to threats in our environment: the stimulus needs to reach our retina, we need to perceive it, and then we need to act. When two brief flashes are shown one after the other in separate spatial locations, temporal integration occurs: the two flashes appear to form a single unified stimulus. When two brief flashes are shown in succession in the same spatial location, backward masking occurs, which means that the second of the two flashes is perceived more accurately than the first. When you are shown a visual illusion containing spatial inconsistencies (like the prints of Escher), it takes some time to see that there are inconsistencies; the eye can't detect them immediately. When a print is made smaller, fewer time-consuming eye movements are needed to look at it. However, the size of a print is not the only factor that affects the time needed to detect an inconsistency: the amount of information also plays a role. The more information, the harder it is to detect inconsistencies.
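
Visible persistence can be turned into a toy model: give each flash a neural response that starts late and ends even later, and treat overlapping responses as one fused percept. The latency and persistence values below are illustrative placeholders, not measured constants.

```python
def neural_response_window(onset_ms, stimulus_ms, latency_ms=50, persistence_ms=100):
    # The neural response starts after a small delay and outlasts the flash,
    # so the perceived duration exceeds the physical one (visible persistence).
    start = onset_ms + latency_ms
    end = onset_ms + stimulus_ms + latency_ms + persistence_ms
    return start, end

def flashes_fuse(onset1_ms, onset2_ms, stimulus_ms=10):
    # Two brief flashes are perceived as one event when their neural
    # response windows overlap (temporal integration).
    s1, e1 = neural_response_window(onset1_ms, stimulus_ms)
    s2, e2 = neural_response_window(onset2_ms, stimulus_ms)
    return s2 <= e1

print(flashes_fuse(0, 60))   # True: responses overlap, one fused percept
print(flashes_fuse(0, 300))  # False: seen as two separate flashes
```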

Sometimes a target can't be seen because another stimulus is presented close to it in space and time. This is called visual masking. Forward masking means that when two stimuli are presented, the first interferes with the perception of the second. Simultaneous masking means that perception is impaired because another stimulus is present at the same time. Backward masking means that the presence of a second stimulus interferes with the perception of the first. Forward masking is the weakest of the three types and only occurs when the first and second stimulus are very close in time. For backward masking, presenting the two stimuli close together in time is not required. There are two types of backward masking: temporal integration and object substitution. Object substitution reflects the fact that it takes a certain amount of time for a target to be processed well enough for us to recognize it; when the processing of the first pattern is not completed before a second pattern appears in the same spatial location, masking occurs. Temporal integration is basically the same as forward or simultaneous masking: it occurs when two stimuli are presented very close together.

Sometimes it is a problem to figure out which parts of the information you receive belong together. You may receive two stimuli almost at the same time and have to figure out whether they belong together or not. Some researchers call this the temporal stability and plasticity dilemma. Temporal stability refers to maintaining a stable percept of an object that is undergoing change, because the stimulus changes as people move or as other stimuli enter the visual field. Perceptual plasticity is the process that attempts to segregate the parts of the sensory array.

The experience of time

Human beings have no 'time organ' that can detect the passage of time. The human notion of time can be associated with some form of internal clock, and the experience of change can also influence our perception of time. Time affects the way in which humans perceive, act and think; every human language has verbs to specify time. Time cannot be perceived directly, which makes it difficult to study. Time perception involves two qualities: awareness of a present moment and the impression that time passes. The subjective now is the ongoing consciousness of the current experience. Many researchers call this the specious now, because the conscious experience happens with a tape-delay. Duration estimation is the perception of how much time has elapsed between two events. The perception of order or sequence involves determining which event came first, and so forth. The planning of an ordered sequence is anticipating things to come that have to happen in a certain way, like planning what to say (because you have to order your words correctly if you want others to understand you). There are two proposed mechanisms by which we perceive time. The first is the biological clock: on this theory, there is a physiological mechanism that humans can use as a timer for the perception of time. The second is the cognitive clock: according to this view, the sense of time is derived from cognitive processes, based on sensory information, the number of events, and the attention paid to cognitive events.

The body shows physiological processes with periodic changes. The warmest point of the body occurs in the afternoon, the coolest during the night; pulse and blood pressure also show day-night variations. These are called circadian rhythms. These activities are created by an internal biological clock and are not just functions of the regular changes in light and temperature. People have different cycle lengths, but most humans have a cycle of approximately 25 hours, whereas our day is 24 hours long. The synchronization of the internal clock with the local day-night cycle is called entrainment: we adapt to the local environment. Light is the primary zeitgeber, which means that light is the primary time-giver; many animals have an internal clock that is synchronized to light. Researchers have found that the suprachiasmatic nucleus controls the timing function of the brain. Other events also affect perceived time: when the body temperature increases (as when somebody has a fever), the speed of physiological activities increases too. Cognitive clock theory suggests that the perception of time is not based on physical time but on the mental processes that occur during an interval; it depends on the mental monitoring of change. The more events happen or the more changes occur, the faster the cognitive clock ticks and the longer the estimate of the time that has passed. Also, when the amount of information processed increases, the estimated duration of the interval increases: the more things you store in memory, the longer you will judge the elapsed time to be. The temporal processing model suggests that the more you pay attention to the passage of time, the longer a time interval appears to be.
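
A cognitive clock of this kind is easy to caricature in code: judged duration grows with the number of events stored during the interval. The scaling constant k below is invented purely for illustration; it is not a parameter from the textbook.

```python
def estimated_duration(physical_seconds, events_processed, k=0.05):
    # Cognitive-clock sketch: the more changes stored in memory during an
    # interval, the longer the interval is judged to have lasted.
    return physical_seconds * (1 + k * events_processed)

# Two identical 60-second intervals, one eventful and one nearly empty:
print(estimated_duration(60, events_processed=20))  # judged ~120 s
print(estimated_duration(60, events_processed=2))   # judged ~66 s
```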

How do we perceive motion? - Chapter 12

Researchers have long wondered whether the perception of motion is a primary aspect of vision or whether it is derived from more primitive aspects of vision (like the perception of space and time). Research showed that motion was related to spatial location at different points in time. Many researchers believe that motion and space are the primary visual experiences from which the perception of time is derived. There are single neurons that are sensitive to the presence of motion in a particular direction and at a particular velocity. One of the most famous visual illusions is the waterfall illusion: if you look at a waterfall or another image that moves continuously in one direction and then look at a stationary object (like a telephone), that object will seem to move in the opposite direction for a short period of time. It seems that the motion-sensitive neurons have become fatigued, which makes the system see motion in a direction opposite to that of the original stimulus. These cells are located in area V5. The technique used to study the waterfall illusion in the laboratory is called selective adaptation. During the 1970s the hypothesis that motion is a primary sensation was confirmed in physiological studies.

Researchers think that a specific arrangement of neurons can 'create' neurons that are sensitive to the direction and speed of movement; such an arrangement is called a Reichardt detector. There are two general visual pathways in primates, and the older tectopulvinar pathway is more involved in motion perception than the geniculostriate pathway. Within the geniculostriate system there are two divisions with their own specialties: the magnocellular system is more responsive to moving stimuli, while the parvocellular system is more involved in form and colour perception. The perceived speed of a moving target depends on where it is in the visual field, because magno ganglion cells are not equally distributed across the retina. Motion in the periphery appears to be of higher velocity, and the human ability to detect target movements increases with distance from the fovea.
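
The Reichardt detector itself is simple enough to sketch: each of two neighbouring receptors is correlated with a delayed copy of the other, and the difference between the two correlations signals direction. The discrete-time sampling below is a minimal illustrative version, not a physiological simulation.

```python
def reichardt_detector(left, right, delay=1):
    """Minimal Reichardt motion detector over two receptor signals.

    left, right -- lists of activations sampled over time at two
                   neighbouring retinal positions (left of right).
    Positive output suggests rightward motion, negative leftward, ~0 none.
    """
    total = 0.0
    for t in range(delay, len(left)):
        # Correlate each receptor with the *delayed* signal of its neighbour.
        rightward = left[t - delay] * right[t]
        leftward = right[t - delay] * left[t]
        total += rightward - leftward
    return total

# A bright spot passing left -> right excites the left receptor first:
left = [1, 0, 0, 0]
right = [0, 1, 0, 0]
print(reichardt_detector(left, right))  # positive => rightward motion
```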

We can perceive motion in two ways: by moving our eyes to follow a moving target, and by detecting shifts in the relative positions of parts of the visual image. The system that responds to image changes is called the image-retina system, and the system that interprets motion from our eye movements is called the eye-head system. The ability to judge differences between two velocities is facilitated by other, stationary stimuli. The image-retina system draws on two sources of motion information. One is subject-relative change, which provides information only about the movement of the target relative to the observer's position. The second is object-relative change, which involves the movement of the target relative to another object. Problems can arise when multiple stimuli are in motion. When you see three spots of light appear at one side of a screen and a couple of seconds later three other spots appear at the other side, you will probably think that the first spots have moved.
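
One way to picture the division of labour is as an addition: the eye-head system adds the eye's own velocity back onto the retinal slip reported by the image-retina system. This is a deliberately crude sketch; the function name and the deg/s figures are illustrative assumptions.

```python
def perceived_motion(retinal_slip_dps, eye_velocity_dps):
    # Image-retina system: motion of the image across the retina (deg/s).
    # Eye-head system: adds back the eye's own velocity, so a tracked
    # target is still seen as moving and a scanned scene as standing still.
    return retinal_slip_dps + eye_velocity_dps

print(perceived_motion(10.0, 0.0))    # still eye, moving target: 10 deg/s
print(perceived_motion(0.0, 10.0))    # eye smoothly tracking target: 10 deg/s
print(perceived_motion(-10.0, 10.0))  # eye sweeping a static scene: 0 deg/s
```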

The perception of motion can also bring about other problems: sometimes people can't tell whether something is moving or not. Researchers presented participants with a dot that was moving slowly, and most of the participants couldn't tell whether the dot was moving or not. When a stationary dot was placed next to the moving dot, participants could tell that one of the dots was moving; interestingly enough, they couldn't tell which one. When the researchers changed the context and surrounded the dot with a stationary rectangular frame, observers could immediately tell that the dot was moving rather than the frame. The researchers then reversed the conditions, making the dot stationary and the rectangular frame move. This created an illusion: observers thought the dot was moving instead of the frame. This is called induced motion. The mind can also be fooled into seeing motion by a rapidly presented sequence of still stimulus frames; this is called apparent motion.

Functions of motion

One of the most important functions of motion is segmenting figure from ground. Some animals have visual systems that are only active when a moving stimulus is present; these systems react only to changes in the environment. In human beings, motion assists in determining figure-ground relationships through the principle of common fate: portions of an image that move together are seen as the same object or surface. Motion can help us identify objects. It also helps us to see the shape of an object and its three-dimensional surface structure: you can see much when you look at an object, but you can see even more (like the back of the object) when it moves. The characteristics of an object revealed by motion are cues that help identify it, and some objects can only be identified by their unique pattern of motion. A well-known example is a flying bird. When a bird is high up in the sky, you can't really see much of its shape or colour, but the motion cues tell a story: larger birds beat their wings more slowly, and you will be able to see whether the bird is a raptor or a starling. Human beings can recognize a friend in a group of people by studying the motions the people in the group are making. Another function of motion is to evaluate the speed and direction of an object relative to other objects. For human beings (and other animals) it is important to know in which direction and at what speed an object is going: the object might be a potential danger, and we need to know when to get out of the way. Motion of certain objects may also induce apparent movement of the self. When you are sitting in a bus and the vehicle beside you starts to move, you may feel that you are moving; this is called vection. One of the most important functions of motion perception is the sense of balance. Our internal systems work together to coordinate the maintenance of an upright posture as we move our bodies or parts of our bodies. Even the most primitive animals have organs that are sensitive to changes in motion of the body.
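
The principle of common fate mentioned above can be sketched as clustering image elements by their velocity vectors: elements that move the same way end up in the same group. The tolerance parameter and the point data below are invented purely for illustration.

```python
def group_by_common_fate(points, velocities, tolerance=0.1):
    # Gestalt common fate: image elements that move together are grouped
    # as one object. Points whose velocity vectors match within a
    # tolerance are assigned to the same group.
    groups = []
    for p, v in zip(points, velocities):
        for group in groups:
            gv = group["velocity"]
            if abs(gv[0] - v[0]) <= tolerance and abs(gv[1] - v[1]) <= tolerance:
                group["points"].append(p)
                break
        else:
            groups.append({"velocity": v, "points": [p]})
    return groups

# Two dots drifting right and two dots falling: two perceived objects.
pts = [(0, 0), (1, 0), (5, 5), (6, 5)]
vels = [(1.0, 0.0), (1.0, 0.0), (0.0, -1.0), (0.0, -1.0)]
for g in group_by_common_fate(pts, vels):
    print(g["velocity"], g["points"])
```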

How can the brain filter information? - Chapter 13

Our world is really busy. Not only do we have a lot of things to do, but a lot of things are happening around us. We hear a lot of things happening around us and some events demand an orienting response. The attention of a person can be drawn to a source of sudden change in the sensory world. To some stimuli you will pay attention, but to others you will not. When you listen to one event, you will probably not pay attention to another. You are filtering out some events and paying attention to one. The available separate sources of information about the world are called information channels. Paying attention to one single event is called focused attention. Sometimes people also divide their attention among several events and this is called divided attention.

People can select among several stimuli by orienting their sensory receptors toward one stimulus and away from the others. This is the simplest way to select among different stimuli, and it requires actively paying attention to the world. The most primitive form of orienting response is adjusting the sense organs by turning one's head, eyes or body; this way, people can pick up more information about an event. Cats and dogs prick up their ears when they hear a sudden sound. Such movements that occur automatically are called orienting reflexes. The orienting reflex is very reliable, and newborn infants show it too. Loud sounds, movements and suddenly appearing bright lights are the most effective orienting stimuli. Sudden events can also pause breathing, dilate the pupils and decrease the heart rate. Responses that can be seen are called overt orienting responses; there are also unseen, covert orienting responses, and together they enhance our perception of an event. Covert and overt orienting responses usually occur together, but some researchers think that covert orienting can occur without overt orienting being present. Our attention is usually drawn by suddenly appearing targets, and it is oriented to the object itself as well as to the location of the object. The appearance of an object usually captures attention when it is a new object (not one you have seen in the previous few minutes). Stimuli in one modality (for example, visual) can capture attention in another modality (for example, auditory).

Researchers have studied the physiology of the brain and looked at the neural mechanisms of orienting. When monkeys attend to visual stimuli, some areas of their brain contain neurons that fire more vigorously than they do when the object is not attended. Several cortical areas are involved in orienting attention. The superior colliculus contains many neurons that fire when a stimulus appears at a specific location in the visual field, and their firing is enhanced when there is a shift of attention toward a visual target. The posterior parietal lobe also has neurons that fire when the individual makes an eye movement toward a target. The parietal lobe is an important area for covert orienting: when it is damaged, an individual is unable to pay attention to and notice stimuli from one half of the visual field (hemifield neglect). The electrical activity of the neurons can be studied with EEG.

Filtering

When an individual has oriented to an event and continues to attend to it and to no other events, this individual is said to be filtering out all information except the information about that event. When you are at a party talking to somebody, you may hear a familiar voice at the other side of the room. You will probably listen to this familiar voice and occasionally nod to the person you are 'talking' to. You will not remember what the person you were supposed to be talking to has said in the past five minutes, but you will know perfectly well what the person you were actually paying attention to has said: you have filtered out everything else. Researchers showed participants two overlapping video programs, one of a hand game (trying to slap each other's hands) and one of a ball game (throwing a basketball around while moving about). Some participants were asked to report each attacking stroke in the hand game; others were asked to report each throw of the ball from one player to another. Odd events were also included: in the hand game the players shook hands, and in the basketball game the ball was thrown away and the players played with an imaginary ball for a couple of seconds. People who had to pay attention to the hand game did not notice the odd event in the basketball game, and vice versa. This study has been repeated many times, and some newer studies even have a person dressed in a gorilla suit walking across the floor; people don't seem to notice this either. This is called inattentional blindness. The temporal lobe seems to be responsible for this, as it is the place in the brain that seems to tell us what something is; it has neurons that are responsive to different figures and colours. Some neurons may respond to rectangles, while others respond to circles. The question arises whether we are able to pay attention to more than one source of information at the same time. Researchers have asked participants in the hand-and-ball experiment to pay attention to both videos, but performance deteriorated dramatically. Dividing visual attention between two sources is very difficult, and so is dividing attention between two auditory information channels. You can switch back and forth between two information channels, but when both are demanding, much information is lost.

Searching

Human eyes explore the visual field with high-speed ballistic movements called saccades. Meaning and intention can guide our eye movements: when you are asked to look at a picture and estimate the ages of the individuals in it, or their wealth, you will look at different places to find the answer. Human beings have to learn how to inspect spatial locations in a systematic order; this ability is fully developed when children are six or seven years old, and it becomes more difficult again for the elderly. Knowledge of the world is also used to guide visual inspection. This may help explain why we look longer at unusual objects in a visual scene; we also tend to recognize and remember these unusual objects better. However, inspecting a scene doesn't mean that we see everything in it. There is a decreased likelihood that people will move their eyes and their attention back to a location they have recently looked at. This is called inhibition of return.

Sometimes searching for a target is easy and other times it is hard. When you search for a target that differs from the distracters by a characteristic the distracters don't have, this is called a feature search. Searching for a target defined by a conjunction of features is called a conjunction search. Feature searches are easier than conjunction searches. According to the feature integration theory, each feature of a stimulus is registered separately; focusing attention on one location at a time can help identify an object from a combination of features.

A conjunction search can sometimes be carried out as two feature searches; this is called guided search, and it may speed up the search. Sometimes items can be grouped into smaller sets of stimuli that can be checked at the same time, which may also speed up the search; this is called a parallel search. When you have to search for a feature in an image with many distracters, search time depends on the number of distracters present; this type of search is called controlled search. However, after much practice with such a task, search time no longer depends on the number of distracters: the search has become a simple feature search and is automatic. Automatic search has positive and negative consequences. The negative ones are that it becomes more difficult to keep yourself from responding to automatic targets, and more difficult to remember the items found during automatic search. The positive one is that we can attend to more than one task at a time.
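
The contrast between automatic (feature) and controlled (conjunction) search is often summarized as flat versus rising reaction-time curves over set size. The constants in the sketch below are stylized for illustration, not experimental data.

```python
def predicted_search_time_ms(set_size, search_type):
    # Stylized reaction-time patterns: feature search is roughly flat in
    # the number of distracters, controlled/conjunction search grows with
    # every extra item that has to be checked.
    base = 400  # illustrative baseline response time in milliseconds
    if search_type == "feature":      # target pops out in parallel
        return base
    if search_type == "conjunction":  # items inspected (roughly) one by one
        return base + 50 * set_size
    raise ValueError(search_type)

for n in (4, 16, 32):
    print(n, predicted_search_time_ms(n, "feature"),
          predicted_search_time_ms(n, "conjunction"))
```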

It is difficult to predict where and when a signal will occur. Human beings have orienting mechanisms that draw their attention to conspicuous stimuli, and they have strategies for investigating likely locations where important stimuli might appear. Individuals may receive advance cues that help them predict when an event will happen, or they may rely on past experience; either way, they can form an expectancy about the event and prepare for its occurrence.

Attention may mean many different things, and there are many ways in which it can be studied. It is therefore difficult to form a coherent and widely accepted theory of attention. There are different approaches to understanding attention, but no single one can explain everything that is known about it. All theories of attention try to explain the filtering aspect. The oldest surviving theories are the structural theories, according to which perceptual attention is structurally limited: only one or a few stimulus inputs can pass at one time. Other theories have looked at focusing and dividing attention. Dividing attention between two tasks, or searching for more than one target, is more difficult than focusing on one task. A task uses up attentional resources; if the demand is higher than the resources available, performance suffers. This is the theory of limited attentional capacity.

Consciousness

Everybody knows what consciousness is (sort of), but it is really hard to define. Some researchers think it is better not to define consciousness, because it is so hard to agree on what it is. However, there must be some guidelines as to what consciousness is, because people need to know what they are talking about. The first distinction to make is between conscious and unconscious states. People who are in a coma, under anesthetic or even in a deep sleep are unconscious; people who are awake and behave normally are in a conscious state. Some states are ambiguous, like meditation, sleepwalking, intoxication and dreaming. Psychologists in the 19th century thought that anything that was not in a person's consciousness could not affect that person's behaviour. This is also why Sigmund Freud became controversial: he claimed that it was unconscious processes, not conscious ones, that affected behaviour. Nowadays we accept the fact that unconscious events can affect our behaviour. Preconscious factors involve perceptual stimuli that are unconscious at this moment but can be brought into consciousness. When you pay attention to something, it comes into your consciousness. The lack of consciousness for stimuli one is not paying attention to is called inattentional blindness. Most of the time we think we are aware of everything that happens around us; this is called the illusion of complete perception.

Perceptual awareness means that we can report something's presence and that we are aware of it. The study of consciousness places psychologists in a position known as monism. This means that whatever consciousness is, it is a phenomenon of the world like any other phenomenon. Monists can be divided into two groups: materialists and mentalists. Materialists believe that the material world is fundamental and that consciousness is a phenomenon of matter. Mentalists believe that consciousness is fundamental and that the physical world arises from consciousness itself. Most researchers are materialists. Monism is quite the opposite of dualism. Dualism proposes that consciousness is a phenomenon of the mental or spiritual world: there is a physical and a mental world, and consciousness falls in the mental domain. Property dualism is popular nowadays. This form of dualism holds that everything that exists is made up of physical matter, but that some of that matter is special, giving certain things (like consciousness) special, nonphysical properties. Dualism is problematic because it places consciousness outside the realm that can be studied by science; this was probably done because of religious considerations.

There seem to be several forms of consciousness. This is called the stream of consciousness: perceptions and thoughts are constantly changing, yet everything seems to happen consciously. Substantive states of consciousness are intervals of time in which a person's consciousness is occupied with a particular perceptual thought. You might, for example, look at a picture to see whether you remember the person in it.

Sometimes we are aware of relationships between thoughts but not of the particular thoughts themselves. These are called transitive states. An example is being late for class and needing to hurry: you keep thinking about hurrying up, but once you are on your bike you cannot recall whether you turned off your television. Block developed four definitions for discussing consciousness. Phenomenal consciousness is the actual experience of stimuli. Access consciousness is the availability of mental contents for verbal report. Monitoring consciousness is thought about one's own experience that is distinct from that experience (thinking about the fact that you are thinking). Self-consciousness refers to self-awareness.

Attention

Attention and consciousness are connected. When you do not pay attention to a certain stimulus but look at another, the first stimulus is kept out of consciousness simply because your attention is drawn away from it. Attended things are most often in consciousness, but not always. People with a lesion in the parietal lobe can detect a stimulus in their left visual field when it is presented alone, but when it is presented at the same time as another stimulus in the right visual field, they are no longer conscious of the stimulus in the left field. This is called visual extinction. Some research shows that even when people were not consciously aware of a visual cue, their behaviour was still influenced by it. When an individual looks at a stimulus and consciousness has reached a reasonable conclusion as to what it represents, no further investigation of the stimulus is needed. However, when the individual is told that the stimulus can also be seen in a different way, attention turns back to the image to search for this other interpretation. A popular example is the image in which you can see either a posh young lady or an old woman. When you look at this image you will probably see a young lady; if you turn the page upside down, you will probably see an old woman.

Deficits in consciousness

Certain pathological conditions may affect perceptual awareness. They may reduce the ability to identify objects or their specific properties, and they may also affect the ability to direct or distribute attention. These conditions are called agnosias. People who suffer from agnosias are aware of some aspects of the sensory world but unaware of others. Agnosias can be caused by carbon monoxide (CO) poisoning or by diseases and injuries that damage the functioning of certain brain parts, like Alzheimer's disease.

People with colour agnosia can recognize colours and describe objects, but they can't describe the colour of an object. Some agnosic patients can recognize objects presented in familiar orientations, but not when they are presented from an unusual angle. Some patients can hold only one object at a time in consciousness and cannot pay attention to more than one object at a time; these patients suffer from simultagnosia. Visual object agnosia and simultagnosia are grouped together under the label visual integrative agnosia. Visual hemineglect refers to the inability to consciously see one side of the world. When people with this condition are asked to draw a symmetrical object, they produce a distortion on one side.

Agnosia is not the only distortion of consciousness; some people have even greater visual deficits. People with damage in V1 have scotomas. If a stimulus falls in a scotoma, the person cannot report the existence of that stimulus in consciousness, yet is able to reach out and touch the location of the stimulus and move his or her eyes toward it. It seems that consciousness and localization are two different things, and there might even be two different visual systems: a perceptual system that tells us what is out there and displays it in consciousness, and an action system that allows us to respond to the location of a stimulus. The action system does not require knowledge of what an object exactly is.

Awareness

Some emotionally arousing stimuli, such as human faces, can be processed without awareness. Emotional stimuli are processed unconsciously even when individuals are not aware that the stimulus has been presented; such a stimulus may still produce emotional responses and autonomic nervous system arousal. Change blindness tasks involve looking for a difference between two scenes that are almost identical, except for a single change. The two scenes are shown one after the other, multiple times, until the person indicates the change. The change is perfectly visible, but it does not appear in consciousness until the person pays attention to it. During this task, researchers can use fMRI to record brain activity and look for the neural correlates of consciousness. When the change was detected, increased activity was found in the parietal and dorsolateral frontal cortex. Other responses were found in other areas of the brain, depending on whether the stimulus was a face or a place. The data show that V1 is necessary for visual awareness, but this area is not sufficient to explain the full pattern of conscious awareness of stimuli.

Consciousness is a complex process, so it is likely to serve a function. One function of consciousness seems to be to plan, to interpret the world and to code information. Without it, we would feel as if there were still things that needed to be processed, and we would be uncertain and eventually paralyzed from acting. Consciousness helps us to select from processes that interact and influence each other.

What is the life span development approach? - Chapter 15

There are two different approaches to considering how an individual's perceptual functioning changes: the life span development approach and the perceptual learning approach. The first is discussed in this chapter and the latter in Chapter 16. The life span development approach views individuals over their entire lifespan, whereas the perceptual learning approach is concerned with changes in perception that emerge from experience; it is the short-term approach. The two approaches are not mutually exclusive, and both are necessary to understand perception.

Development of the nervous system

Testing: When testing infants, physical measures are favoured because no interaction is required from the infant. For invasive methods, animal subjects are used instead of human participants. One of the less invasive methods is the event-related potential (ERP): the brain activity of the infant is measured while she responds to stimuli on a screen. Researchers speak of visually evoked potentials (VEPs) when the target is visual in nature. In other cases brain imaging is used to get a better understanding of infant perception; positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) are common choices, and the captured activity is correlated with visual or auditory stimuli. Results from these kinds of studies show that infants (3-6 months) respond to visual stimuli comparably to adults. The visual system thus develops quickly in the first year; however, some regions of the brain mature only after the child is around 11 years old. The stimulus size, the stimulus pattern, and the speed with which it is shown can all produce differences between infants' and adults' visual perception.

The brain of infants does not change much with regard to grey matter. White matter, however, increases, and this indicates an increase in myelinization. Myelin is the fatty tissue that covers the axon and speeds neural information transmission. The midbrain and brain stem are myelinated around infancy, the primary cortical sensory areas and parietal lobe around young childhood, and the myelinization of other brain regions is completed in the early teens. Applied to the life span developmental approach, this change in myelinization is an indicator of perceptual speed, which makes this time period interesting for researchers.

The loss of connections between neurons is known as neuronal pruning. The number of connections decreases after 8 months of age, declining to almost one half by age 10 and thus resembling an inverted U-shape. One theory tries to explain the loss by claiming that specialized pathways of information flow develop that are devoted to particular functions; the segregation of the connections happens as the child matures.

An experiment was done with infants under and over 4 months of age to confirm the theory's prediction. They were shown vertical stripes in one eye and horizontal stripes in the other. The younger participants behaved as if they saw the stripes mixed together into a grid pattern, whereas the older participants reacted to such a grid as if it were novel. Thus, the younger infants had a neural representation that the older infants did not have. In the younger infant, area V1 is probably not yet segregated for these inputs: both eyes forward their information to the same cells in the visual cortex, and as a result the child cannot see the two sets of stripes in isolation.

Early Events: Disruptions in the nervous system and the brain during development, even minor ones, can lead to decreased visual and auditory functioning. In one study, students were asked to report minor difficulties surrounding their birth, e.g. prolonged labor or low birth weight. The results showed that these minor disturbances lead to poorer vision and poorer hearing and affect the development of the brain and nervous system, even though the participants were in other respects normal. Other studies have shown that individuals who were bottle-fed have slightly inferior visual and auditory functioning as well as reduced speech discrimination abilities. Mother's milk contains polyunsaturated fatty acids that are essential for the early formation of neural tissue, and these acids are often not present in bottle milk formulas.

The Visual System: The eye grows only about threefold in size after birth and thus, compared to the rest of the body, barely changes. From the very beginning the infant's eye has a retina with cones, but these are not yet fully developed at that point. At birth, the visual functioning of the retinal receptors is a little more developed in the periphery of the eye than in the central region. The parvocellular pathway originates from the small ganglion cells and is associated with colour and detailed vision; it gives a sustained response. The magnocellular pathway stems from the large ganglion cells and is related to movement and depth perception; it gives a transient response. The adult has two magnocellular layers and four parvocellular layers. The latter develop faster than the former; however, the trend changes after the infant is one year old. This could be explained by pointing out that the magnocellular pathway responds to the simpler visual dimensions of luminance and motion, whereas the parvocellular pathway reacts to complex features of the stimulus. The responses approach those of adults around age 12-13 years. The tectopulvinar system is associated with the perception of motion and eye movement control (see Chapter 3); maturation of this system is reached around 3-6 months. After birth these neurons are not only very large, but also slow, weak, and not very sensitive to direction. In the primary visual cortex (V1), the layers that receive inputs directly from the eye reach maturity and myelinization faster than the layers that receive or send information from or to the rest of the brain. In conclusion, many features of the adult visual system are already present in the infant's visual system, but not yet all of them.

Perception of Infants

Methods to test infants' perception: Three measures are commonly used to assess the orienting reflex of infants: eye movements, head turns, and visual following of a moving or suddenly appearing stimulus. (1) In preferential looking the child is put in a chamber and shown different visual stimuli on a wall or ceiling. The researcher observes, times, and notes down the reactions of the infant through a peephole, so that the child does not see him. The durations of the child's gazes toward the stimuli are recorded; if she looks longer at one stimulus, the experimenter concludes a preference for it. This measure has its limitations: if the child fixates the stimuli for the same duration, that does not necessarily mean that the child has no preference. Furthermore, this method gives no insight into why the child prefers one stimulus over another. (2) Forced-choice preferential looking is a modified version of the preferential looking method. Here, discrimination and detection are measured by showing only one stimulus at a time and recording the response with a camera or an observer. (3) A third method focuses on whether the infant can discriminate between stimuli. Here too the behaviour is monitored and only one stimulus is shown at a time, as in forced-choice preferential looking. The assumption is that the child will look at a stimulus until it becomes familiar and she starts to lose interest; this decline in looking is called habituation. Next, a different stimulus is presented. If the infant starts to look longer at this novel stimulus, one can conclude that the infant successfully differentiated between the two stimuli; this recovery is called dishabituation.
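
The habituation/dishabituation logic lends itself to a small decision rule: compare looking times early and late in the session. The window and criterion values below are illustrative; actual studies define their habituation criteria in various ways.

```python
def habituated(looking_times_s, window=3, criterion=0.5):
    # Illustrative operational rule: the infant counts as habituated when
    # mean looking over the last `window` trials drops below `criterion`
    # times the mean looking over the first `window` trials.
    first = sum(looking_times_s[:window]) / window
    last = sum(looking_times_s[-window:]) / window
    return last < criterion * first

trials = [12.0, 10.5, 9.0, 6.0, 4.0, 3.5]  # looking declines: habituation
print(habituated(trials))                  # True
# A recovery of looking when a new stimulus is then shown (dishabituation)
# is taken as evidence that the infant tells the two stimuli apart.
```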

Eye movements and attention: The visual attention system of an infant consists of three components: alertness, spatial orienting, and attention to objects. Alertness means that the infant is aroused enough to process stimuli; the processing of information improves as alertness increases. Infants around 3 months typically show a gradual increase in alertness and can thus process perceptual information better. When infants orient spatially toward something, they turn their eyes and perhaps the head or body toward the target. It is very likely that the tectopulvinar system handles these reactions.

Voluntary Eye Movements: Adults and infants differ greatly in their eye movements when facing stimuli. The saccadic eye movement is one of two voluntary types of eye movement and takes time to mature. Saccades are fast, ballistic movements from one target to another; they fix the eyes on the target fairly well, centering it on the fovea. An adult makes one large movement toward the target followed by a small corrective saccade, but an infant first needs more time before moving the eyes and then needs a number of short saccades before the eyes finally settle on the target. When the eyes follow a moving object (e.g. a child swinging on a swing), we say the eyes make smooth pursuit movements. Infants can make smooth movements with their eyes, but they struggle to follow a movement continuously to the end: they track the object for a while, lose it, and then try to pick it up again. They cannot anticipate how an object will move until the age of 8-10 weeks.

Non-voluntary Eye Movements: The process by which infants follow moving stripes with their eyes over and over again is called optokinetic nystagmus. Virtually all infants have it, so its absence indicates possible neurological problems. Optokinetic nystagmus is assumed to be automatic or reflexive in nature and is likely controlled by the tectopulvinar system.

Infants respond more consistently to stimuli in the temporal visual field (the peripheral portion of the visual field, out toward the temples of the head) than to stimuli in the nasal visual field (the central portion of the visual field, closer to the nose). This difference has weakened by the age of 2 months. Eye movements toward the temporal visual field can be elicited without the visual cortex, but moving the eyes toward the nasal field requires the visual cortex and the superior colliculus. Before age 2 months the visual cortex is not yet developed enough to exert control over eye movements, which explains the asymmetry.

The more stimuli are shown, the poorer the accuracy, the more disrupted the eye movements, and the more time is needed to execute them. Cortical mechanisms start to control these reflexes in 6-7-month-old infants. Moreover, 6-7-month-olds prefer novel stimuli, whereas 3-month-old infants prefer to look again at the same stimulus.

Visual Acuity and Accommodation: Optokinetic response or preferential looking procedures are used to test the visual acuity of infants. Visual acuity is about 20/800 at birth and improves steadily from then on. The accommodation abilities of infants are also limited, because they cannot change the shape of the lens much. Both the accommodation and the poor acuity improve quickly, but they only reach adult-like levels around 7 years.
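
Snellen notation like 20/800 converts directly into a fraction of standard adult acuity, as the small helper below shows; the function itself is hypothetical, but the arithmetic is standard.

```python
def decimal_acuity(snellen_numerator, snellen_denominator):
    # Snellen 20/800 means the infant resolves at 20 ft what a standard
    # adult observer resolves at 800 ft; as a decimal fraction:
    return snellen_numerator / snellen_denominator

print(decimal_acuity(20, 800))  # 0.025 -> roughly 1/40 of standard acuity
print(decimal_acuity(20, 20))   # 1.0   -> standard adult-level acuity
```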

Brightness and Color: Three-month-old infants are about 1/10 as sensitive to light as adults, in dark (scotopic) as well as light (photopic) settings, and one-month-old infants are about 1/50 as sensitive to light as adults. Both adults and infants are most sensitive to middle (green) wavelengths and less sensitive to longer (red) and shorter (blue) wavelengths. Infants can discriminate long and middle wavelengths fairly well; however, 1-month-olds have a hard time discriminating short wavelengths (similar to colour-blind individuals). Around 2 months they can make such discriminations.

Pattern Discrimination: The optical and neural bases of pattern vision appear to function prior to birth, because premature infants like to look at patterned stimuli. In general, infants show a preference for stripe patterns (over squares), high contrast between figure and background, larger patterns, patterns with many elements, and curved patterns. Stimuli are preferred most when they are moderately complex, hence neither very simple nor too complex; this preference changes with age. At the age of 4 months infants can discriminate size, orientation, position and feature changes, but they cannot detect changes in configuration. Around the same age they seem to respond to the overall pattern of a figure by grouping different aspects of the pattern according to the Gestalt principles. However, infants still have trouble applying these principles to complex stimuli, and they might need prior experience with the patterns before showing grouping principles.

Object Perception: After birth, infants prefer face-like stimuli over scrambled features and over the faces of other primate species. Some researchers have theorized that the brain has specialized parts for facial recognition. Support comes from prosopagnosia, a condition in which people cannot identify faces and, in very extreme cases, cannot recognize their own face. Furthermore, infants look longer at attractive faces as early as 3 days after birth. One month after birth they can discriminate their own mother's face from that of a stranger by orienting toward her hair and her eyes; the nose and mouth can be changed without the child noticing. All this indicates that infants can recognize differences, but that the features are weighted unequally.

Some researchers have investigated how infants integrate the elements of a partly hidden object in their visual image. Figure 15.9 shows one object in front of a rod that is either broken or occluded. The infants looked longer at the broken rod, indicating that they perceived it as different from the occluded rod. They are therefore able to perceptually complete an object that is partially occluded from sight.

Infant Hearing: At birth the newborn is already able to hear; however, the auditory cortex is not very developed in the first year. The thresholds of infants around the age of 6 months are higher than those of adults. This is most clearly observed below the 10,000 Hz frequency range: adults outperform infants in this range, but at higher frequencies the difference between adult and infant shrinks. Bone conduction in the ear works best for low-frequency sounds. Applied to speech, vowels are more easily heard than consonants, because vowels have lower frequencies than consonants. Rhythmic patterns are also readily heard. Infants have preferences for voices they heard in the womb; this is also true for music they were exposed to.

Newborns will turn their head in the direction of a high-pitched voice. Adults often speak to their children in a high-pitched voice; this register used to be known as motherese but is called infant-directed talk today. In return, the baby smiles and uses vocalization to engage with the talker. Infants also group sounds in patterns similar to those of adults. In one study, 6-9-month-olds listened to a sequence of three high-pitched and three low-pitched tones (HHHLLL). The researchers wanted to know how the infants would react if a pause was introduced in the sequence at a usual point (HHH'LLL) or at an unusual point (HHHL'LL). The infants, like adults, paid more attention to the unusual break in the sequence than to the usual one, indicating its novelty. Auditory grouping refers to the perceptual organization of sounds on the basis of their temporal and frequency relationships.

Touch, Pain, Taste, Smell: In the womb the foetus develops touch and heat sensitivity, as shown by the reflexes of the infant. One of these reflexes is the rooting response: infants turn their head in the direction of a touch to the cheek. The pain levels of infants and adults are very similar and can be inferred from the infant's pattern of facial muscle activity. Female infants appear to be more sensitive to pain than male infants. The taste receptors also develop very early, 13 weeks after conception. The responses to bitter and sweet tastes are negative and positive, respectively. From an evolutionary perspective this has advantages, because sweet food has nutritional value. Infants younger than 4 months are insensitive to salt. Early experience with a taste seems to influence the diet of children, since they show a preference for that taste. Researchers have also done olfactory tests with infants. Children show changes in heart rate, respiration and general body activity when alcohol, vinegar, anise oil or asafoetida is placed under their noses. They move away from toxic and harmful odours and toward pleasant odours. They do well on olfactory discrimination, maybe even better than adults. Furthermore, they appear to learn from their olfactory experience: one study found that children move toward things that smell like their mother's amniotic fluid.

Perceptual Change through Childhood

Eye movements: Integration happens when the mental models (schemata) of the child contribute to understanding the perceptual information available in a given context. This requires that the child be able to encode the perceptual information; the perceptual information processing mechanisms therefore make use of attention and memory.

Children under the age of 2 months do not fixate the most informative part of a stimulus, but rather view parts of it such as a corner. This changes at the age of 2 months, when the child starts to spend less time on such features and fixates the internal elements of the stimulus. The child under 2 months thus shows what are known as 'sticky saccades' (once the infant has apprehended a stimulus, it finds it difficult to disengage its attention and move on to the next target), while the infant older than 2 months shows 'intelligent scanning' (planned and efficient eye movements that pick up the most information as quickly as possible). Interestingly, 3-4-year-olds show the same intelligent scanning. It seems that only at the age of 6-7 years has the child learned to systematically scan the outer portions of a stimulus with occasional eye movements to the interior.

In the study by Vurpillot (1968), children between the ages of 2 and 9 years were shown different houses and asked whether the houses were the same. The youngest children performed worse than the older ones and showed a lack of systematic search. Subsequent research tried to explain this; some concluded that visual capacity is the same across these ages, but that children deploy their attention and information search differently, since the task demanded certain eye movements.

Further support comes from a study that compared 4- to 9-year-olds. The children were shown a duck made out of vegetables. The 4- to 5-year-olds saw only the vegetables, the 7-year-olds saw both the vegetables and the duck, and only the 8- to 9-year-olds were able to relate the parts to the global organization. This shows the developmental shift from parts to globally organized arrangements.

Orienting and Filtering: Covert orienting is a shift in attention without moving the eyes, and it changes with age for direct and indirect cues. It takes children longer to reorient their attention than adults. Filtering is the ability to ignore irrelevant stimuli in the environment while more task-relevant stimuli are being processed. Children cannot filter well; they often get distracted by irrelevant stimuli. This filtering process can also be studied through graphical stimuli known as visual-geometric illusions: line drawings in which the actual size, shape, or direction of some elements differs from the perceived size, shape, or direction. The Mueller-Lyer illusion and the Ponzo illusion are famous examples. Some argue that observers experience the Mueller-Lyer illusion because they pay attention to features such as its oversized wings; indeed, if participants are asked to attend only to the line and ignore the wings, the illusion is strongly reduced. Children in particular are susceptible to illusions, but these illusions dwindle until age 25 and then do not change further.
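
Since the Mueller-Lyer figure comes up repeatedly in this chapter, the following sketch (my own, using matplotlib; all proportions are arbitrary) draws the two wing configurations so the oversized wings mentioned above can be inspected:

```python
# Draws the two Mueller-Lyer lines: equal-length shafts whose wings
# point outward (looks longer) or inward (looks shorter).

import matplotlib.pyplot as plt

def draw_line_with_wings(y, wings_out):
    """Draw a horizontal line of fixed length with outward or inward wings."""
    x0, x1, w = 0.0, 4.0, 0.5
    plt.plot([x0, x1], [y, y], "k-")          # the shaft
    d = w if wings_out else -w
    for x, sign in ((x0, -1), (x1, 1)):       # wings at both endpoints
        plt.plot([x, x + sign * d], [y, y + w], "k-")
        plt.plot([x, x + sign * d], [y, y - w], "k-")

draw_line_with_wings(2.0, wings_out=True)     # appears longer
draw_line_with_wings(0.0, wings_out=False)    # appears shorter, same length
plt.axis("equal")
plt.axis("off")
plt.show()
```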

Odor: A study investigated how well children between 8 and 14 years discriminate between various odors. Both age groups did equally well; however, the younger children could not name the odors. It can be concluded that children are still learning the associations between odors and specific objects.

Perceptual Change in Adults

Around age 40, human beings show decreased functioning as sensory receptors age and neural efficiency drops. Perception and all aspects of an individual's life are interrelated, so that even small differences in vision or hearing can affect the personality, creativity and/or intelligence of an individual. There is a loss of myelinated axons in the visual cortex, and as a result visual responses become slower. Unfortunately, glasses or other compensatory methods cannot correct for this.

Visual functioning and aging: Presbyopia is the reduction in the accommodative range of the eye; as a result, people in their late 40s start to wear reading glasses in order to see the details of objects close to them. A decrease in the efficiency of the eyes can also be seen in the lens, which becomes more yellow and darker with increasing age. The threshold for light detection changes as the individual becomes older: the minimum threshold is reached equally fast, but the threshold for maximum sensitivity changes considerably. Visual acuity decreases with age, too. Older individuals need more light to see details; the contrast needs to be stronger, and as a consequence the contrast threshold increases for elderly people. The increased loss of binocular depth perception in older individuals is often due to optical factors. The neural system that processes depth and movement information (the magnocellular pathway) is more strongly affected by aging than the neural system concerned with detail and color vision (the parvocellular pathway).

Aging, physical conditions and color vision: Fewer short-wavelength cones exist in the eyes of elderly individuals. Dyschromatopsias are conditions involving problems of color discrimination; they can occur if the individual has been exposed to certain solvents and neurotoxins. Some diseases cause a loss of short-wavelength vision, known as blue dyschromatopsia. People with diabetes, alcoholism, Parkinson's disease or glaucoma often show this type of color vision loss. Usually such people can still see middle and long wavelengths. In some cases, however, people cannot see red and green; the cause is either degeneration of the cones or an optic nerve disease.

Aging and attention: Automatic attention shifts towards a target that appears suddenly in the visual field remain efficient. However, if attention has to be shifted voluntarily, reorientation is slow. Elderly people and college students perform equally well on tests of visual filtering unless there is ambiguity about where the target will be displayed and the stimuli are presented far into the visual periphery. One explanation is that older people do worse because they have a smaller visual field, which in turn could be due to a loss of acuity. However, this explanation holds only for the farthest edge of the field (around 70°), and visual search tasks present stimuli well within this range. An alternative explanation is that elderly individuals use strategies that are inefficient for this kind of task, focusing on global rather than local features of the stimulus (similar to young children). Furthermore, older people have a hard time ignoring Gestalt organization features of the stimulus, which also causes inefficient searching. The useful field of view (UFOV) refers to the area of the visual field that is functional for an observer at a given time and for a given task. When the task requires detecting a target among distracters, the size of people's UFOV correlates with the number of traffic accidents they have.

Age effects on hearing: When people get older, their ability to hear diminishes. Hearing impairments usually start to appear during middle age and are more likely after age 60. The trend is even more drastic for the oldest individuals: 75% of 70-year-olds show some kind of hearing problem. Hearing impairments become handicapping when speech is no longer well understood. A slight hearing handicap occurs at a sensitivity loss of 25 dB, whereas a marked hearing handicap occurs at 55 dB and includes problems with recognizing loud speech. Hakstian suggested observing differences in hearing sensitivity among yourself, your friends, and older family members. The hearing loss often results from decreased flexibility of the inner ear mechanisms as well as loss of or damage to the hair cells in the ear. Moreover, the auditory neural pathways become slower and the efficiency of the auditory cortex decreases.
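
As a rough illustration of the dB cutoffs just mentioned, here is a minimal classification sketch; the label for the intermediate range is my own shorthand, since the text only names the 25 dB and 55 dB points:

```python
# Classifies a hearing sensitivity loss (in dB) using the 25 dB and
# 55 dB cutoffs from the text; intermediate wording is my own summary.

def hearing_handicap(loss_db):
    """Return a rough handicap category for a given sensitivity loss."""
    if loss_db < 25:
        return "no significant handicap"
    elif loss_db < 55:
        return "slight hearing handicap"
    else:
        return "marked handicap (trouble recognizing loud speech)"

for loss in (10, 30, 60):
    print(loss, "dB ->", hearing_handicap(loss))
```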

Age effects on the other senses: The touch sense is less sensitive in the elderly, and pain sensitivity for some external stimuli also changes. For instance, pain from mechanical pressure is somewhat reduced, but pain from hot stimuli does not change. On average, odor sensitivity decreases to a large extent. For example, once older individuals adapt to a certain odor, they have trouble returning to their original level of sensitivity. Young adults can recognize odors they smelled a week earlier, while older individuals may not recognize an odor minutes after exposure. There appears to be a correlation between decreased sensitivity to odor and to taste in older people. As a consequence, they might not be able to recognize food. Not being able to smell or taste food can decrease appetite and in turn cause reduced food intake.

Global changes in perceptual performance: Having examined the different changes in the elderly, it can be said that (1) neural responses slow down, accompanied by an increasing persistence of the stimulus in the neural representation, and (2) the control over incoming information changes. The elderly should try to maintain working memory and control over the information flow, since these are important mechanisms for consciousness. If a person does not have proper control over them, irrelevant information will be processed into consciousness. This is visible when elderly people suddenly switch topics in a conversation.

What is the perceptual learning approach to development? - Chapter 16

Several things affect our perception, for instance our history, experiences, knowledge and hypotheses. Usually, people do not think that these factors influence them, even though it is known that previous experiences play a big role in the perception of certain situations.

Experience and Development

Induction demonstrates the strongest interaction between perception and experience: without the experience, the stimuli would not be perceived at all, and the level of perceptual ability would not change or develop in any way. Experience is thus absolutely necessary for the perceptual ability to develop. Maturation, by contrast, shows the weakest effect: experience is not required to reach the highest level of perceptual ability, so no interaction is found and no experience is needed for the ability to develop. Through enhancement, the perceptual ability is improved by experience; the ability would also develop without experience, but only with experience does it reach its highest level. Facilitation is the case when experience affects the perceptual ability only before the final level is reached; experience speeds development, but the final level itself does not depend on it. Lastly, maintenance is the case when experience is not required to reach the peak of perceptual ability, but is needed to maintain that level; otherwise the level of perceptual ability decreases. Thus, the ability is present without the experience, but experience is needed to keep it up.
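
Because the five relationships are easy to confuse, here is a compact restatement in code form; the one-line glosses are my own paraphrases of the definitions above, not the book's wording:

```python
# Summary of what experience contributes under each of the five
# experience/development relationships described in the text.

experience_effect = {
    "induction":    "without experience the ability never develops at all",
    "maturation":   "experience has no effect; the ability unfolds on its own",
    "enhancement":  "the ability develops anyway, but experience raises its final level",
    "facilitation": "experience only speeds development; the final level is unchanged",
    "maintenance":  "the ability develops fully without experience, but decays unless experience keeps it up",
}

for name, gloss in experience_effect.items():
    print(f"{name:12s} -> {gloss}")
```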

Varieties of perceptual learning

One method of perceptual learning is attentional weighting: our perceptual mechanisms adapt to tasks and environments by increasing the amount of attention they pay to important stimulus dimensions (different aspects of a stimulus) and features (one particular aspect of a stimulus), while paying less attention to irrelevant parts of the stimulus. As a result, things are recognized faster and decisions take less time. Differentiation is another form of perceptual learning, in which stimuli that were once perceived as the same become distinguishable. For example, faces can be identified better when the individual is familiar with them. Once the differences are learned, it is hard to ignore them. One study showed that Caucasian observers were faster at categorizing African American faces than Caucasian faces, presumably because they perceive Caucasian faces as more different from one another. Note that differentiation can in some cases be taught, like a skill. Unitization is also a form of perceptual learning, in which a complex stimulus configuration comes to be treated as a single functional unit in consciousness. For instance, chess players see the board as a whole rather than as single pieces, as do weather experts. However, unitization can also impair perceptual performance: for example, a picture is harder to recognize upside down, and this is particularly true if the person is familiar with the object, because then the difficulty of recognition is increased. Finally, stimulus imprinting is a method of perceptual learning in which neural receptors change as a result of interaction with the environment. Here again, the speed and accuracy with which stimuli are processed can increase.
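
Attentional weighting lends itself to a simple computational illustration: a distance measure in which learned weights stress diagnostic stimulus dimensions. The weighted-Euclidean form below is borrowed from standard models of similarity and categorization, not from this chapter, and all numbers are made up:

```python
# Attention-weighted distance between two stimuli represented as
# lists of feature values; larger weights make a dimension matter more.

import math

def weighted_distance(a, b, weights):
    """Weighted Euclidean distance over the stimulus dimensions."""
    return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

stim1, stim2 = [1.0, 0.2, 0.7], [0.4, 0.3, 0.7]
equal   = weighted_distance(stim1, stim2, [1/3, 1/3, 1/3])
learned = weighted_distance(stim1, stim2, [0.8, 0.1, 0.1])  # attention on dimension 0
print(equal, learned)  # learning shifts weight onto the diagnostic dimension
```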

Restricted and Selective Rearing: A very direct method of estimating how experience influences perceptual development is to deprive the organism of sensory input immediately after birth. Restricted rearing is the deliberate restriction of the experience to which the organism is exposed from birth; it strongly reduces stimulus imprinting, since fewer opportunities for it are available. Selective rearing, on the other hand, is a technique that biases, but does not eliminate, certain perceptual abilities. The claim that experience plays a crucial role in the normal development of neural receptors has been supported in many restricted rearing studies. For example, the visual system shows disruption along the visual pathways, including the retina, the superior colliculus, the lateral geniculate nucleus, and the visual cortex. Some areas are affected more strongly than others; for instance, magnocellular ganglion cells are lost without visual experience, while the parvocellular pathways are unaffected. Dark rearing also appears to influence auditory development negatively, so it seems that visual and auditory sensations require experience simultaneously. Sometimes scientists cover one eye and use the other as a comparison. Time is an important factor, more precisely the time when the deprivation occurs. Animals deprived before the age of 3 months showed strong disruption of binocular responses, while almost no effects were found for animals older than 3 months. A critical period corresponds to a period of maximal growth and development in the nervous system. Disruptions during this period, even minor ones, can affect final functioning. The critical periods vary for different brain areas.

In Figure 16.2, it can be seen how a cat is selectively reared by exposing it only to stripes and having it wear a collar so that the animal cannot see its own limbs. Environmental surgery refers to what happens in the nervous system after selective rearing. Neuronal responses in the visual cortex change, as can be seen in Figure 16.3, where a cat that was exposed only to vertical lines largely lost its ability to respond to horizontal lines; such a cat shows a preference for vertical lines. The results might be explained through stimulus imprinting: the environment of the animal prevented any input that would allow ‘horizontal receptors’ to become imprinted. Applying this to humans, cultural as well as educational differences might lead to differences in how the world is seen. For example, it was found that the left fusiform gyrus responds more strongly to letters than to digits or shapes, yet letters and digits are defined by the culture.

Perceptual Effects: Cataracts (formations on the cornea or lens of the eye that let some light through, but make patterns invisible and everything blurry) in infancy can be considered a form of restricted rearing in humans. Cataracts can be removed, and when this is done vision returns, but with deficiencies; for example, familiar objects cannot be identified through vision alone, and these patients are also more likely to be distracted by irrelevant visual stimuli. In some cases the cataract affects only one eye, which is called monocular restricted rearing. The visual field is the region of the outside world to which an eye will respond, measured in degrees around the head (see Figure 16.4). The binocular region is the region that both eyes can see, so one or both eyes should be able to see an object in this region. An animal that is reared in the dark cannot easily detect objects in that region. Similarly, an animal reared with one eye occluded does not respond to objects in the binocular region; this seems to hold for humans too. One could therefore conclude that depriving one eye also diminishes the visual region of the unaffected eye.

Astigmatism occurs when the cornea of the eye is not perfectly spherical, being flatter in some places and more curved in others. As a result, contours of some orientations are focused more sharply than others. For instance, with horizontal astigmatism vertical lines can be seen more accurately, whereas horizontal lines appear blurry. If astigmatism occurs early in life, the losses will be enduring. Anisotropic means reacting differently to stimuli depending on their orientation. The normal visual system, for instance, has a preference for vertical or horizontal stimuli over diagonal stimuli, called the oblique effect. The oblique effect is believed to be partly genetic, but other studies have shown that the environment affects it too. Cree Indians were compared with North American students. The Cree were raised in traditional cook tents in summer (meechwop) and in lodges during the winter (matoocan), whereas the students were brought up in typical buildings. The Cree were only rarely exposed to purely vertical or horizontal contours, while the North American students saw such patterns on a daily basis. Consistent with the selective exposure hypothesis (acuity develops best for the contour orientations one's environment actually provides), the students showed a reduction in acuity for obliquely oriented contours, whereas the Cree did not. Training can improve the ability to see obliquely oriented stimuli. People with strabismus (crossed or misaligned eyes) have impoverished binocular vision; sometimes they lack stereoscopic perception or cannot see through one eye (amblyopia ex anopsia). This can be avoided through surgery during the critical period, hence within the first 2 years.

Sensory-Motor Learning

The body plays an essential part in the process of perceptual development. Exafference occurs when the person is passive while exposed to stimulation, whereas reafference means that the stimulation changes as a result of the individual's own movements. It turns out that reafference is a requirement for the development of accurate visually guided spatial behavior. Richard Held and Alan Hein demonstrated this by rearing kittens in the dark until they were 8-12 weeks old. The experiment is illustrated on page 486 (Figure 16.5). After the period of dark rearing, the kittens were put in a carousel in which one kitten could move actively while the other was carried passively in the same direction and at the same speed. The active kitten experienced the stimulation as changing because of its own movements, while the passive kitten received the same stimulation without producing it. Later tests of depth perception showed that the active animal performed equivalently to normal cats, whereas the passive kitten barely demonstrated any depth perception. Moreover, if only one eye receives active exposure, then only this eye will show normal depth perception. It can therefore be concluded that reafference is a very specific process.

From this study and others, it can be proposed that human infants need experience in order to develop normally. For instance, babies watch their hands extensively in the first month after birth. At the same time, their reaching is poor in the beginning but improves over time. Here again, it appears that experience of how the body moves is crucial for the development of sensory-motor coordination.

Perceptual rearrangement

Stratton's technique involves optical rearrangement of the spatial relations in the world using distorting lenses set in spectacles; one such arrangement turned the world upside down. Participants wore these spectacles for several weeks and in time seemed to adapt to the new view of the world: they started to bicycle (after 3 days) and ski (after some weeks). Adapting to the new form of vision was facilitated by the notion of gravitational direction and by familiar events and objects. After the spectacles were removed, participants initially felt uneasy, but it took them only about an hour to readapt. The wedge prism is another, less extreme technique, in which objects are shifted by a few degrees. When the wedge-shaped piece of glass is removed, the world should still appear shifted by some degrees: asked where an object is, the participant should point some degrees away from its actual position. This is exactly what happens, and the effect is called an aftereffect. It can be understood as evidence for perceptual rearrangement.
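
The logic of the wedge-prism aftereffect can be captured in a toy model (my own simplification; the chapter only states that pointing errors mirror the prism shift after removal):

```python
# Toy model of wedge-prism adaptation: the observer learns a correction
# that cancels the prism shift, then misapplies it once the prism is gone.

PRISM_SHIFT = 10.0  # degrees the prism displaces the visual scene (arbitrary)

def pointed_direction(actual, wearing_prism, correction):
    """Direction the observer points at, given a learned correction."""
    shift = PRISM_SHIFT if wearing_prism else 0.0
    return actual + shift - correction

target = 0.0
print(pointed_direction(target, True, 0.0))          # before adaptation: off by +10
print(pointed_direction(target, True, PRISM_SHIFT))  # after adaptation: accurate
print(pointed_direction(target, False, PRISM_SHIFT)) # prism removed: -10 aftereffect
```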

Reafference is important for learning to adapt to the new distortion, as one experiment showed. Some participants had to walk around for an hour while others were pushed in a wheelchair for the same time; both groups wore wedge prisms. The active participants showed adaptation to the distortion, but the passive participants did not. This can be explained through error feedback, which informs the observers of the direction and extent of the distortion. The timing and amount of feedback are important factors in adaptation. Visual feedback increases adaptation the more natural and realistic it is; therefore, personal interaction with the environment is usually superior to observational learning. Only in rare cases, for instance when passively watching an object approach, does observation alone lead to adaptation, and then only if the distortion seems very real.

Illusion Decrement: Learning to compensate for perceptual errors is another form of perceptual learning. Here the error is not optical in nature, and individuals are not consciously aware of their perceptual errors or of any perceptual changes. For instance, the illusion produced by the Mueller-Lyer figure can be counteracted by scanning the horizontal line for 1 minute on 5 occasions; as a result the illusion diminishes to about 40% of its initial strength. This is called illusion decrement.

Context and meaning

All forms of perception can be uncertain. The internal image we have in our minds could have been caused by many different external stimuli; it is therefore astonishing that our perceptual experiences are unambiguous most of the time. What we perceive appears to be the result of a decision-making process in which all available information is scanned in order to determine what the external stimulus is. According to the transactional viewpoint, any current perceptual experience consists of a complex evaluation of the significance of the stimuli reaching our receptors; the world we experience is the result of perceptual processing rather than the cause of the perception. In line with this viewpoint, if our expectations or our understanding of the situation changes, then our perceptual experience will also change. Irvin Rock introduced the notion of indirect perception, where ‘indirect’ means that extensive computation and intelligence are needed and that the process is not automatic.

A famous figure shows how black dots on a white background can be perceived as a dog, even though this conclusion is a mental construction and not given in the stimulus. The effect is weaker for less familiar objects. Once the meaningful organization has been recognized and stored, it becomes apparent quickly when you look at the figure again. To explain such effects, one has to differentiate between information that is registered (it triggers perceptual processing without conscious awareness) and information that is apprehended (the experience is present in our consciousness). Perceptual priming, for instance, falls under the rubric of registration. A prime can lead to active hypothesis testing in the mind about what the object is; this continues until the stimulus can be categorized, and once categorized the stimulus likely becomes apprehended. Through this mental process, perception can be changed in ways that no longer correspond closely to the actual stimulus. For instance, a researcher showed people a picture of a woman with a third eye on her forehead. One observer believed that the third eye must be a curl, because the observer unconsciously holds the notion that people do not have three eyes, and, as mentioned earlier, our expectations influence our perceptions.

In the Ebbinghaus illusion, a central circle is surrounded by larger circles while an identical circle next to it is surrounded by smaller circles. People usually perceive the central circle as smaller when it is surrounded by larger circles and as bigger when it is surrounded by smaller circles, even though both central circles are the same size. This illusion also persists for meaningful objects, such as dogs. The illusion is strongest when the objects surrounding the test figure are identical to it. It decreases when the surrounding objects are from the same class but dissimilar (in the case of dogs, e.g. a poodle, a Dalmatian, and so forth). The reduction becomes even stronger if the central object is from the same category (e.g. animal) but a dissimilar class (e.g. a dog in the center surrounded by horses). No illusion is found for objects (e.g. a dog) surrounded by distant, irrelevant objects (such as shoes). Evidently we first classify and identify items to make sense of the context for the target; our final conscious experience is thus affected by experience.

Eyewitness Testimony: Our language and expectations can alter reports of what we have seen. To illustrate this distortion, an experiment was done in which subjects were shown a stimulus and asked to reproduce the drawing a few minutes later. During the presentation, however, different participants were given different verbal labels: the stimulus was described as either a broom or a rifle. The manipulation in this experiment was thus language. When asked to draw the stimulus from memory a few moments later, participants drew something that resembled either a broom or a rifle, respectively. Their memory was distorted, and applying this knowledge to eyewitness reports, one can question their reliability: expectations or language (e.g. questioning by the police) often influence what is reported. Loftus is an expert in this field of memory distortion. In one of her studies, participants watched a video clip of a car accident and were later asked either ‘How fast was the white sports car going while traveling along the country road?’ or ‘How fast was the white sports car going when it passed the barn while travelling along the country road?’. No barn was present in the video clip; nevertheless, when the participants returned one week later and were asked ‘Did you see a barn?’, 17% of those who had received the barn question fell for the suggestion, compared to 3% in the other condition. This kind of memory distortion has its limits, however.

Some studies showed that once something is clearly perceived, the memory resists later input that could change it. So it seems that gaps in memory are what make it prone to distortion. For instance, children show a high accuracy rate when identifying a person they saw before, but if the target is not in the lineup, children are more likely to misidentify someone. This could be due to the expectation that the guilty person must be among the people in the lineup, which then leads to an erroneous identification. Feelings of certainty are not a reliable indicator of the actual events, nor does memory improve when witnesses are asked to focus on details. Somewhat structured questions, however, may prevent some errors in eyewitness reports.

Environmental and Life History Differences

The environment and culture can also influence expectations and therefore perceptual learning.

Picture Perception: There are differences between three-dimensional scenes and two-dimensional images: some things cannot be reproduced in pictures (e.g. real size, texture, colors, etc.). Some theorists conclude that pictures are similar to statements in language, because they are created and interpreted according to a set of conventions in any given culture; according to this theory, the perception of pictures must be learned. Two skills are required to process pictures: first, the objects in the picture have to be identified, and second, the person must be able to interpret the three-dimensional arrangement behind the picture. Two experimenters used one of their own children as a subject in a 19-month experiment in which all pictures were removed from the house (no magazines, no television, no can labels, and so on). When tested, the child showed no difficulty identifying pictures of common items. They concluded that humans do not need to learn to treat patterns and drawings as representations of real-world objects. This contrasts with what has been found in animal studies, namely that pictures are not treated as equivalent to real objects. Individuals reared in isolated cultures with no exposure to photos have a hard time interpreting pictures, especially black-and-white images. There may be an overall ability to identify objects in pictures, but interpreting the implied spatial relationships seems to be a matter of educational and cultural influence.

Cues for depth include linear perspective, interposition and texture gradients. Cues that mark an image as a flat picture are the absence of binocular disparity and the fact that accommodation and convergence are the same for all items in the picture. Hudson created a set of pictures that depicted certain combinations of pictorial depth cues. Interposition is the blocking of more distant objects by closer objects. A second pictorial depth cue is familiar size, for which we use our previous knowledge and expectations about how large or small an object should be in the real world. With Hudson's picture set, two-dimensional and three-dimensional responses were assessed. For the picture in Figure 16.13, a correct three-dimensional response would be that the hunter is trying to kill the antelope; a two-dimensional response would be that the hunter is attempting to spear the elephant, because the elephant is closer to the spear on the flat picture. In the latter case, both the interposition and familiar-size depth cues have been ignored. Africans, who interact less with drawings and mass media, seem to have more difficulty than Western observers in seeing depth cues in pictures. Exposure to drawings and the addition of more depth cues improve three-dimensional responding. Different depth cues exist, and some are learned better than others. Culturally determined conventions are shown best in pictures that represent motion.

Of course, the motion in such pictures is not real but has to be interpreted as such. Western children can read motion into pictures by about the age of 4 years. This is not true in some non-Western cultures, where observers do not see the implied motion, although this changes with education, urbanization and exposure to pictures. Much is brought into picture interpretation by expectations. We show boundary extension: a tendency to recall or draw information that was not in the picture but was likely to have existed just outside the camera's field of view. This supports the assumption that our perceptual experience consists of the ‘best bet’ as to what was actually out there: some things have a high likelihood of being related to each other, and so they are added to the conscious percept to make the scene complete.

Illusion and Constancy: The carpentered world hypothesis begins with the observation that in the urbanized Western world, rooms and buildings are usually rectangular, many objects have right-angled corners, city streets have straight sides, and so forth. As a consequence, people raised there may depend more on certain depth cues than people who are less exposed to such stimuli (for instance in rural areas). Segall, Campbell, and Herskovits compared the responsiveness of individuals in carpentered vs. non-carpentered environments to certain types of depth cues. For this purpose they used pictures similar to Hudson's as well as visual-geometric illusions, because these stimuli are thought to depend on a three-dimensional interpretation of the pattern. They conducted their study in Africa and Illinois and found that the Mueller-Lyer illusion was greater for the more urban groups. This is taken as evidence that the absence of experience with certain types of depth cues impairs other perceptual functions (e.g. size constancy) that depend on depth perception.

Speech: Our auditory system is exposed to ongoing language. Each language uses a small set of word-differentiating phonemes, the functionally distinctive sounds of that language. Sounds that are treated as different in one language may not be distinguished in another; for instance, the sounds “r” as in rope and “l” as in lope are two different phonemes in English, but in Japanese they are not distinguished. Here again, the learned language affects auditory perception. Adults who have learned only a single language have problems discriminating certain linguistic contrasts characteristic of other languages; this can even be the case for a second language in which they are fluent. Infants are able to hear sound pairs that are not used in their native language, but after the first year they start to respond selectively to the language spoken in their environment and lose the ability to discriminate phonemes from other languages. Through short-term intensive training, adults may be able to regain some ability to distinguish phonemes of non-native languages. A very interesting finding is that people who in their early years were surrounded by a language (e.g. Hindi) that they barely understood (because they learned e.g. English as their native language) are able to discriminate phonemes of that unlearned language; their early experience with the non-native language is assumed to be the reason. The inability to discriminate phonemes can also result from clinical conditions, e.g. selective hearing loss, which affects the auditory environment during the early critical period.

Effects of Occupation: Not only between but also within cultures there is selective exposure to different sets of environmental stimuli, e.g. in different occupational settings. Such experiences have effects at cognitive and physiological levels. Figure 16.15 shows the effects of noise on hearing for different occupations: factory workers have a higher average threshold than office workers or farmers. It was also found that the impact of noise is greater for men than for women and increases with age, and that the greatest losses are in the higher frequencies. One study showed that performers of rock music may also suffer hearing losses (Figure 16.16), and students who worked at venues with loud music suffered similar losses. Other occupations may expose the eyes to high-intensity light and thereby cause permanently impaired vision.

Myopia is a condition in which the length of the eye is somewhat too great, resulting in poor distance acuity. Two billion people are affected by it, and it is promoted by exposure to ‘near work’ such as reading or other activities restricted to close distances. When individuals strain to see at close range, pressure builds in the eyes and the crystalline lens overaccommodates; if this goes on for long, the eye lengthens, which in turn leads to myopia. A person can also have temporary myopia after working for a short span on a near task. Eyeglasses can correct for myopia and are so common that no one is surprised to see people wearing them. It has been falsely claimed that training can reverse nearsightedness: if people become better at recognizing familiar objects, it is due to attention and not to improvement of the eyes.

Perceptual Set: A perceptual set refers to the expectancies or predispositions an observer brings to the perceptual situation. Such a set acts like a form of selective attention, because only certain parts of the input are processed rather than the whole. In one study, police officers and civilians were shown scenes over several hours and then asked to categorize the people who appeared and the actions they engaged in; the police suspected more thefts than the civilians did. In another study, a violent scene was shown to one eye and a nonviolent scene to the other (a stereogram) to create perceptual conflict. Such an ambiguous situation is resolved by favoring one scene, which then dominates perception. Police students and non-police students participated: advanced police students saw the violent scene twice as often as beginning police students and university students. Occupation therefore seems to affect perception in ambiguous situations.

Perceptual sets can be found in all aspects of perception, e.g. odor. If a strawberry solution is colored red, people believe it smells more strongly of strawberry than when it is colorless. The same holds for Coca-Cola drunk colorless versus its normal color: the expectation that the drink should be brown affects the perception of it. Patients with anorexia nervosa see themselves as too fat, which is analogous to a perceptual set: these individuals hold expectancies, fed by mass media, about how they should look.
