Introduction to Biological Psychology, Part IV, Chapter 9: Perceiving Sound - Our Sense of Hearing


Perceiving Sound: Our Sense of Hearing

Rachel Shenton, the actress and activist, starred in, wrote and created The Silent Child (2017), an Oscar-winning short film based on her own experiences as the child of a parent who became deaf after chemotherapy. Her work illustrates the challenge of deafness, which in turn demonstrates our reliance on hearing. As you will see in this section, hearing is critical for safely navigating the world and communicating with others. Consequently, hearing loss can have a devastating impact on individuals. To understand the importance of hearing and how the brain processes sound, we begin with the sound stimulus itself.

Making waves: the sound signal

The stimulus that is detected by our auditory system is a sound wave: a longitudinal wave produced from changes in air pressure caused by the vibration of objects. The vibration creates regions where the air particles are closer together (compressions) and regions where they are further apart (rarefactions) as the wave moves away from the source (Figure).

Fig. Sound waves created by the vibration of an object

The nature of the sound signal is such that the source of the sound is not in direct physical contact with our bodies. This is different from the bodily senses described in the first two

sections because in the senses of touch and pain the stimulus contacts the body directly. Because of this difference, touch and pain are referred to as proximal senses. By contrast, in hearing, the signal originates from a source not in direct contact with the body and is transmitted through the air. This makes hearing a distal, rather than proximal, sense. The characteristics of the sound wave are important for our perception of sound. Three key characteristics are shown in Figure: frequency, amplitude and phase.

Fig. Key characteristics of sound are a) frequency, b) amplitude and c) phase

The frequency reflects the time it takes for one full cycle of the wave to repeat, and is measured in Hertz (Hz). One Hertz is simply one cycle per second. Humans can hear sounds with a frequency of 20 Hz up to 20,000 Hz (20 kHz). Examples of low frequency sounds, which are generally considered to be under 500 Hz, include the sounds of waves and elephants! In contrast, higher frequency sounds include the sound of whistling and nails on a chalkboard. Amplitude is the amount of fluctuation in air pressure that is produced by the wave. The amplitude of a wave is measured in pascals (Pa), the unit of pressure. However, in most cases when considering the auditory system, this is converted into intensity and discussed in relative terms using the unit of the decibel (dB). Using this unit, the sounds humans can typically hear span a wide range; above the top of this range, sounds can be very harmful to our auditory system. Although you may see sound intensity expressed in dB, another expression is also commonly used. Where the intensity of a sound is expressed with reference to a standard intensity (the lowest intensity at which a young person can hear a sound of 1000 Hz), it is written as dB SPL. The SPL stands for sound pressure level. Normal conversation is typically at a level of around 60 dB SPL. Unlike frequency and amplitude, phase is a relative characteristic, because it describes the relationship between different waves. Waves can be said to be in phase, meaning they have peaks at the same time, or out of phase, meaning that they are at different stages in their cycle at any one point in time. The three characteristics above and the diagrams shown

indicate a certain simplicity about sound signals. However, the waves shown here are pure waves, the sort you might expect from a tuning fork that emits a sound at a single frequency. These are quite different to the sound waves produced by more natural sources, which will often contain multiple different frequencies all combined together, giving a less smooth appearance (Figure).

Fig. Examples of sound waves produced by the violin and clarinet

In addition, it is rare that only a single sound is present in our environment, and sound sources also move around! This can make sound detection and perception a very complex process, and to understand how this happens we have to start with the ear.
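The decibel scale mentioned above is logarithmic, which is why such a huge range of pressures fits into a manageable range of numbers. A minimal sketch in Python of the standard dB SPL conversion (the 20 µPa reference is the conventional value for dB SPL; the example pressure of 0.02 Pa is an illustrative assumption, not a figure from this chapter):

```python
import math

# Reference pressure for dB SPL: 20 micropascals, roughly the quietest
# 1000 Hz tone a healthy young listener can detect.
P_REF = 20e-6  # pascals

def db_spl(pressure_pa: float) -> float:
    """Convert a sound pressure amplitude in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

# A pressure fluctuation of 0.02 Pa corresponds to normal conversation:
print(round(db_spl(0.02)))   # 60 dB SPL
# The reference pressure itself sits at the threshold:
print(round(db_spl(P_REF)))  # 0 dB SPL
```

Note that each 20 dB step corresponds to a tenfold change in pressure, which is what lets the same scale cover both rustling leaves and a jet engine.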

Sound detection: the structure of the ear

The human ear is often the focus of ridicule, but it is a highly specialised structure. The ear can be divided into three different parts which perform distinct functions:

- The outer ear, which is responsible for gathering sound and funnelling it inwards, but also has some protective features
- The middle ear, which helps prepare the signal for receipt in the inner ear and serves a protective function
- The inner ear, which contains the sensory receptor cells for hearing, called hair cells. It is in the inner ear that transduction takes place.

Figure shows the structure of the ear divided into these three sections.

Fig. The human ear can be divided into the outer, middle and inner ear, each of which has a distinct function in our auditory system.

Although transduction happens in the inner ear, the outer and middle ear have key functions and so it is important that we consider these. The outer ear consists of the pinna (or auricle), which is the visible part that sticks out of the side of our heads. In most species the pinna can move, but in humans it is static. The key function of the outer ear is in funnelling sound inwards, but the ridges of the pinna (the lumps and bumps you can feel in the ear) also play a role in helping us localise sound sources. Additional to this, and often overlooked, is the protective function of the outer ear. Ear wax found in the outer ear provides a coating which is antibacterial and

antifungal, creating an acidic environment hostile to pathogens. There are also tiny hairs in the outer ear, preventing entry of small particles or insects. The middle ear sits behind the tympanic membrane (or ear drum), which divides the outer and middle ear. The middle ear is an air-filled chamber containing three tiny bones, called the ossicles. These bones are connected in such a way that they create a lever between the tympanic membrane and the cochlea of the inner ear, which is necessary because the cochlea is fluid-filled. Spend a moment thinking about the last time you went swimming, or even put your head under the water in a bath. What happens to the sounds you could hear beforehand?

The sounds get much quieter, and will likely be muffled, if at all audible, when your ear is filled with water. Hopefully you will have noted that when your ear contains water from a pool or the bath, sound becomes very hard to hear. This is because the particles in the water are harder to displace than particles in air, which results in most of the sound being reflected back off the surface of the water. In fact, only around 0.1% of sound is transmitted into water from the air, which explains why it is hard to hear underwater. Because the inner ear is fluid-filled, this gives rise to a similar issue as hearing under water, because the sound wave must move from the air-filled middle ear to the inner ear. To achieve this without loss of signal, the signal is amplified in the middle ear by the lever actions of the ossicles, along with changes in the area of the bones contacting the tympanic membrane and cochlea, both of which result in an increase in pressure changes as the sound wave enters the cochlea. As with the outer ear, the middle ear also has a protective function, in the form of the middle ear reflex. This is triggered by sounds over 70 dB and involves muscles in the middle ear locking the position of the ossicles. What would happen if the ossicles could not move?

The signal could not be transmitted from the outer ear to the inner ear.
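The tiny fraction of sound transmitted from air into water can be estimated from the acoustic impedances of the two media. A sketch in Python: the impedance-mismatch formula is standard acoustics, but the specific impedance values used here are outside assumptions rather than figures from this chapter:

```python
# Fraction of sound intensity transmitted across a boundary between two
# media depends on their acoustic impedances Z1 and Z2.
Z_AIR = 415.0     # approximate acoustic impedance of air, in rayls
Z_WATER = 1.48e6  # approximate acoustic impedance of water, in rayls

def intensity_transmitted(z1: float, z2: float) -> float:
    """Fraction of incident sound intensity transmitted at normal incidence."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

fraction = intensity_transmitted(Z_AIR, Z_WATER)
print(f"{fraction:.2%} transmitted")  # roughly 0.1%; the rest is reflected
```

The enormous mismatch between the two impedances is exactly the problem the lever action of the ossicles compensates for.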

Fig. Diagrammatic representation of the three chambers of the cochlea (uncoiled)

We now turn our attention to the inner ear and, specifically, the cochlea, which is the structure important for hearing (other parts of the inner ear form part of the vestibular system, which is important for balance). The cochlea consists of a tiny tube, curled up like a snail. A small window into the cochlea, called the oval window (Figure), is the point at which the sound wave enters the inner ear, via the actions of the ossicles. The tube of the cochlea is separated into three different chambers by membranes. The key chamber to consider here is the scala media, which sits between the basilar membrane and the tectorial membrane, and contains the organ of Corti (Figures).

Fig. The structure of the cochlea

Fig. The structure of the organ of Corti

The cells critical for transduction of sound are the inner hair cells, which can be seen in Figure. These cells are referred to as hair cells because they contain stereocilia protruding from one end. The end from which the stereocilia protrude is referred to as the apical end. They project into a fluid called endolymph, whilst the other end of the cell, the basal end, sits in perilymph. The endolymph contains a very high concentration of potassium ions. How does this differ from typical extracellular space?

Normally potassium is at a low concentration outside the cell and a higher concentration inside, so this is the opposite to what is normally found. When a sound wave is transmitted to the cochlea, it causes the movement of fluid in the chambers, which in turn moves the basilar membrane upon which the inner hair cells sit. This movement causes their stereocilia to bend. When they bend, ion channels in the tips open and potassium flows into the hair cell, causing depolarisation (Figure). This is the auditory receptor potential. Spend a moment looking at Figure. What typical neuronal features can you see?

How are these cells different from neurons?

There are voltage-gated calcium channels and synaptic vesicles, but there is no axon.

Fig. The inner hair cell responsible for transduction of sound waves.

You should have noted that the inner hair cells only have some of the typical structural components of neurons. This is because, unlike the sensory receptor cells for the somatosensory system, these are not neurons and they cannot produce action potentials. Instead, when sound is detected, the receptor potential results in the release of glutamate from the basal end of the hair cell, where it synapses with neurons that form the cochlear nerve to the brain. If glutamate binds to the receptors on these neurons, an action potential will be produced and the sound signal will travel to the brain.

Auditory pathways: what goes up must come down

The cochlear nerve leaves the cochlea and enters the brain at the level of the brainstem, synapsing with neurons in the cochlear nuclear complex before travelling via the trapezoid body to the superior olive, also located in the brainstem. This is the first structure in the pathway to receive information from both ears. Prior to this, in the cochlear nuclear complex, information is only received from the ipsilateral ear. After leaving the superior olive, the auditory pathway continues in the lateral lemniscus to the inferior colliculus in the midbrain, before travelling to the medial geniculate nucleus of the thalamus. From the thalamus, as with the other senses you

have learnt about, the signal is sent on to the cortex: in this case, the primary auditory cortex in the temporal lobe. This complex ascending pathway is illustrated in Figure.

Fig. The ascending pathway from the cochlea to the cortex

You will learn about the types of processing that occur at different stages of this pathway shortly, but it is also important to recognise that the primary auditory cortex is not the end of the road for sound processing.

Where did touch and pain information go after the primary somatosensory cortex?

In both cases, information was sent on to other cortical regions, including secondary sensory areas and areas of the frontal cortex. As with touch and pain information, auditory information from the primary sensory cortex, in this case the primary auditory cortex, is carried to other cortical areas for further processing. Information from the primary auditory cortex divides into two separate pathways or streams: the ventral 'what' pathway and the dorsal 'where' pathway. The ventral pathway travels down and forward, and includes the superior temporal region and the ventrolateral prefrontal cortex. It is considered critical for auditory object recognition, hence the 'what' name (Bizley & Cohen, 2013). There is not yet a clear consensus on the exact role in recognition that the different structures in the pathway play, but it is known that activity in this pathway may be modulated by emotion (Kryklywy, Macpherson, Greening & Mitchell, 2013).

In contrast to the ventral pathway, the dorsal pathway travels up and forward, going into the posterodorsal cortex in the parietal lobe and forwards into the dorsolateral prefrontal cortex (Figure 24). This pathway is critical for identifying the location of sound, as suggested by the 'where' name. As with the ventral pathway, the exact role of individual structures is not clear, but it too can be modulated by other functions. Researchers have found that whilst it is not impacted by emotion (Kryklywy et al., 2013), it is, perhaps, modulated by spatial attention (Tata & Ward, 2005).

Fig. The dorsal and ventral streams of auditory information

Recall that when discussing pain pathways you learnt about a pathway which extends from higher regions of the brain to lower regions: a descending pathway. This type of pathway also exists in hearing. The auditory cortex sends projections down to the medial geniculate nucleus, inferior colliculus, superior olive and cochlear nuclear complex, meaning every structure in the ascending pathway receives descending input. Additionally, there are connections from the superior olive directly onto the inner and outer hair cells. These descending connections have been linked to several different functions, including protection from loud noises, learning about relevant auditory stimuli, altering responses in accordance with the sleep-wake cycle, and the effects of attention (Terreros & Delano, 2015).

Perceiving sound: from the wave to meaning

In order to create an accurate perception of sound information, we need to extract key information from the sound signal. In the section on the sound signal we introduced three key features of sound: frequency, intensity and phase. In this section we will consider these as you learn about how key features of sound are perceived, beginning with frequency. The frequency of a sound is thought to be coded by the auditory system in two different ways, both of which begin

in the cochlea. The first method of coding is termed a place code, because this coding method relies on stimuli of different frequencies being detected in different places within the cochlea. Therefore, if the brain can tell where in the cochlea the sound was detected, the frequency can be deduced. Figure shows how different frequencies can be mapped within the cochlea according to this method. At the basal end of the cochlea, sounds with a higher frequency are represented, whilst at the apical end, low frequency sounds are detected. The difference in location arises because different sound frequencies cause different displacement of the basilar membrane. Consequently, the peak of the displacement along the length of the membrane differs according to frequency, and only hair cells at this location will produce a receptor potential. Each hair cell is said to have a characteristic frequency to which it will respond.
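The place code can be made concrete with the Greenwood function, a standard formula for mapping position along the human basilar membrane to characteristic frequency. This formula and its constants are not given in this chapter, so treat them as an outside illustration of the idea:

```python
def characteristic_frequency(x: float) -> float:
    """Greenwood function for the human cochlea.

    x is the fractional distance from the apex (0.0) to the base (1.0);
    returns the characteristic frequency in Hz at that position.
    Constants A=165.4, a=2.1, k=0.88 are the commonly quoted human values.
    """
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Low frequencies map to the apex, high frequencies to the base:
print(round(characteristic_frequency(0.0)))  # ~20 Hz at the apex
print(round(characteristic_frequency(1.0)))  # ~20,000 Hz at the base
```

Note how the exponential form packs the whole 20 Hz to 20 kHz hearing range onto the length of one small membrane, with each position owning a narrow band of frequencies.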

Fig. A schematic diagram of how different frequencies are located along the length of the cochlea (a), with distinct displacement patterns for signals of different frequencies (b)

Fig. Temporal code assumes a direct relationship between stimulus frequency and the frequency of action potentials in the cochlear nerve. The neural response, phase-locked to every cycle of the sound wave, is shown in the upper trace (A), and the stimulus sound waveform in the lower trace (B).

Although there is some support for a place code of frequency information, there is also evidence from studies in humans that we might be able to detect smaller changes in sound frequency than would be possible from place coding alone. This led researchers to consider other possible explanations, and to the proposal of a temporal code. This proposal is based on research which shows a relationship between the frequency of the incoming sound wave and the frequency of action potentials in the cochlear nerve (Wever & Bray, 1930), which is illustrated in Figure. Thus, when an action potential occurs, it provides information about the frequency of the sound.

Recall that we can hear sounds of up to 20,000 Hz, or 20 kHz. How does this compare to the firing rate of neurons?

This is much higher than the firing rate of neurons. Typical neurons are thought to be able to fire at up to 1000 times per second. Given the constraints of firing rate, it is not possible for a temporal code to account for the full range of frequencies that we can perceive. Wever and Bray (1930) proposed that groups of neurons could work together to account for higher frequencies, as illustrated in Figure.
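The volley idea can be sketched numerically: if no single neuron can fire above about 1000 times per second, several neurons firing on alternate cycles can still jointly mark every cycle of a higher-frequency tone. A toy illustration (the 1000 Hz ceiling is the figure used above; everything else is an assumption for the sketch):

```python
MAX_RATE = 1000  # assumed ceiling on a single neuron's firing rate, per second

def neurons_needed(stimulus_hz: int) -> int:
    """Smallest group of neurons that can jointly fire once per cycle."""
    return -(-stimulus_hz // MAX_RATE)  # ceiling division

def volley_spikes(stimulus_hz: int, n_cycles: int = 6):
    """Assign each cycle of the stimulus to one neuron in rotation."""
    n = neurons_needed(stimulus_hz)
    return [cycle % n for cycle in range(n_cycles)]  # neuron index per cycle

print(neurons_needed(800))   # 1 - a single neuron can follow 800 Hz
print(neurons_needed(3000))  # 3 - three neurons share a 3000 Hz tone
print(volley_spikes(3000))   # [0, 1, 2, 0, 1, 2]: cycles covered in turn
```

Summed across the group, the output of the population still contains a spike for every cycle, so the stimulus frequency is preserved even though no individual neuron follows it.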

Fig. Wever and Bray suggested the volley principle, where neurons work together to create an output which mimics the stimulus frequency.

The two coding mechanisms are not mutually exclusive, and researchers now believe that temporal code may operate at very low frequencies (below 50 Hz) and place code may operate at higher frequencies (above 3000 Hz), with all intermediate frequencies being coded for by both mechanisms. Irrespective of which coding method is used for frequency in the cochlea, once encoded, this information is preserved throughout the auditory pathway. Sound frequency can be considered an objective characteristic of the wave, but the perceptual quality it most

closely relates to is pitch. This means that typically sounds of high frequency are perceived as having a high pitch. The second key characteristic of sound to consider is intensity. As with frequency, intensity information is believed to be coded initially in the cochlea and then transmitted up the ascending pathway. Also in line with the coding of frequency, there are two suggested mechanisms for coding intensity. The first method suggests that intensity can be encoded according to firing rate in the auditory nerve. To understand this, it is important to remember the relationship between stimulus intensity and receptor potential which was described in the section on touch. You should recall that the larger the stimulus, the bigger the receptor potential. In the case of sound, the more intense the stimulus, the larger the receptor potential will be, because the ion channels will be held open longer with a larger amplitude sound wave. This means that more potassium can flow into the hair cell, causing greater depolarisation and subsequently greater release of glutamate. The more glutamate that is released, the greater the amount that is likely to bind to the neurons forming the auditory nerve. Given that action potentials are all-or-nothing, the action potentials stay the same size but their frequency is increased. The second method of encoding intensity is thought to be the number of neurons activated. Recall from Figure that sound waves will result in a position of maximal displacement of the basilar membrane, and so typically only activate hair cells with the corresponding characteristic frequency, which in

turn signal to neurons in the cochlear nerve. However, it is suggested that as a sound signal becomes more intense, there will be sufficient displacement to activate hair cells either side of the characteristic frequency, albeit to a lesser extent, and therefore more neurons within the cochlear nerve may produce action potentials. You may have noticed that the methods for coding frequency and intensity here overlap. Considering the mechanisms described, how would you know whether an increased firing rate in the cochlear nerve is caused by a higher frequency or a greater intensity of a sound?

The short answer is that the signal will be ambiguous and you may not know straight away. The overlapping coding mechanisms can make it difficult to achieve accurate perception: indeed, we know that perception of loudness, the perceptual experience that most closely correlates with sound intensity, is impacted by the frequency of the sound. It is likely that the combination of

multiple coding mechanisms supports our perception because of this. Furthermore, small head movements can be made which can impact on the intensity of the sound, and therefore inform our perception of both frequency and intensity when the signal is ambiguous. This leads us nicely onto the coding of sound location, which requires information from both ears to be considered together. For that reason, sound location coding cannot take place in the cochlea, and so happens in the ascending auditory pathway. Which is the first structure in the pathway to receive auditory signals from both ears?

It is the superior olive in the brainstem. The superior olive can be divided into the medial and lateral superior olive, and each is thought to use a distinct mechanism for coding the location of sound. Neurons within the medial superior olive receive excitatory inputs from both cochlear nuclear complexes (i.e., the one on the right and the left), which allows them to act as coincidence detectors. To explain this a

little more, it is helpful to think about possible positions of sound sources relative to your head. Figure shows the two horizontal planes of sound: left to right and back to front.

Fig. Examples of sound sources relative to the head

We will ignore stimuli falling exactly behind or exactly in front for a moment, and focus on those to the left or right. Sound waves travel at a speed of 348 m/s (which you may also see written as 348 m s⁻¹), and a sound travelling from one side of the body will reach the ear on that side ahead of the other side. The ears are separated by the width of the head, so this means that sound waves coming directly from, for example, the right

side, will hit the right ear before they reach the left ear, and vice versa if the sound was coming from the left. Shorter delays between the sounds arriving at the left and right ear are experienced for sounds coming from less extreme right or left positions. This time delay means that neurons in the cochlear nerve on the side closest to the sound source will fire first. This head start is maintained in the cochlear nuclear complex. Neurons in the medial superior olive are thought to be arranged such that they can detect time delays and thus code the origin of the sound. Figure 29 illustrates how this is possible. If a sound is coming from the left side, the signal from the left cochlear nuclear complex will reach the superior olive first and likely get all the way along to neuron C before the signal from the right cochlear nuclear complex combines with it, maximally exciting that neuron.
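The interaural time differences these neurons detect are tiny. A rough calculation in Python, using the 348 m/s speed of sound quoted above; the 0.2 m ear separation is an assumed typical head width, not a figure from this chapter:

```python
import math

SPEED_OF_SOUND = 348.0  # m/s, the value quoted in the text
EAR_SEPARATION = 0.2    # metres; assumed typical distance between the ears

def interaural_time_difference(angle_deg: float) -> float:
    """Approximate ITD in seconds for a source at angle_deg from straight
    ahead (0 = directly in front, 90 = directly to one side)."""
    path_difference = EAR_SEPARATION * math.sin(math.radians(angle_deg))
    return path_difference / SPEED_OF_SOUND

print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")  # ~575 µs
print(f"{interaural_time_difference(0) * 1e6:.0f} microseconds")   # 0 µs
```

Even for a sound directly to one side, the delay is well under a millisecond, which is why the medial superior olive needs dedicated delay-line circuitry to detect it.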

Fig. Delay lines and coincidence detectors in the medial superior olive. Each of the three neurons shown (A, B and C) will fire most strongly when a signal from both ears reaches it at the same time. This will happen for neuron C when the sound wave is coming from the left, as the signal from the left cochlear nucleus has time to travel further (past neurons A and B) before the signal from the right cochlear nucleus arrives.

Using Figure, what would happen if the sound was coming from exactly in front or behind?

The input from the two cochlear nuclear complexes would likely combine on neuron B. Neuron B is therefore, in effect, a coincidence detector for no time delay between signals coming from the two ears. The brain can therefore deduce that the sound location is not to the left or the right, but it cannot tell from these signals whether the sound is in front of or behind the person. This method, termed interaural (between the ears) time delay, is thought to be effective for lower frequencies, but for higher frequencies another method can be used by the lateral superior olive. Neurons in this area are thought to receive excitatory inputs from the ipsilateral cochlear nuclear complex and inhibitory inputs from the contralateral complex. These neurons detect the interaural intensity difference, that is, the reduction in intensity caused by the sound travelling across the head. Importantly, the drop in intensity as sound moves around the head is greater for higher frequency sounds. The detection of interaural time and intensity differences are therefore complementary, favouring low and high frequency sounds, respectively. The two mechanisms outlined for perceiving location here are bottom-up methods. They rely completely on the data we receive, but there are additional cues to location. For

example, high frequency components of a sound diminish more than low frequency components when something is further away, so the relative amount of low and high frequencies can tell us something about the sound location. What would we need to know to make use of this cue?

We would need to know what properties (i.e., the intensity of the different frequencies) to expect in the sound, to work out if they are altered due to distance. Use of this cue therefore requires us to have some prior experience of the sound. By combining all the information about frequency, intensity and location, we are able to create a percept of the auditory world. However, before we move on, it is important to note that whilst much of the auditory coding appears to take place in lower areas of the auditory system, this information is preserved and processed throughout the cortex. More importantly, it is also combined with input from other senses, and several structures work together to create a perception of

complex stimuli such as music, including areas of the brain involved in memory and emotion (Warren, 2008).

Hearing loss: causes, impact and treatment

As indicated at the opening of this section, hearing loss can be a difficult and debilitating experience. There are several different types of hearing loss, and each comes with a different prognosis. To begin with, it is helpful to categorise types of hearing loss according to the location of the impairment:

- Conductive hearing loss occurs when the impairment is within the outer or middle ear, that is, the conduction of sound to the cochlea is impaired
- Cochlear hearing loss occurs when there is damage to the cochlea
- Retrocochlear hearing loss occurs when the damage is to the cochlear nerve or areas of the brain which process sound.

The latter two categories are often considered collectively under the heading of sensorineural hearing loss. The effects of hearing loss are typically considered in terms of hearing threshold and hearing discrimination. Threshold refers to the quietest sound that someone is able to hear in a controlled environment, whilst discrimination refers to their

ability to concentrate on a sound in a noisy environment. This means that we can also categorise hearing loss by the extent of the impairment, as indicated in Table.

Table. Different classes of hearing loss

Class | Hearing level (dB HL) | Impact
Mild | 25-39 | Following speech is difficult, especially in noisy environments
Moderate | 40-69 | Difficulty following speech without a hearing aid
Severe | 70-89 | Usually need to lip read or use sign language, even with a hearing aid
Profound | 90-120 | Usually need to lip read or use sign language; hearing aids are typically ineffective

You should have spotted that the unit given in Table is not the typical dB or dB SPL. This is a type of unit, dB HL or hearing level, used for hearing loss (see Box: Measuring hearing loss, below).

Measuring hearing loss

If someone is suspected of having hearing loss, they

will typically undergo tests at a hearing clinic to establish the presence and extent of hearing loss. This can be done with an instrument called an audiometer, which produces sounds at different frequencies that are played to the person through headphones (Figure).

Fig. A hearing test being conducted with an audiometer

The threshold set for the tests is that of a healthy young listener, and this is considered to be 0 dB HL. If

someone has a hearing impairment, they are unlikely to be able to hear the sound at this threshold, and the intensity will have to be increased for them to hear it, which they can indicate by pressing a button. The amount by which it is increased is their hearing level. For example, if someone must have the sound raised by 45 dB in order to detect it, they will have moderate hearing loss, because the value of 45 falls into that category (Table). Conductive hearing loss typically impacts only on hearing threshold, such that the threshold becomes higher, i.e., the quietest sound that someone can hear is louder than the sound someone without hearing loss can hear. Although conductive hearing loss can be caused by changes within any structure of the outer and middle ear, the most common occurrence is due to a build up of fluid in the middle ear, giving rise to a condition called otitis media with effusion, or glue ear. This condition is one of the most common illnesses found in children, and the most common cause of hearing loss within this age group (Hall, Maw, Golding & Steer, 2014). Why would fluid in the middle ear be problematic?

This is normally an air-filled structure, and the presence of fluid would result in much of the sound being reflected back from the middle ear, and so the signal will not reach the inner ear for transduction. Glue ear typically arises in just one ear, but can occur in both. It generally only causes mild hearing loss. It is thought to be more common in children than adults because the build up arises due to the Eustachian tube not draining properly. This tube connects the ear to the throat and normally drains the moisture from the air in the middle ear. In young children its function can be impacted adversely by the growth of adenoid tissue, which blocks the throat end of the tube, meaning it cannot drain and fluid gradually builds up. However, several risk factors for glue ear have been identified. These include iron deficiency (et al., 2019), allergies, particularly to dust mites (Salina, 2020), and exposure to second-hand smoke, as well as shorter duration of breastfeeding (et al., 2012; Owen et al., 1993). Social risk factors have also been identified, including living in a larger family (et al., 2020), being part of a lower socioeconomic group (et al., 2012), and longer hours spent in group childcare (Owen et al., 1993). The risk factors of glue ear are possibly less important than

the potential consequences of the condition. It can result in pain and disturbed sleep, which can in turn create behavioural problems, but the largest area of concern is its effect on educational outcomes, due to delays in language development and social isolation as children struggle to interact with their peers. Studies have demonstrated poorer educational outcomes for children who experience chronic glue ear (Hall et al., 2014; Hill, Hall & Williams, 2019), but it is likely that they can catch up over time, meaning any long-lasting impact is minimal. Despite the potential for disruption to educational outcomes, the first line of treatment for glue ear is simply to watch and wait, and treat any concurrent infections. If the condition does not improve in a few months, grommets may be used. These are tiny plastic inserts put into the tympanic membrane to allow the fluid to drain. This minor surgery is not without risk, because it can cause scarring of the membrane, which may impact on its elasticity. Whilst glue ear is the most common form of conductive hearing loss, the most common form of sensorineural hearing loss is Noise Induced Hearing Loss (NIHL). This type of hearing loss is caused by exposure to high intensity noises from a range of sources (industrial, military and recreational), and normally comes on over a period of time, so gets greater with age as hair cells are damaged or die. It is thought to affect a substantial proportion of the population, and typically results in bilateral hearing loss that affects both the hearing

threshold and discrimination. Severity can vary, and its impact is frequency dependent, with the biggest loss of sensitivity at higher frequencies that coincide with many of the everyday sounds we hear, including speech. At present there is no treatment for NIHL; instead it is recommended that preventative measures be taken, for example through the use of personal protective equipment (PPE). What challenges can you see to this approach using PPE?

This assumes that PPE is readily available, which it may not be. For example, in the case of military noises, civilians in war zones are unlikely to be able to access PPE. It also assumes that PPE can be worn without impact. A musician is likely to need to hear the sounds being produced, and so although some form of PPE may be possible, using it may not be practical. The impact of NIHL on an individual is substantial. For example, research has demonstrated that the extent of hearing loss in adults is correlated with measures of social isolation,
distress and even suicide ideation (2018). Other studies indicate NIHL can result in frustration, anxiety, stress, resentment, depression and fatigue (Canton & Williams, 2012). There are also reported effects on employment, with negative effects on employment opportunities and productivity (Canton & Williams, 2012; Hammer, 2017). Additionally, given NIHL will typically occur in older people, it may be harder to diagnose because they may mistake it for the natural decline in hearing that occurs as people get older, meaning they may not recognise the need for preventative action, where possible, or the need to seek help.

Looking across the senses

We have now reached the end of the section on hearing, but before we continue to look at the visual system it is helpful to spend a moment reflecting on the systems you have learnt about so far.

Exercises

1. Compare and contrast the mechanisms by which touch, pain and sound signals are transduced.

There are several similarities you could have mentioned here. For example, all these systems can include mechanically-gated ion channels, that is, those that are opened by mechanical force. Additionally, they all involve the influx of a positively charged ion, which causes a receptor potential. There are key differences as well. For example, whilst touch and hearing only use mechanically-sensitive channels, pain can also use thermally-sensitive and chemically-sensitive channels. Furthermore, the ions that create the receptor potential differ: in touch and pain the incoming ion is sodium, as is typical across the nervous system, whilst in hearing it is potassium, due to the unusual composition of the endolymph.
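The point about potassium and the endolymph can be made concrete with the Nernst equation, which gives the equilibrium potential of an ion from its concentrations on either side of the membrane. The sketch below is ours rather than the chapter's, and the concentration values are illustrative textbook figures:

```python
import math

def nernst_mv(conc_out_mM, conc_in_mM, z=1, temp_c=37):
    """Equilibrium potential (mV) for an ion, from the Nernst equation."""
    R, F = 8.314, 96485.0          # gas constant, Faraday constant
    T = temp_c + 273.15            # body temperature in kelvin
    return 1000 * (R * T / (z * F)) * math.log(conc_out_mM / conc_in_mM)

# Typical neuron: extracellular K+ is low, so E_K is strongly negative
# and K+ tends to leave the cell.
print(round(nernst_mv(conc_out_mM=4, conc_in_mM=140)))    # about -95 mV

# Hair cell stereocilia sit in K+-rich endolymph (~150 mM), so E_K is
# close to 0 mV; with the roughly +80 mV endocochlear potential on top,
# the driving force pushes K+ INTO the cell, depolarising it.
print(round(nernst_mv(conc_out_mM=150, conc_in_mM=140)))  # about +2 mV
```

Because the equilibrium potential for potassium in endolymph sits near 0 mV, and the endocochlear potential adds a further positive driving force, potassium flows into the hair cell and depolarises it; this is the opposite of the familiar outward potassium flow in typical neurons.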

2. Considering the pathway to the brain, what do you notice is common to all the sensory systems discussed so far?

In all systems, the thalamus receives the signal on the way to the primary sensory cortex for that system. Additionally, there are typically projections to a range of cortical areas after the primary sensory cortex.

3. Extracting key features of the sensory signal is important. What common features are detected across all three systems?

In all cases the intensity and location of the stimulus are encoded. Additionally, in touch and hearing, frequency information is encoded.

Key Takeaways

- Our sense of hearing relies on the detection of a longitudinal wave created by vibration of objects in air. These waves typically vary in frequency, amplitude and phase.
- The structure of the ear allows us to funnel sounds inwards and amplify the signal before it reaches the cochlea of the inner ear, where transduction takes place.
- Transduction occurs in specialised hair cells, which contain channels that open in response to vibration caused by sound waves. This results in an influx of potassium, producing a receptor potential.
- Unlike in the somatosensory system, the hair cell is not a modified neuron and therefore cannot itself produce an action potential. Instead, an action potential is produced in neurons of the cochlear nerve when the hair cell releases glutamate, which binds to receptors on these neurons. From here the signal can travel to the brain.
- The ascending auditory pathway is complex, travelling through two brainstem nuclei (the cochlear nuclear complex and the superior olive) before ascending to the midbrain inferior colliculus, the medial geniculate nucleus of the thalamus and then the primary auditory cortex. From here the signal travels in dorsal and ventral pathways to the prefrontal cortex, to determine where and what the sound is, respectively.
- There are also descending pathways from the primary auditory cortex which can modulate all structures in the ascending pathway.
- Key features are extracted from the sound wave beginning in the cochlea. There are two proposed coding mechanisms for frequency extraction: place coding and temporal coding. Place coding uses the location of transduction in the cochlea, whilst temporal coding locks transduction and subsequent cochlear nerve firing to the frequency of the incoming sound wave. Once coded in the cochlea, this information is retained throughout the auditory pathway.
- Intensity coding is thought to occur either through the firing rate of the cochlear nerve or the number of neurons firing. Location coding requires input from both ears and therefore first occurs outside the cochlea, at the level of the superior olive. Two mechanisms are proposed: interaural time delays and interaural intensity differences.
- Hearing loss can be categorised according to where in the auditory system the impairment occurs. Conductive hearing loss arises when damage occurs to the outer or middle ear, and sensorineural hearing loss arises when damage is in the cochlea or beyond.
- Different types of hearing loss impact hearing threshold and hearing discrimination differently. The extent of hearing loss can vary, as can the availability of treatments.
- Hearing loss is associated with a range of risk factors and can have a significant impact on the individual, including their social contact with others, occupational status and, in children, academic development.

References

(2019). The association between iron and otitis media with effusion. Journal of Otology, 15.

(2018). Social exclusion, mental health and suicidal ideation among adults with hearing loss: protective and risk factors. 68.

Bizley & Cohen (2013). The what, where and how of auditory-object perception. Nature Reviews Neuroscience, 14(10).

Canton & Williams (2012). The consequences of hearing loss on dairy farm communities in New Zealand. 17.

Hall, Maw, Golding, & Steer (2014). Glue ear, hearing loss and IQ: an association moderated by the child's home environment. PLoS One.

Hill, Hall, & Williams (2019). Impact of hearing and visual impairment in childhood on educational outcomes: a longitudinal cohort study. Open.

Kara (2012). Prevalence and risk factors of otitis media with effusion in school children in Eastern. Int, 76.

Macpherson, Greening, & Mitchell (2013). Emotion modulates activity in the 'what' but not 'where' auditory processing pathway. NeuroImage, 82.

Hammer (2017). Economic impact of hearing loss and reduction of hearing loss in the United States. Journal of Speech, Language, and Hearing Research, 60.

Salina (2020). Prevalence of allergic rhinitis in children with otitis media with effusion. European Annals of Allergy and Clinical Immunology,

52.

Owen, Baldwin, Swank, Johnson, & Howie (1993). Relation of infant feeding practices, cigarette smoke exposure, and group child care to the onset and duration of otitis media with effusion in the first two years of life. The Journal of Pediatrics, 123.

Tata & Ward (2005). Spatial attention modulates activity in a posterior 'where' auditory pathway. Neuropsychologia, 43.

Delano (2015). Corticofugal modulation of peripheral auditory responses. Frontiers in Systems Neuroscience.

Warren (2008). How does the brain process music?

Clinical Medicine.

Wever & Bray (1930). The nature of acoustic response: the relation between sound frequency and frequency of impulses in the auditory nerve. Journal of Experimental Psychology, 13.

About the Author

Eleanor Dommett, King's College London

Ellie studied psychology at Sheffield University. She went on to complete an MSc in Neuroscience at the Institute of Psychiatry before returning to Sheffield for her doctorate, investigating the superior colliculus, a midbrain structure. After a research post at Oxford University she became a lecturer at the Open University before joining King's College London, where she is now a Reader in Neuroscience. She conducts research into Attention Deficit Hyperactivity Disorder, focusing on identifying novel management approaches.