I’ve been switched on for a month now. It’s been a really steep learning curve, and after two tuning sessions I’m starting to feel like I’m getting close to the potential of the technology itself. I can hear all sorts of stuff now that I won’t bore you with.
Hearing stuff is easy now.
Identifying it is much harder.
It’s especially hard when it comes to speech – I already sort of knew this, but it’s now becoming really obvious to me on a daily basis. Speech is made up of an almost infinite number of combinations of consonants, vowels, accents, speaking speeds, and so many other variables. It sounds almost completely alien to me after more than 30 years of learning to hear speech through hearing aids.
I’m beginning to discriminate between sounds, though. I can tell the difference between a d and a t, just about. Sounds like s are starting to become clearer.
Despite all that, I’m not very happy at the moment. I’m going to explain why.
I had my one month hearing test the other day. I’ve been having hearing tests since I was a baby. You sit in a soundproofed room and press a button when you hear sounds.
However, this was a different kind of hearing test. I was trying to listen to a man’s disembodied voice coming from a speaker, and to understand what he was saying.
I’ve taken this test before, as part of my assessment for the cochlear implant. I scored 1%, and that was only because I was wildly guessing at words like at, the, and and.
This time round I could hear more, but I understood even less. I could recognise p, ng, t sounds, but all of my guesses at what was being said were completely wrong. I scored 0%.
But what really surprised me was the lipreading test. In this test they play a video of a man speaking without much facial expression, with sound accompanying it. When I took this test pre-implant, I scored 96%.
When I took the test again earlier this week, I scored 75%.
So my comprehension of the spoken word is currently significantly worse than it was before the operation. In fact when I took the test this time round, I felt as though I was being overloaded with information – both visually and aurally. It was almost as though my brain didn’t know whether to listen to the speech first, or rely on the visual input first.
I was pretty down about this for a while, then I decided to put all of this in context. When I took the test first time round, I was wearing a hearing aid I’d had 30+ years of experience with. My brain was hammering every last dB of information it received from that hearing aid, and making pretty good guesses with the limited data.
Contrast that with this device in my head, which I’ve only had for a month, and was only turned up fully about two weeks ago. I’m having to relearn what speech sounds like all over again.
A slightly disappointing milestone – but at the same time useful as a benchmark for future measurement. I’ll be tested again at the three month mark, then at the six month mark, then again at one year. Hopefully improvement will show then.
I know where I am in testing terms; day to day, I now feel like I’m hearing everything around me (OK, here’s a quick list: squeaking cabinet doors, fluorescent lights, my son’s giggle, the telephone, the wind outside, the waterfall outside the office, the clicking sound of the indicator in the car, the cracking noise my neck makes sometimes).
Now it’s time for a plan of focused, structured training every day, where I can measure my progress. Listening to audiobooks, learning the Ling sounds of speech, watching TV without subtitles, using Kindergarten iPad apps, listening to music and trying to follow the lyrics, and much more.
It’s time for a training montage.