[Trigger Warning: This post quotes someone who uses the R-word.]
In a workshop on special education, inclusion expert Dr. Cheryl Jorgensen simulates a class activity with a group of teachers. She asks them to pretend they’re students in a traditional social studies class, and among them is a hypothetical girl, Kelly, who has Down syndrome. Kelly is nonverbal and uses a talking device: she presses buttons, and those buttons speak her ideas. Her talking device is projected on the wall. Kelly’s vocabulary options are: Hello, Goodbye, Help, Bathroom, Yes, No, Break, Hungry, and Drink. Nine words. Three rows of three.
Jorgensen pretends to play the teacher of this class. “Who was the 16th president of the United States?” Remember, she’s talking to teachers who are pretending to be students. She asks someone to play Kelly, and “Kelly” approaches the projected screen in order to touch her answer.
Obviously, Kelly can’t answer. Her nine language options don’t give her the ability to do much more than request bodily needs. The woman playing Kelly eventually touches the word “Help.”
“Clearly she’s not smart enough to know who the 16th president is, right?” Jorgensen says. “She didn’t give me the answer.”
“It wasn’t one of the choices,” says a participant in the workshop.
“She’s too retarded to have those choices,” Jorgensen says curtly.
Before you gasp, know that Jorgensen is not speaking her own opinion. She’s ventriloquizing the attitude of educators who operate on a dangerous assumption: they assume Kelly can’t. They assume Kelly can’t communicate with more than nine words, that she can’t know who the 16th president of the U.S. is, and that she can’t learn the mainstream content alongside her typically developing peers.
In her workshop, Jorgensen argues that the least dangerous assumption is the opposite of this attitude. It’s the simple, powerful phrase: presume competence. Presume Kelly can achieve in a regular classroom. Presume she can learn what the rest of her peers can. Presume she can speak with more than nine words.
If you spend a few years working with special educators and therapists, as we have, and then meet someone who greets your daughter with this presume competence attitude, you realize how radical and rare it is.
A year and a half ago, I asked an early interventionist to help us with the significant gap between Fiona’s expressive and receptive communication. As you might know, Fiona is nonverbal. Her expressive communication is limited to pointing, eye contact, facial expressions, maybe twenty rough signs, and a thousand ways to intone her only sounds: mm, um, and hum. But she understands a thousand times more. So I asked her early interventionist to help me create a picture system for Fiona, one where she could point to pictures to express her wants and needs. The therapist was initially enthusiastic but eventually decided Fiona wasn’t a good candidate. She thought Fiona would “stim” on the picture cards (at the time, Fiona loved to hold cards and tap them against the palm of her hand). The therapist encouraged a few signs but didn’t provide much else in the way of augmentative and alternative communication. Nine months went by. Meanwhile, Fiona just kept learning more and more vocabulary, without the ability to express it herself.
Someone alerted me to a special service in the state. I tooted a horn, or sounded a bell, or spoke whatever magic words unlocked the door to this service, and it led to intermittent visits from an Augmentative Communication Consultant. Immediately upon meeting her in our living room, I saw her “presume competence” attitude, an attitude that was as front-and-center as the very shirt she wore.
“Does she have an alphabet app?” she asked Fiona’s early interventionist, who was also at the meeting.
“The alphabet?” the therapist said, like it was a foreign word. Like the subject of the alphabet was as incredible and out-there as astrophysics.
“Of course!” said the communication expert. “Get her writing!”
Fiona hasn’t shown the ability to steadily hold a crayon yet. But that didn’t matter to the communication expert. This woman thought Fiona should have access to the alphabet in any way that we could give it to her. And why not try some apps on an iPad?
She was the first expert I’d met who held such a long, wide, generous view of my daughter’s capabilities. So I listened intently to everything the expert said. I listened when she told me that Fiona and I needed a shared language, one that would enable robust, creative expression. (“Right now, you don’t have a shared language,” she said, a sentence that was both true and heartbreaking.) I listened when this woman told me that we needed to offer Fiona multiple parts of speech rather than just a list of nouns, so she could do more than make requests and identify objects. I listened when this woman believed we needed a system with voice output, so Fiona could touch a button and a computer could speak a word. I listened when this woman told me she really, truly believed Fiona could handle something much more complicated than a few dozen picture cards on strips of Velcro (which by that point the early interventionist had provided).
Over the course of a year, this expert watched Fiona carefully, assessed the gap between her expressive and receptive communication, observed her fine motor limitations, and eventually made a recommendation.
She believed we should try Speak For Yourself. It’s a robust and impressive communication app, something that, if Fiona learns to use it, she will never outgrow. It has the potential to offer 14,000 words. It’s incredibly user-friendly, and designed with motor planning in mind (targets stay in the same place when you move to different screens. Imagine trying to type on a keyboard that keeps moving its keys around. Not easy).
The problem? With motor planning come fixed targets. The size of the targets in Speak For Yourself cannot be changed, and the main screen offers 120 words. A hundred and twenty words on one iPad screen make for some pretty tiny blocks. Fiona is not yet isolating a finger, so she hits the screen with two or more fingers, and often rakes her hand across it, hitting multiple targets. Once in a blue moon, she hits her intended target, but more often than not, she doesn’t.
“This isn’t working,” I thought when we first experimented with the app. “We need something with bigger targets.” And I knew that also meant this: “We need fewer words.”
I wrote the communication expert. I told her my concerns. I took photos of Fiona touching the screen. I underscored Fiona’s fine motor limitations. I underscored the opinions of Fiona’s therapists, all of whom had met the app with serious skepticism. “Whoa,” her occupational therapist said when she saw all those tiny boxes of words. “The buttons are even small for me,” her physical therapist said when she tried to integrate the app into therapy.
The communication expert scheduled another meeting. When she returned to our living room, again with that “Presume Competence” attitude front and center, she listened to my concerns. She nodded. And then she said things that made my heart sing.
She said: We cannot allow Fiona’s fine motor impairments to limit Fiona’s language development.
She said: We cannot give her only the number of targets she can accurately hit when she knows hundreds more words than that.
She said: We give her the language. And then we help facilitate her use of the device until her fine motor skills catch up to her language abilities.
She said: We give her the language.
Friends, the sentence almost makes me cry—the kind of crying you do when you realize a weight has been sitting on your chest, and someone has just plucked it off you and said, “Here, let me take that for you. You don’t need that.” So much lightness. So much levity. We give her the language.
That is how we have ended up here:
Fiona is touching her Speak For Yourself app, about 100 words displayed within her reach. The picture feels like a radical gesture in presuming competence.
Let me be clear. Fiona cannot yet use this app to express herself in any way that you or I would. In this photo, she is exploring. She is hitting the screen and observing what happens. But she is very interested in observing what happens. And when she hits a word she knows and likes (water table, Grammy, mustard, Sesame Street), she lights up, looks at me, and grins. Then she resumes her exploration, striking “up” or “shake” or “doctor” or “heavy,” or striking no word at all.
We have many strategies at hand. We are giving this a full year. The theory is this: if it takes a typically developing child approximately one year to speak his or her first word, then it can take a child like Fiona just as long to eventually use the communication app expressively. We won’t expect her to touch a word in an intentional way any time soon. We won’t pressure her. We won’t use the app as a way to “test” her fine motor skills or her vocabulary knowledge (“Where’s the cow, Fiona? Point to the cow”). That’s not communication. That’s a dog-and-pony show. Instead, we’ll model what the app can do. We’ll comment on the weather with it. We’ll offer her choices with it, and she’ll nod yes or no. We’ll review the day’s adventures with it. And we’ll let her explore it. I think half of her therapists believe we’re a little delusional, and I don’t care. We are presuming competence. We are paving the way for a shared language. We are believing in our girl.