Kids in the future will not be ‘taught’ to read. Every interaction with every word on every device will support their learning to read on their own.
We only sense now. We only feel now. We only think now. We only learn now. We are naturally ‘wired’ to learn from what is happening on the living edge of now. Humans learn best by differentiating, refining, and extending their participation on the living edge of now.
Modern human life requires an unnatural kind of learning. Reading, writing, math, and all their abstract, conventional, and technological outgrowths, require our brains to process information in complexly artificial ways. Whereas we learn to move, feel, touch, smell, taste, hear, emote, walk, and talk by reference to the immediate internal feel of learning them, in the artificial domains we learn from the external abstract authority of who or what we are learning from. In natural modes of learning we learn from immediately synchronous (self-generated) feedback on the edge of participating (falling while walking). In the artificial modes, (other-provided) feedback can be far out of ‘sync’ with the learning it relates to (test results in school provide feedback far downstream from the learning they measure).
Most of the children who struggle in school are struggling with artificial learning challenges.
In reading, for example, our brains must process a human invented ‘code’ and construct a simulation of language. This unique form of neural circuitry conscripts the biologically based language processes of our brains to perform in programmably mechanical ways (according to the instructions and information contained in a c-o-d-e). The virtual machinery that must form in our brain to do this is as artificial as a CD player.
The Absurdity of Explicit Abstract Reading Instruction
Can you imagine trying to help a toddler learn to walk by giving them verbal ‘how to’ instructions when they are sitting? Can you imagine trying to teach kids to learn to ride a bicycle without using a bicycle – by trying to teach them through the use of abstract exercises rather than a guiding hand during the real-time live act of trying to ride the bicycle?
All prevailing models of reading instruction share a similar absurdity. They all involve methods of instruction that are abstractly removed from the live act of reading they intend to improve. They are all designed to train learners’ brains to perform unconsciously automatic code-processing operations that will later, when engaged in actual reading, result in fluent word recognition. Why? Because the technology we have used to teach reading has been incapable of interactively coaching and supporting children on the living edge of their learning to read. Unable to respond to learners during the real-time flow of their learning to work out unfamiliar words, we’ve been forced to train them in the abstract, offline ways we do.
At Learning Stewards, we have turned the process completely upside-down and inside-out. Rather than using abstract training exercises, we have created a technology-based pedagogy built on instantaneously responding to and coaching learners, word by word, as needed.
Our tech provides autonomous learning-to-read guidance and support which safely and differentially stretches the learner’s mind into learning to decode. Instead of teaching phonics rules and spelling patterns to be later applied (hopefully) to the decoding of unfamiliar words, our tech interactively guides students through the process of working out unfamiliar words, and it teaches them the rules and patterns in the process. With this model, kids learn 3 simple steps that enable them to learn to read (thereafter without any ‘offline’ instruction).
Learning to Read 1-2-3:
1) Click on ANY word.
2) Try to read the word in the pop-up. Can’t? Click the word in the pop-up.
3) Repeat.
Every time students encounter a word they don’t recognize, they touch or click it. This brings up a pop-up box containing the word. Clicking on the word in the pop-up results in visual and audible ‘cues’ that reduce and often eliminate the (letter-sound-pattern) confusions in the word. With each click, the cues advance through a consistent series of steps that reveal (where applicable): the word’s segments, long and short sounds, silent letters, letter-sound exceptions, and groupings (blends and combinations). At each step, the student uses the cues to try again to recognize the word. If they can’t, they click again. If all of the cues (seen and heard) after the initial clicks aren’t sufficient to guide recognition of the word, a final click causes the pop-up to animate (visually and audibly) the ‘sounding-out’ of the word and, lastly, the playing of the word’s sound as it is normally heard.
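The click-by-click cue progression described above can be sketched as a simple state machine. This is a hypothetical illustration only; the stage names, their ordering, and the class interface are assumptions for clarity, not the actual implementation:

```python
# Hypothetical sketch of the click-driven cue progression described above.
# Stage names and ordering are illustrative assumptions, not the real system.

CUE_STAGES = [
    "segments",        # reveal the word's segments
    "vowel_sounds",    # mark long and short sounds
    "silent_letters",  # indicate silent letters
    "exceptions",      # flag letter-sound exceptions
    "groupings",       # highlight blends and combinations
    "sound_out",       # animate the 'sounding-out' of the whole word
    "say_word",        # play the word's sound as normally heard
]

class WordPopup:
    """Tracks how many cues the learner has requested for one unfamiliar word."""

    def __init__(self, word):
        self.word = word
        self.stage = -1  # no cues shown yet

    def click(self):
        """Advance to the next cue; the final stage plays the whole word."""
        if self.stage < len(CUE_STAGES) - 1:
            self.stage += 1
        return CUE_STAGES[self.stage]

popup = WordPopup("though")
print(popup.click())  # first click reveals "segments"
while popup.click() != "say_word":
    pass  # learner keeps clicking until recognition or the final stage
```

The key design point the sketch captures is that the learner, not a lesson plan, drives each step: every cue is requested at the moment of need, during the live act of reading.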
See for yourself:
Kids in the future will not be explicitly-systematically taught to read any more than they are explicitly-systematically taught to talk. They will learn to read during their every interaction with every word on every device (phones, tablets, computers, tv sets, augmented reality). They will learn to read as a background process pervasively available while they are playing and learning with anything involving written words. All words – all devices – all the time.
Decades of research, thousands of studies, and billions of dollars later, more than 60% of U.S. children are still chronically less than grade-level proficient in reading. We are dedicated to ending the archaic, abstract, tedious, precarious, and ineffective (and consequently life-maligning) ways we have historically taught reading. It’s time to get our heads out of the past and recognize that learning to read is a technological process, and as such, a process best facilitated by technology.
Learning Stewards is a 501(c)(3) non-profit organization working to provide the technology described in this article to every public school student in the U.S. for free. Help us steward this transformation.