Virtually all methods of reading instruction are based on some variation of phonics and/or whole language, and the two camps have been locked in what has been dubbed the ‘reading wars’ for decades. In July 2018, another chapter in the ongoing debate unfolded. The following comment was posted in reply to “Phonics debate embracing the evidence – Phonics in context debate 2018”.
Despite their profound differences, the two sides of the debate (phonics vs whole language) share a common assumption: that learning to read MUST take place within the medium of two-dimensional static text. But why must the mental models of the 15th-century printing press continue to constrain how we envision and design 21st-century instruction? While good readers must eventually be able to read static text, nothing (except centuries of inertia) says we have to use static text to teach beginning and struggling readers.
Reading is an artificially inseminated language experience that is constructed by our brains according to the instructions and information contained in a c-o-d-e. Though many factors contribute to difficulties in learning to read, what makes learning to read (English and other deep orthographies) difficult for most beginning and struggling readers – what initially challenges their brains – is the confusing relationship between the naturally evolved and naturally learned code of speaking and listening, and the artificially created and artificially learned c-o-d-e of reading and writing. (See: Unnatural Confusion)
“It’s one of the most complex, unnatural, cognitive interactions that brain and environment have to coalesce together to produce.” – Dr. G. Reid Lyon, former Chief of the Child Development and Behavior Branch of the National Institute of Child Health & Human Development, National Institutes of Health
The c-o-d-e of English orthography is an archaic legacy technology (like telegraphs and typewriters). Unlike the development of modern information technologies, however, there was no committee of designers working to make its ‘user interface’ easy for people to learn to use. Instead, English orthography is the result of elitism, prejudice, ignorance, negligence, and a series of historical accidents (See: The First Millennium Bug).
“It’s easy to forget that the system we have learned is a system that is based on a series of accidents that result in layers of complexity.” – Dr. Thomas Cable, Co-author: A History of the English Language
Many notables, including Benjamin Franklin, Noah Webster, Melvil Dewey, Theodore Roosevelt, and Mark Twain, recognized that the code’s letter-sound confusion was at the root of reading difficulties. But despite their efforts and those of hundreds of others, centuries of attempts to change the alphabet or reform English spelling – to make the relationship between letters and sounds simply phonetic – failed (See: Reform Attempts).
“Delay in the plan here proposed may be fatal… the minds of men may again sink into indolence; a national acquiescence in error will follow, and posterity be doomed to struggle with difficulties which time and accident will perpetually multiply.” – Noah Webster
The central issue is inertia. Any change to the alphabet or spelling would create a ‘before’ and ‘after’ disconnect in the continuity of written English, and would be a disturbance, nuisance, and expense to everyone already literate in the system as it stands.
“People are more likely to change their religion than change their writing system.” – Charles Hockett, Anthropological Linguist
Because changing the code – changing the alphabet or spelling – has such intolerable consequences, our conceptions of ‘teaching reading’ have been constrained to accepting the confusion as immutable. As a consequence, virtually everything we think about reading, learning to read, and the teaching of reading (including “Scientifically Based Reading Research”) is based on – warped by – that acceptance (See: Paradigm Inertia). All traditional approaches to teaching reading train readers’ brains to deal with the letter-sound confusion by either working around it (whole language – contextual guessing) or by recognizing rule-based clues (cues) in the letters’ lexical and semantic context (phonics). Both methods are attempts to compensate for, rather than directly address, the confusing correspondence between letters and sounds.
“[In English] we have fifty some sounds and only twenty-six letters. So we have to adopt a whole variety of mechanisms to close the gap.” – Dr. Richard Venezky, Author: The American Way of Spelling: The Structure and Origins of American English Orthography
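To make the scale of that gap concrete, here is a minimal illustration (mine, not the original post’s): one spelling that maps to many sounds, and one sound that hides behind many spellings. The word lists are standard examples; the TypeScript structures are purely illustrative.

```typescript
// One spelling, many sounds: six common pronunciations of the grapheme
// "ough" (IPA values are approximate and purely illustrative).
const oughSounds: Record<string, string> = {
  though:  "/oʊ/",  // rhymes with "go"
  through: "/uː/",  // rhymes with "too"
  tough:   "/ʌf/",  // rhymes with "puff"
  cough:   "/ɔːf/", // rhymes with "off"
  bought:  "/ɔː/",  // as in "law" (before the final t)
  bough:   "/aʊ/",  // rhymes with "cow"
};

// One sound, many spellings: seven ways English spells the phoneme /iː/.
const longESpellings: Record<string, string> = {
  see: "ee",
  sea: "ea",
  scene: "e_e",
  ski: "i",
  key: "ey",
  chief: "ie",
  receive: "ei",
};

console.log(Object.keys(oughSounds).length, "sounds for one spelling.");
console.log(Object.keys(longESpellings).length, "spellings for one sound.");
```

Any pedagogy built on static text has to teach learners to resolve exactly these many-to-many mappings on their own.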
We can’t change the alphabet and we can’t change the spelling. But today, in an era of ever less expensive and ever more powerful digital devices, why not rebuild our models of reading instruction around technology that can dynamically, in real time, scaffold and differentiate the process of learning to read?
The purpose of reading instruction is to teach students how to work out words they don’t know. Because teachers can’t be on the ‘live edge of learning to read’ with all of their students all the time, and because static printed words can’t provide students any help in recognizing them, our models of reading instruction have been limited to training students to remember and apply abstract rules and procedures to work out words on their own. That’s where it all breaks down. The reason reading instruction fails so many students is that it’s not available to them when they most need it – when, in the act of reading, they encounter a word they don’t know. But what if it were?
What if, rather than teaching learners to decode/recognize words through abstract instructional exercises (and hoping they will apply them reliably and quickly enough when they are later actually reading), we provided instruction while they are engaged in the live act of trying to read words? Not some words some of the time, but each and every word that stutters the flow of their reading.
Using today’s inexpensive digital technology, we can easily add another ‘learner’s layer’ to English orthography. We can preserve the two-dimensional alphabet and spelling, and digitally embed within the orthography a dynamically responsive layer that provides a profoundly more neurologically efficient way to learn to read or to improve reading. Instead of pedagogies constrained by the limitations of ‘static orthography’, we can completely reimagine literacy learning through the lens of what ‘hyper-orthography’ makes possible.
What if the ‘learner’s orthography’ could respond – letter-by-letter, sound-by-sound, word-by-word, meaning-by-meaning – with whatever learners need as they progress to fluency in reading? Not by simply reading the words for them (short-circuiting their learning), but by interactively scaffolding, in real time, the process of learners working out how to recognize and understand each word for themselves. What if beginning and struggling readers learned to read with LIVE dynamic text that instructs, guides, and supports them word-by-word – not only to recognize and understand each and every particular word they ‘stutter’ on, but in a way that generalizes to learning to read all words? Sound far-fetched? Have you been clicking on some of the words on this page?
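As a thought experiment, here is a minimal browser-side sketch of what such a layer might look like. None of it is the author’s actual implementation – the `scaffolds` table, the segmentations, and the markup are all assumptions – but it shows the key move: each word becomes clickable, and a click reveals the word’s grapheme-phoneme structure rather than pronouncing it.

```typescript
// A toy grapheme→phoneme lookup; a real system would draw on a full
// pronunciation lexicon, not a hand-made table like this one.
interface Segment { grapheme: string; phoneme: string }

const scaffolds: Record<string, Segment[]> = {
  enough: [
    { grapheme: "e", phoneme: "/ɪ/" },
    { grapheme: "n", phoneme: "/n/" },
    { grapheme: "ough", phoneme: "/ʌf/" },
  ],
  said: [
    { grapheme: "s", phoneme: "/s/" },
    { grapheme: "ai", phoneme: "/ɛ/" },
    { grapheme: "d", phoneme: "/d/" },
  ],
};

// Wrap each word of a passage in a clickable <span>. A click does NOT read
// the word aloud; it reveals the word's letter-sound segments so the learner
// still does the work of blending them back into the word.
function renderScaffoldedText(passage: string, container: HTMLElement): void {
  for (const word of passage.split(/\s+/)) {
    const span = document.createElement("span");
    span.textContent = word + " ";
    span.style.cursor = "pointer";
    span.addEventListener(
      "click",
      () => {
        const key = word.toLowerCase().replace(/[^a-z]/g, "");
        const segments = scaffolds[key];
        if (!segments) return; // toy lookup; a real layer would always answer
        const hint = document.createElement("small");
        hint.textContent =
          "[" + segments.map(s => `${s.grapheme}→${s.phoneme}`).join(" ") + "] ";
        span.after(hint); // show the breakdown inline, next to the word
      },
      { once: true } // one reveal per word keeps the sketch simple
    );
    container.appendChild(span);
  }
}

renderScaffoldedText("It was hard enough, she said.", document.body);
```

The design choice that matters is inside the click handler: the layer scaffolds decoding by showing which letters map to which sounds, instead of short-circuiting the learning by speaking the word – exactly the distinction drawn above.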
Our debates about reading instruction are rooted in assumptions about the orthographic technology children learn to read with. By adding a dynamic overlay to the orthography, we can preserve the best and eliminate the worst of phonics and whole language as we differentially support and scaffold children up a more mind-friendly ladder to proficient reading.
Why not? Twenty-five years of ‘reading wars’ debates, thousands of research papers, hundreds of billions of dollars… and yet most of our children are still below proficient in reading: