
New Phono-Graphix/LiPS Study from Truch

Submitted by an LD OnLine user

COMPARING REMEDIAL OUTCOMES USING LiPS© AND PHONO-GRAPHIX: AN IN-DEPTH LOOK FROM A CLINICAL PERSPECTIVE

By Steve Truch, Ph.D., The Reading Foundation

Abstract

Several multisensory, structured and sequential programs are designed to enhance students’ phonological awareness, weakness in which is now seen as a primary cause of subsequent decoding and spelling problems. One of these programs is the Lindamood Phoneme Sequencing Program (LiPS). Another is the Phono-Graphix Program (PG). The Reading Foundation, a private clinic in Calgary, Alberta, Canada, has had several years of extensive clinical experience with both programs. Reading research studies in scholarly journals rarely provide practitioners with information about possible differential outcomes from commercially available programs; the clinical results presented here are therefore unique. Outcomes using the LiPS Program over an 80-hour time period were previously presented in Annals of Dyslexia in 1994. Clinical outcomes using the Phono-Graphix Program, also over an 80-hour time period, are presented here. A total of 203 clients of various ages and backgrounds completed an intensive clinical program (with a median of 78 hours of one-on-one treatment) in which the primary delivery was the Phono-Graphix method (modified for clinical use). Pre- and post-test results of such treatment showed highly significant gains (p < .0001 or better) on measures of phonological processing, sound/symbol connections, word attack, word identification, spelling and decoding in context. Extensive information regarding “gains per hour” as students go through the program is also presented. Next, the results from this analysis are compared to the data presented by the Phono-Graphix authors in the 1996 issue of Annals of Dyslexia; a failure to replicate those results in terms of treatment gains per hour is discussed. Finally, data comparing the outcomes from the LiPS Program (1994) and the Phono-Graphix Program are presented.
Overall, there were no significant differences in outcomes between the programs over a similar 80-hour time period on any variable except spelling, where outcomes favored the Phono-Graphix Program. However, those outcomes may have been the result of the additional spelling time built into The Reading Foundation’s clinical version of the Phono-Graphix Program. These data support the idea that a structured remedial reading program incorporating certain essential elements (see “Features of an Effective Remedial Reading Program” on the last page of this poster presentation) will bring positive results, and that there are several programs which can accomplish this, including The Reading Foundation’s new program, called “Discover Reading.”

Background

From about the mid-1980s to the present, a large body of research done world-wide has identified the stimulation of phonological awareness in students as a critical feature in both the prevention and remediation of reading disabilities (Torgesen, Alexander, Wagner, Rashotte, Voeller and Conway, 2001). Indeed, weak phonological skills are seen as a primary cause of reading disabilities (Lyon, Alexander and Yaffee, 1997).

Since a causal connection between weak phonological skills and subsequent decoding seems likely, it is not surprising that a number of studies now suggest that the addition of phonological awareness training is a necessary ingredient in the remediation of a reading disability (Truch, 1990; Torgesen, Wagner, Rashotte, Alexander and Conway, 1997).

One of the longest-standing remedial programs having a strong phonological component is the Lindamood Phoneme Sequencing Program (LiPS), formerly called Auditory Discrimination in Depth. The LiPS Program contains a motor feedback component that brings to the student’s attention the articulatory features that are present when a sound is produced by the vocal apparatus. Students first learn that some sounds are “noisy” (voiced) and some are “quiet” (unvoiced). Many of the consonant sounds can be taught as pairs, and the articulatory features used in producing them can be brought to the attention of the student. The labeling of the articulatory features is then used in error-handling procedures for both decoding and spelling. For example, the sounds /p/ and /b/ are both produced when the stream of air from the mouth is first stopped by the lips and then quickly released when the lips are opened. The student learns that these two sounds are called “lip-poppers,” an excellent label describing the articulatory actions that actually occur. “Lip-poppers” can further be analyzed as “quiet” (/p/) or “noisy” (/b/).

In the LiPS Program, the student’s phonological awareness is further enhanced by an activity called “tracking.” The student is taught to manipulate phonemes (add, delete, shift, substitute and repeat phonemes) using colored wooden blocks to assist his “tracking” of the sounds. As the student advances in the program, the shift is made from tracking sounds using blocks to tracking sounds using letters. When a student makes an error, the teacher can use the student’s new knowledge of the articulatory features of sounds and an error-handling procedure called “responding to the response” to assist the student in correcting his mistake. For example, if the student read “spot” as “stop,” he would be questioned about what he “feels” after the “skinny” sound /s/ when he says the word “stop.” His reply would be that he feels a /t/ sound, a “tip-tapper.” His attention would then be directed to what he actually sees after the skinny sound in the word “spot” (“Is the ‘p’ a tip-tapper letter?”). At this point, the student should become cognizant of his error and be able to self-correct it.

Both “articulatory feedback” and “responding to the response” are very compelling and attractive features of the LiPS Program. Indeed, the use of articulatory feedback has often been viewed (at least by many in the practitioner community) as necessary for the subsequent development of phonological awareness in students. However, at least two studies have now demonstrated that the articulatory feature is not necessary (Olson and Wise, 1992; Torgesen et al., 2001). It would therefore appear that a variety of different programs can produce similar long-run results in phonological awareness, word attack and word identification skills, so long as the program includes some essential ingredients, such as training in phonological awareness coupled with an explicit introduction to sound and letter connections.

In 1996, an article about a new program called Phono-Graphix appeared in Annals of Dyslexia (McGuinness, C., McGuinness, D., and McGuinness, G., 1996). In the article, the authors presented evidence of substantial improvements for students in a very short period of time (12 clinical hours) in phonological awareness, word attack and word identification skills. The program did not appear to use any articulatory feedback, as in the LiPS Program, but did include other auditory activities (phonemic segmenting, phonemic blending and “auditory processing,” which is essentially a phoneme deletion/blending task) that stimulated phonological awareness. Sound and letter connections were made explicit to the student in an interesting fashion. The Phono-Graphix Program also did not use the “responding to the response” procedure; rather, error-handling was more direct.

McGuinness et al (1996) provided a measure of “gains per hour” in standard score points showing that the gains from the Phono-Graphix Program were seven times greater than those obtained from the LiPS Program, 11 times greater than those from the Reading Recovery Program and 50 times greater than those from Alphabetic Phonics. These results appeared astonishing, given this author’s previous experience with the LiPS Program.

Anyone doing remedial work with students would naturally investigate such compelling statistics. The Reading Foundation in Calgary, Alberta, Canada, which had previously been using the LiPS Program and had provided documented evidence of its effectiveness (Truch, 1994) gradually made the switch from using the LiPS Program in the clinic to using Phono-Graphix.

The Director of The Reading Foundation clinic first attended a one-week training session in Florida. He also completed the Phono-Graphix trainer program and subsequently trained the clinical staff of The Reading Foundation in Calgary. In addition, two visits were made to The Reading Foundation by Carmen McGuinness, to “fine-tune” procedures and ensure that staff were adhering to the methods used in the Phono-Graphix program.

In this study, there were some significant differences between the delivery of the Phono-Graphix Program at the Florida clinic, where the program was developed, and its delivery at The Reading Foundation. One of the major differences was in instructional time and intensity. The Reading Foundation offers not only remedial reading instruction (including comprehension) but also programs for math and written language. The “immersion model,” in which students attend daily for four hours of one-on-one instruction, was started at the Lindamood-Bell Clinic, and The Reading Foundation adopted this model from its inception in 1990. The immersion model brings the most rapid gains for students in the shortest possible time period. The Phono-Graphix Program, on the other hand, was designed to be delivered once weekly for one hour with a trained teacher and supplemented by homework with the parent, the homework assignments coming from a book purchased by the parent. (It is interesting to note that the Florida clinic now offers a more intensive option.) At the time, the choice was to drop the immersion model or adapt the Phono-Graphix Program to fit it; The Reading Foundation chose the latter route.

Another difference concerned homework: homework assignments were not given to the parents of students attending The Reading Foundation. To accommodate these differences in program presentation, a great deal of preparation work was undertaken to establish a more thorough and explicit scope and sequence for the Phono-Graphix Program for the students attending The Reading Foundation’s “intensive immersion” clinic. The scope and sequence was completed in consultation with Carmen McGuinness. The techniques used in the Phono-Graphix Program were not altered, but more time was devoted in the clinical program to spelling, writing and reading in context than is generally the case with the Phono-Graphix Program. Modifications also included different sequences to accommodate students’ differing pacing abilities.

In this study, the outcomes of the Phono-Graphix Program are evaluated to determine its effectiveness from the perspective of the clinical immersion model. Those results are discussed in Part I of this paper. Next, the extremely rapid results presented in the McGuinness et al study (1996) after 12 hours of intervention are compared with the clinical results at The Reading Foundation after 12 hours of intervention. That forms the subject of Part II of this paper. Finally, because the raw data were still available on the subjects who had attended The Reading Foundation from 1990 to 1993 and who went through 80 hours of intensive one-on-one remediation using the LiPS Program, it was possible to determine by analysis of variance whether one program was more effective than the other in terms of outcomes after 80 hours. That forms the basis of Part III of this paper.

Method

Subjects

Clients who attend The Reading Foundation usually come from a word-of-mouth referral, either from a parent or from a specific agency, usually a school. Of the 203 students used in this analysis, 36 students (17.8%) were ages 6-7; 47 students (23.3%) were ages 8-9; 105 students (51.5%) were ages 10-16; and 15 students (7.4%) were ages 17 and over. These age divisions are quite arbitrary and were used only because the same age groups were used in the McGuinness et al (1996) study.

No attempt was made to diagnose or classify a student as “dyslexic” or “learning disabled,” though the vast majority of our clients would meet traditional criteria for “learning disabled” (i.e., average intelligence or better but with a discrepancy between reading potential and actual reading performance) or “dyslexic.” Some of our clients would be considered “mentally challenged” or “slow learners.” Many of them also presented with symptoms of attention deficit disorder. However, these categories were not as important as their common presenting symptom, namely a reading weakness; what was common to the reading weakness of all 203 clients was their initial deficit in phonological processing and attendant difficulties with decoding, spelling and fluent reading. The youngest client in this study was 6 years, 3 months old. The oldest was 42 years. The ratio of males to females was 1.6:1, with a total of 126 males and 77 females. The average vocabulary score for all 203 subjects was 105.60 (s.d. 11.80).

The collection of raw data was started in 1998 and discontinued two years later. Over that two-year period, just over 200 students of all ages completed the Phono-Graphix Program, more than twice the number in the McGuinness et al (1996) study.

Measures

Oral Vocabulary

Students’ oral vocabulary was measured using either the Peabody Picture Vocabulary Test (Dunn and Dunn, 1997), the Vocabulary subtest of the Wechsler Intelligence Scale for Children – Third Edition (Wechsler, 1991) or the Comprehensive Receptive and Expressive Vocabulary Test (Wallace and Hammill, 1997). From the latter test, only the Expressive scores were used in the data analysis.

Phonological Processing

Auditory segmenting, blending and auditory processing were measured informally by tests developed for the Phono-Graphix Program. For the auditory segmenting task, students were orally presented with either a real word or a nonsense word and asked to state each sound in the word (e.g., “Tell me the sounds in the word ‘cat’”). The original segmenting test from the Phono-Graphix Program had a ceiling score of 63 items but did not sample words with more than four sounds. Our clinical experience with the test quickly showed that this was too easy, so some additional words containing five and six sounds were added. The total possible score on The Reading Foundation version of the segmenting test was therefore 79. However, since a student receives one point for each sound segmented on either version of the test, comparisons of gains are still valid.

In the blending task, students were presented a sequence of phonemes orally and asked to tell what the word was (e.g., “What is this word: /b/ /e/ /d/?”). There were a total of 16 words on this test.

In the auditory processing test, students were asked to say a word with a phoneme deleted (e.g., “Say ‘play’ without the /l/ sound”). There were a total of 10 items on this test.

Finally, students were also assessed on the Lindamood Auditory Conceptualization Test (Lindamood and Lindamood, 1979) as an independent measure of phonological processing. On this test, students use colored wooden blocks to represent sounds. They are then asked to match patterns of sounds presented orally to them with the blocks. The ceiling score on this test is 100.

Code Knowledge

The Phono-Graphix Program contains an informal “code knowledge” test. Here the student was shown consonant and vowel letter combinations and asked to give the sound associated with each. For example, if the student saw “oy,” he would state /oi/. There were 50 items on the test, and the total number correct was converted to a percentage.

An informal measure of sound/symbol connections was also used by The Reading Foundation. In this test, the examiner said a single sound such as /b/ and the student printed the letter or letters that represented the sound. A total of 28 sound to letter connections were sampled. A similar test is used at the Lindamood-Bell clinic.

Decoding

The Woodcock Word Attack Subtest (Woodcock, 1987) was used to measure decoding ability. On this test, students read a series of nonsense words. Their raw score was converted to a grade-equivalent score. While a newer version of this test was available, it was not used, since the Truch study (1994) used the 1987 version of this subtest for students doing the LiPS Program.

Word Identification

The Reading subtest of the Wide Range Achievement Test – Third Edition (Wilkinson, 1993) was used for this variable. Students read a graded word list. Their raw score was converted to a standard score.

Spelling

The Spelling subtest of the Wide Range Achievement Test – Third Edition (Wilkinson, 1993) was used as a spelling measure. Raw scores were converted to standard scores.

Reading in Context

The old Gray Oral Reading Test (Gray, 1963) was used as a measure of reading in context. Again, a newer version of this test is available but was not used since the 1963 version was also used for the previous study (Truch, 1994) with students who had completed the LiPS Program.

Procedure

Most students in this study attended The Reading Foundation clinic on an intensive one-on-one remedial basis. Students attended for four hours of one-on-one each day, usually for four consecutive weeks. Some students attended for 40 hours or less when their difficulties were in the “mild to moderate” range. Some students required more than 80 hours.

For older students, the routine of attending the clinic for four hours each day was modified since older students had greater difficulty missing school time. In those cases, the student usually attended for eight hours each week instead of 20, until they completed the 80 hours. This routine had also applied to the 281 students who were the subjects for the Truch (1994) study using the LiPS Program.

Clinicians diligently followed the teaching methods as taught in the Phono-Graphix Program. An hourly program at The Reading Foundation was similar to the description provided in the McGuinness et al (1996) study. Lessons included activities for phonemic segmenting, blending and manipulation, together with a systematic introduction to sound/letter connections. The sound/letter connections start with one-to-one sound/letter correspondences (e.g., /b/ and b) and then move to the one-to-many sound/letter correspondences (e.g., /oe/ as oa, ow, etc.). However, since each student attended The Reading Foundation for about 80 hours, more time could be devoted to spelling and reading in context, areas that are not addressed in much detail in the Phono-Graphix Program Lesson Plan manual.

The Phono-Graphix Program was administered to each student by a qualified and trained clinician. Many of the staff had had previous experience with the LiPS Program, and some were reluctant to give it up, simply because it had been so successful. However, they also began to appreciate the simplicity of the Phono-Graphix Program and the fact that students could get to “real reading” sooner than they would with the LiPS Program. They also enjoyed the manner in which students were introduced to the graphemes that represented the phonemes. In the Phono-Graphix Program, students learn that the sounds in words are represented by letters (called “sound pictures”). There are no “rules,” such as “silent letters,” attached to the presentation of the letters, nor even the “expectancies” described in the LiPS Program. Rather, students learn that some sounds have more than one sound picture that can be used to represent them. For example, the sound /ae/ can be represented by the letters a, ai, ay, ea, ey, etc. For the student, then, the task is simplified to one of sorting the options and recognizing the correct one. Diane McGuinness (1997) makes a compelling argument for this approach: students find it more logical and easier to think of words first as consisting of sounds, and then to be shown how to represent the sounds with various letters. This is in contrast to a “phonics” approach that introduces the student first to a letter and then to the sound. When teaching proceeds from orthography to phonology, “phonics logic” in the long run becomes very complex and thus confusing to the student (McGuinness, 1997). The great strength of the Phono-Graphix Program lies in the simple and logical way in which the alphabet code is presented.

Students who attended The Reading Foundation were tested after 12 hours in the program (three clinical days). This measure allowed us to compare the results of 12 clinical hours at The Reading Foundation with the 12 clinical hours in the McGuinness et al. (1996) study.

Students at The Reading Foundation were tested again after 24 hours in the Phono-Graphix Program (six clinical days). The rationale for the 24-hour testing was to provide some comparison for the fact that students in the Florida clinic in the McGuinness et al (1996) study also had supplemental help at home for each lesson. Thus, a student who attended the McGuinness clinic in Florida for 12 hours had (we assume) about 12 more hours of homework. In the McGuinness et al. (1996) study, the authors state that only two of their 87 students failed to do the homework assignments. However, while this author was attending his training sessions at the Florida clinic, it became apparent that many of the students who came to the clinic each day did not do any homework at all, while others had many more hours of work done with them at home. As the McGuinness group did not account for this in presenting their findings, the “extra help at home” factor remains an unknown in terms of its possible impact on their outcomes.

Finally, students at The Reading Foundation were post-tested near the end of their 80 hours. Thus, we were able to see the trends in progress for each student individually as well as for the whole group over time. (The LAC, the sound/symbol test and the GORT were not administered after 12 and 24 hours. These three tests were administered only pre and post). Thus, extensive “gains per hour” information is provided in this study for each of the applicable variables.

Results

This quasi-experimental design simply compares pre and post-test results with no control group.

Part I

Outcomes for All Students Using the Phono-Graphix Program

The means for the pre-test, the 12-hour testing, the 24-hour testing and the post-test, together with the overall “gains per hour,” for all 203 students on all the variables are presented in Table I.

Pairwise comparisons using t-tests for paired samples with two-tailed significance were used for the analysis. A gains-per-hour column is also included. In this column, the gains per hour for each client were computed from the actual number of hours the student had completed, and those per-client rates were then averaged for the entire group. This procedure is the same one described in the McGuinness et al. (1996) analysis. So for a student who completed the program at The Reading Foundation in 20 hours, the pre- to post-test gains are averaged over 20 hours; if the student took 100 hours to complete the program, the gains are averaged over 100 hours, and so on. For some of the variables (e.g., segmenting, blending, auditory processing), the “gains per hour” when averaged over the median 80 hours is very small. However, as the mean scores in Table I also show, most of the gains on these variables occur in the first few hours of the program, typically in the first 12 hours. After that, the student doesn’t have much room for further growth, since he is already close to the ceiling on these tests. More analysis of the “gains per hour” factor occurs in Tables III to IX.
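As a rough sketch of the two computations just described, the following Python fragment averages per-client "gains per hour" and computes a paired-samples t statistic. The client records and score lists are invented for illustration; none of the numbers come from the study.

```python
import math
from statistics import mean, stdev

def gains_per_hour(records):
    """Average of each client's (post - pre) / hours over the group,
    as described for the gains-per-hour column: each client's rate is
    computed over the hours that client actually completed."""
    rates = [(post - pre) / hours for hours, pre, post in records]
    return mean(rates)

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for
    pre/post scores from the same clients."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Invented mini-cohort: (hours completed, pre score, post score)
clients = [(20, 40.0, 70.0), (80, 35.0, 77.0), (100, 30.0, 78.0)]
rate = gains_per_hour(clients)  # per-client rates averaged over the group
```

Note that this averages each client's own rate rather than dividing the group's mean gain by the median hours; with clients finishing in different numbers of hours, the two figures differ.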

Table I

Outcomes for All Students (n=203) After 80 Hours of one-on-one Intervention Using the Phono-Graphix Program

Variable Pre 12 Hours 24 Hours Post Sig. Gains/hour

LAC 63.2 NA NA 90.4 p < .001 .47 (S.D. .38)
S.D. 22.0 11.6

Segmenting 40.1 73.3 75.4 77.4 p < .001 .75 (S.D. .97)
S.D. 21.9 9.1 6.8 3.4

Blending 12.1 14.1 14.4 14.9 p < .001 .05 (S.D. .06)
S.D. 3.2 1.9 1.5 .9

Auditory
Processing 4.7 8.4 8.6 9.4 p < .001 .09 (S.D. .13)
S.D. 2.9 2.4 2.1 1.3

Code
Knowledge 61.9 77.4 80.5 88.9 p < .001 .50 (S.D. .32)
S.D. 15.5 13.8 13.5 9.7

Sound
Symbols 22.8 NA NA 27.4 p < .001 .09 (S.D. .09)
S.D. 4.3 1.2

Woodcock 3.37 5.5 5.9 8.0 p < .001 .11 (S.D. .17)
S.D. 2.2 3.4 3.6 3.9

WRAT
Reading 83.5 90.9 91.7 98.8 p < .001 .29 (S.D. .22)
S.D. 13.3 13.3 14.0 12.1

WRAT
Spelling 83.9 87.9 89.1 95.6 p < .001 .21 (S.D. .21)
S.D. 12.9 12.7 13.2 13.1

GORT 3.8 NA NA 4.8 p < .001 .02 (S.D. .04)
S.D. 2.9 3.2

The fact that highly significant pre and post-test gains were obtained for all subjects is consistent with clinical impressions of gains perceived by parents, clinicians and students themselves. The Phono-Graphix Program does work and produces very good clinical outcomes.

Because data was collected after 12 hours of intervention and again at 24 hours, it would be interesting to know whether the changes that do occur happen in a cumulative fashion or not. Table II presents the degrees of significance for all the variables and all possible time intervals for those variables where testing occurred at 12 and 24 hours (as well as pre and post of course).

Table II

Degrees of Significance (p) of Changes in Means Between All Possible Time Intervals
For All Ages (N = 202)

Variable Pre to 12 Pre to 24 Pre to Post 12 to 24 12 to Post 24 to Post

Segmenting .001 .001 .001 .001 .001 .001
Blending .001 .001 .001 .001 .001 .001
Auditory Processing .001 .001 .001 .011 .001 .001
Code Knowledge .001 .001 .001 .001 .001 .001
Woodcock .001 .001 .001 .001 .001 .001
WRAT Reading .001 .001 .001 .001 .001 .001
WRAT Spelling .001 .001 .001 .001 .001 .001

Table II makes it very clear that highly significant gains over time are continually occurring despite the fact that changes in means between intervals (Table I) are sometimes small.

When each of the variables in Table II was also sorted over the four different age groups, significant gains across time intervals again appeared. This was true in almost every instance and is therefore not presented in detailed tables here.

Comparing Age Groups in Gains Per Hour

Students were grouped into three age categories (6-7; 8-9 and 10-16) so as to be able to compare them, in Part II, to the same age groups presented in the McGuinness et al (1996) study. We also had a number of students in the 17+ group and the information on them is also presented in Tables III to IX. Unfortunately, only 15 clients were in this older category which is a very small sample size. [No information for an older group of students is presented at all in the McGuinness et al. (1996) study].

Tables III-IX present an in-depth look at the gains per hour for each of the variables (where 12- and 24-hour testing was used), broken down into the four age groups. The 12-hour column represents the gains from the pre-test to the completion of 12 hours of intervention, divided by 12. The 24-hour column represents the additional gains from 12 hours to 24 hours, divided by 12. The Post column represents the gains per hour from an additional 56 hours of intervention (from 24 hours to the final post-test).
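The three interval rates just defined can be sketched as a small Python helper. The function name is invented for illustration, and rates computed from group means (as in the example call, which uses the segmenting means from Table I) will differ slightly from the tabled values, since the tables average per-client rates.

```python
def interval_rates(pre, at12, at24, post, tail_hours=56):
    """Gains per hour over the three intervals described in the text:
    pre -> 12 hours (divided by 12), 12 -> 24 hours (divided by 12),
    and 24 hours -> post (divided by the remaining hours, 56 at the
    median program length)."""
    return ((at12 - pre) / 12,
            (at24 - at12) / 12,
            (post - at24) / tail_hours)

# Segmenting group means from Table I: 40.1 -> 73.3 -> 75.4 -> 77.4
early, middle, late = interval_rates(40.1, 73.3, 75.4, 77.4)
```

The steep early rate and the much flatter later rates mirror the pattern discussed for Table III: most of the segmenting growth occurs in the first 12 hours.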

Table III

Gains Per Hour On Segmenting (79 items) At Pre, 12 hours, 24 hours and Post

Variable 12 Hour 24 Hour Post

Age

6-7 2.21 .49 .11
8-9 3.26 .16 .03
10-16 2.88 .12 .02
17+ 3.48 .22 .02
All Subjects 2.91 .20 .04

For this variable, the average rate of gain between time intervals slows down as the student spends more hours in the program. In fact, the rate of gain is significantly slower (p < .001) for the last two time intervals. Even though the gains per hour are very small compared to the leaps made in the first 12 hours, the results are nevertheless cumulative over time and are significant: a further 12 hours brings significant gains in segmenting, and a further 56 hours brings further gains again. However, the rates of gain in the later time intervals are significantly slower than in the first 12 hours.

Bear in mind as well that these are average scores. Students who were highly proficient at segmenting after 12 hours would have had their programs adjusted. Segmenting activities would still be done, but likely not for each and every ensuing clinical hour. The same holds true for the blending and auditory processing variables presented next. However, since we did not keep such microscopic records, we cannot know for certain the exact number of hours a particular student would have spent in each of these activities over the course of the 80 hours.

Table IV

Gains Per Hour On Blending (16 items) At Pre, 12 hours, 24 hours and Post

Variable 12 Hour 24 Hour Post

Age p-value p-value p-value

6-7 .23 .001 .06 .008 .03 .001
8-9 .15 .001 .03 .001 .009 .001
10-16 .18 .001 .009 .001 .003 .001
17+ .13 .001 .05 ns .01 ns
All Subjects .18 .001 .03 .001 .009 .001

The probability values are reported for the Blending variable since the degree of significance in the gains per hour varied somewhat by age group and time interval. Gains between intervals were again significantly slower for all but the 17+ age group after 24 hours and after a further 56 hours. Again, these numbers simply mean that the rate of gain slows down after the first 12 hours for all but the 17+ age group, but gains are still occurring.

Table V

Gains Per Hour On Auditory Processing (10 items) At Pre, 12 hours, 24 hours and Post

Variable 12 Hour 24 Hour Post

Age p-value p-value p-value

6-7 .23 .001 .06 .008 .03 .001
8-9 .15 .001 .03 .001 .00 .001
10-16 .18 .001 .009 .001 .003 .001
17+ .13 .001 .05 ns .01 ns
All Subjects .18 .001 .03 .001 .009 .001

On this variable, all gains are highly significant after 12 hours. Once again, the 17+ age group does not register significantly slower rate changes after 12 more hours and after 56 more hours. The other groups slow down in their rate of gains to a significant extent.

Table VI

Gains Per Hour On Code Knowledge (% gains) At Pre, 12 hours, 24 hours and Post

Variable 12 Hour 24 Hour Post

Age p-value p-value p-value

6-7 1.32 .001 .37 .008 .23 .001
8-9 1.27 .001 .30 .001 .17 .001
10-16 1.32 .001 .38 .001 .12 .001
17+ 2.18 .001 .35 .01 .12 .001
All Subjects 1.36 .001 .35 .001 .15 .001

Gains on code knowledge for all subjects are highly significant after 12 hours. After that, the rate of gain for each group significantly slows down. However, students are still making hourly gains.

Table VII

Gains Per Hour On Word Attack (Grade-Equivalents) At Pre, 12 hours, 24 hours and Post

Variable 12 Hour 24 Hour Post

Word Attack

Age p-value p-value p-value

6-7 .11 .001 .02 .001 .02 .001
8-9 .16 .001 .05 .006 .03 .001
10-16 .25 .001 .07 .001 .04 .001
17+ .16 .001 .12 ns .01 ns
All Subjects .19 .001 .06 .001 .03 .001

Gains on word attack are significant after 12 hours and the rate then slows down for the remaining time intervals except for the 17+ age group where the rates between intervals are not significantly slower.

Table VIII

Gains Per Hour On Word Identification (Standard Scores) At Pre, 12 hours, 24 hours and Post

Variable 12 Hour 24 Hour Post

WRAT-R Reading
Age p-value p-value p-value

6-7 .54 .001 .41 ns .08 .002
8-9 .44 .001 .16 .05 .14 .001
10-16 .74 .001 .11 .001 .10 .001
17+ .98 .001 .32 ns .10 .05
All Subjects .64 .001 .19 .001 .11 .001

Word identification gains are significant for all subjects overall after 12 hours. However, the rate gains after 24 hours are not significantly slower for two of the age groups (6-7 and 17+). The other groups slow down significantly.

Table IX

Gains Per Hour On Spelling (Standard Scores) At Pre, 12 hours, 24 hours and Post

WRAT-R Spelling

Age          12-Hr Gain   p        24-Hr Gain   p        Post Gain   p

6-7 .49 .001 .27 ns .06 .008
8-9 .30 .001 .23 ns .09 .01
10-16 .35 .001 .18 ns .09 .001
17+ .51 .001 .21 ns .15 ns
All Subjects .37 .001 .21 .04 .09 .001

The gains in spelling over the first 12 hours are again highly significant. After that, those gains also remain fairly constant for the next 12 hours. For the last 56 hours, the rates are significantly slower for all but the 17+ group. Keep in mind that by the end of the clinical immersion program, all students have gained significantly in their spelling skills.

Discussion of Part 1 Results

As is evident from Tables I to IX, the gains for all variables and for all students are highly significant. Clinical use of the Phono-Graphix Program did deliver excellent results.

As well, for all the variables, a surprisingly positive effect occurs after just 12 hours of intervention. It is surprising because this “12 hour effect” has not surfaced in prior studies of intensive remedial reading interventions. The McGuinness et al. (1996) study was the first to show that significant gains occurred in a very rapid time frame with their Phono-Graphix Program, and they make much of it in their article and in subsequent advertising for their program.

However, the “12 hour effect” may be present in other programs; there is simply no data at this point to determine whether the effect is restricted to the Phono-Graphix Program or appears with other programs as well. For example, there might have been a “12 hour effect” from the LiPS Program, but in the Truch (1994) study, data was not collected at the same time intervals as in this one; only pre and post-tests were administered to students who completed the LiPS Program. It seems likely that the “12 hour effect” is not restricted to any one program, so long as the remedial program contains the activities needed to strengthen students in their areas of weakness.

The great value of the “12 hour effect,” of course, is the “quick start” it gives students and the resulting encouragement and boost in self-confidence.

However, it should also be apparent from the data that students cannot possibly become “readers” after just 12 hours of intervention. This is despite the fact that all the phonological variables were nearly at ceiling levels after just 12 hours and even the word identification and spelling variables were in the average range or close to it after just 12 hours (see Table I).

Students who require remedial assistance in reading not only require a good program; they require time to develop the complex processes involved in becoming readers. Any suggestion of a “quick fix” is extremely misleading and, in the long run, very unfair to the students. For example, in the Phono-Graphix Program, most students can be introduced to all the “code options” and retain many of them in the short term after just a few hours of intervention. However, for the student to develop proficiency in using those options for word identification or spelling (even spelling single words, let alone creating sentences), and to retain those options in the long term, takes much more time. The case is even worse for reading fluency, which has been receiving a great deal of attention recently (e.g., Perspectives, International Dyslexia Association, Volume 28, No. 1, Winter, 2002). Teaching a child to become a fluent reader when there has been an initial weakness in phonological processing remains quite a challenge. As noted in Table I, even after 80 hours of intensive instruction, the gains in fluency for The Reading Foundation students are positive but remain rather small. Torgesen et al. (2001) provide evidence that 80 hours of a similar “intensive immersion” in a school setting brings positive rate gains for students. However, a two-year follow-up by the Torgesen group on those students showed that rate scores dropped for nearly 75% of the students. Only 25% continued to grow in their fluency after the intervention was over (however, no significant follow-up after the intensive immersion was provided to those students either).

Overall then, the clinical results for students receiving the Phono-Graphix Program in an intensive immersion fashion are very good. Positive gains show on all variables. A “12 hour effect” also appears and provides the students with a quick start to reading recovery. Each clinical hour of intervention brings continued and positive changes on variables over most time intervals. The 17+ age group does not register significantly slower rate gains after 12, 24 and 56 hours, as many of the other age groups do. However, this may have been due to the small sample size for this group (15 students).

Gains for all students after 12 hours are strong, even in word attack, word identification and spelling. After a full 80 hours, the gains are also encouraging. By that time, all variables are “normalized” except for fluency, which, while stronger, is still not nearly in the average range for many students.

From a clinical perspective, these results are very positive. Clinicians, students and parents were all pleased with the kind of progress their students made while in the clinical program.

A lingering question for many parents is the extent to which such gains are maintained in the long run. McGuinness et al. (1996) did not conduct any formal follow-up with their students, though they did report encouraging parent comments from a follow-up parent survey (50% return rate). Formal follow-up did not occur for any of The Reading Foundation students either, though many Reading Foundation students are re-assessed at various intervals after leaving the program. Most students from such assessments show maintenance of gains on many variables; however, the data collected in this fashion has not been systematically recorded or subjected to any kind of analysis.

Systematic two-year follow up in the Torgesen et al. (2001) study provides the most thorough analysis to date on long-term results from intensive immersion interventions. Generally speaking, word attack and word identification scores continued to grow for about 75% of the students two years after the intensive intervention. Comprehension scores continued to grow for about 50% of the group while for the other half, the score on comprehension fell. Rate gains continued to grow for only about 25% of the students; another 25% gained after 1 year and then fell off, while for the other 50%, rate gains continued downward after the post-testing and for the next two years. Of all the variables then, fluency is the most difficult to maintain and grow after an intensive intervention.

Part II

In this section we compare, where possible, the 12 hour outcomes on common measures from the McGuinness et al. (1996) study and this one.

This section is much like comparing apples to oranges, since there are points of departure between the actual delivery of the Phono-Graphix Program at The Reading Foundation and the delivery at the Florida clinic. The “clinical immersion” model was used at The Reading Foundation: students attended The Reading Foundation each day for four hours over a four-week period, whereas in Florida, students attended just once a week and supplemented that instruction with homework. What is common, therefore, is 12 hours of instruction delivered in both cases by instructors trained in the Phono-Graphix Program. The McGuinness et al. (1996) group, however, did have roughly 12 supplemental hours of homework, a factor with an unknown effect on this comparison.

Because this was not a controlled study, the measures differed as well. However, the measures common to the McGuinness et al. (1996) study and this one, for which meaningful comparisons could be made, include the segmenting, blending, auditory processing and code knowledge tasks from the Phono-Graphix Program. The gains in standard score points from the Wide Range Achievement Test (WRAT) Reading subtest are also arguably comparable to those from the McGuinness study using the Woodcock Reading Mastery Word Identification Test (Woodcock, 1987). Both tests are standardized measures of real word identification, and both allow for a “standard score gains per hour” measure.
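For clarity, the “standard score gains per hour” measure is simply the change in score divided by the hours of instruction. A minimal sketch (the sample numbers are hypothetical illustrations, not figures from either study):

```python
def gains_per_hour(pre_score: float, post_score: float, hours: float) -> float:
    """Rate-of-gain metric: score change divided by hours of instruction."""
    return (post_score - pre_score) / hours

# Hypothetical student: standard score rises from 85 to 94 over 12 hours.
rate = gains_per_hour(85, 94, 12)
print(round(rate, 2))  # 0.75 standard-score points per hour
```

The same computation applies to raw scores or percent-correct measures; only the units of the result change.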

Unfortunately, the McGuinness et al. (1996) data did not include standard deviations for segmenting, blending and auditory processing, so in the end, code knowledge and word identification were the only two variables for which standard deviations were published in the McGuinness et al. (1996) study. Consequently, they are the only variables where a statistical comparison of the two groups was possible. The means, standard deviations and p-values from the two groups on those two variables and the other variables appear in Table X and favor the Florida group.

The McGuinness et al. (1996) study did not report pre and post-test results for spelling or reading in context (fluency) nor for an older age group. Consequently none of those results appear in Table X.

Table X
Results After 12 Hours of Intervention

                         McGuinness et al. (1996)          Reading Foundation

Variable / Age           N      Pre      Post (12 hrs)     N      Pre      12 Hours

Segmenting (63 items Florida; 79 items Reading Foundation)

Age 6-7                  31     48.4     61.3              37     32.9     63.3
Age 8-9                  27     38.3     61.0              47     38.9     74.3
Age 10-16                29     41.0     62.3              104    43.9     76.4

Blending (max 16)

Age 6-7                         9.4      14.0                     10.0     12.3
Age 8-9                         11.4     14.9                     12.6     14.3
Age 10-16                       12.2     14.5                     12.8     14.8

Auditory Processing (max 10)

Age 6-7                         4.2      7.4                      2.0      5.6
Age 8-9                         5.2      8.7                      4.3      8.3
Age 10-16                       6.3      9.5                      5.8      9.2

Code Knowledge (%)

Age 6-7                         44.9     76.7                     48.0     61.8     (p < .001)
Age 8-9                         61.2     85.8                     64.8     78.7     (p < .004)
Age 10-16                       70.7     90.3                     66.2     81.5     (p < .001)

Word Identification (Standard Scores)

Age 6-7                         89.8     103.0                    88.8     97.2     (p < .037)
Age 8-9                         86.9     99.7                     86.1     90.7     (p < .005)
Age 10-16                       82.5     97.6                     82.6     86.0     (p < .018)

Again, because the standard deviations for segmenting, blending and auditory processing were not published (contact with the senior author of the McGuinness study revealed that those standard deviations are simply not available), statistical comparison of those variables is not possible.

However, since the scores on these variables are all recorded as raw scores, the author has calculated the raw-score gains for each of these variables, based on the means in Table X. Those gains are tabulated in Table XI to allow some form of comparison. The Florida group used a segmenting test with 63 items; The Reading Foundation used the same test with additional upper-level items, for a total of 79.

Table XI
Gains in Raw Scores after 12 Hours between Groups on Segmenting, Blending and Auditory Processing

Variable Florida Reading Foundation

Segmenting

Age 6-7 12.9 30.4
Age 8-9 22.7 35.4
Age 10-16 21.3 32.5

Blending

Age 6-7 4.6 2.3
Age 8-9 3.5 1.7
Age 10-16 2.3 2.0

Auditory Processing

Age 6-7 3.2 3.6
Age 8-9 3.5 4.0
Age 10-16 3.2 3.4

The gains in raw scores after 12 hours in segmenting favor the Reading Foundation group. However, the initial means for this group are lower than the Florida group’s, and the ceiling on the test is higher.

The blending scores favor the Florida group after 12 hours. Perhaps the “homework effect” is at work here, since halving the Florida scores for the first two age groups brings nearly equivalent results.

Finally, the auditory processing scores appear to bring gains that are quite similar after 12 hours despite any homework done by the parents on this task.

In Table XII, we compare the gains per hour after 12 hours between the two groups. These gains are, at first glance, astonishingly different.

Table XII

Gains per Hour on Word Identification

Age            Florida Group   S.D.    Reading Foundation   S.D.

6-7            1.73            1.53    .54                  .66    (p < .038)
8-9            1.44            1.06    .44                  .53    (p < .005)
10-16          1.93            1.13    .74                  .61    (p < .02)

All Subjects   1.70                    .84

The Florida group shows gains that are 2-3 times faster after 12 hours than those at The Reading Foundation. The differences are statistically highly significant, as shown in Table XII. Keep in mind, however, that the Florida group did receive supplemental instruction from parents at home. Assuming an average of 12 more hours of instruction at home, the Florida numbers could be overstated by a factor of two. If that is the case, the numbers are more comparable, though they would still favor the Florida group.

It is difficult to understand this difference in gains per hour, precisely because their results are so at odds with other interventions and with the clinical results at The Reading Foundation. For example, Torgesen et al. (2001) report Word Identification gains in standard score points per hour from several studies (Table 10, p. 52 of that study, reproduced in part and added to here as Table XIII).

Table XIII
Gains in Standard Score Points Per Hour of Instruction (adapted from Torgesen et al., 2001)

Study / Hours                            Method            Phonemic Decoding   Word Identification

Torgesen et al.                          LiPS Method,      .41                 .20
  67.5 hours of 1:1                      EP Method

Wise et al. (1999)                       Similar to LiPS   .30                 .21
  40 hours, small group + 1:1 computer

Lovett et al. (1994)                     Unspecified       —                   .13
  35 hours, 1:2

Alexander et al. (1991)                  LiPS Program      .34                 .23
  65 hours, 1:1

Truch (1994)                             LiPS Program      —                   .21
  80 hours, 1:1

McGuinness et al. (1996)                 Phono-Graphix     2.57                1.70
  8 hours, 1:1

Rashotte et al. (in press)               Unspecified       .50                 .19
  30 hours, small group (4)

Truch (this study)                       Phono-Graphix     —                   .29
  avg. 61.7 hours, 1:1

Truch (this study)                       Phono-Graphix     —                   .84
  after 12 hours, 1:1

The importance of looking at rate gains is stated by Torgesen et al. (2001) as follows:

The consistency in rate of gain across the first five studies in (Table XIII) seems remarkable, and it suggests that the high rates of growth obtained in our study should be generalizable to other settings, with other teachers implementing the interventions…We might even suggest that these rates could serve as a benchmark for “reasonable progress” in reading for students receiving remedial instruction in both public and private settings. As such, they are clearly much higher than is typically achieved in most current special education settings.

With the exception of the McGuinness et al. (1996) data, the rates of gain on word identification range from a low of .13 points per hour to a high of .23 points per hour with an average gain of .199 points per hour. In this current study, the average points per hour for all clients was .29 (after an average of 61.76 hours) but an astounding .84 points per hour when averaged over just 12 hours. This rate is higher than in the other seven studies, but is still well below those reported by McGuinness et al. As Torgesen states:

“Only one study reported growth rate values that were clearly out of range with the others (McGuinness et al., 1996) which suggests that the findings bear replication by other investigators.”

Replication does not appear to occur in this study, at least not for word identification, despite the fact that our rate gains per hour were the highest in the group excluding the McGuinness data. However, if we factor in the “homework effect” and divide the McGuinness results in half, then the ensuing numbers in rate gains per hour from the Florida study are more comparable to the “12 hour effect” at The Reading Foundation. There is a “12 hour effect” at work with the Phono-Graphix Program, and perhaps that applies to other programs as well. The studies reported by Torgesen et al. (2001) all had a minimum of 30 hours of instruction, with an average of 52.9 hours. If you divide your final outcome by 12 rather than by 52.9, you will, simply as a matter of arithmetic, get results favoring the group with fewer hours (unless that group is in a very weak remedial program). Ultimately, the only valid comparisons from a research point of view come from situations where 12 hour data is collected on whatever program is being used. Those kinds of comparisons are only possible in fully controlled research studies (which are rarely done), not field data such as this.
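The arithmetic point is easy to demonstrate. As a hedged sketch (the figures are illustrative, not data from any of the studies cited), the same total gain produces very different “per hour” rates depending only on the divisor:

```python
# Two hypothetical interventions producing the SAME total gain of
# 12 standard-score points, measured over different instruction spans.
total_gain = 12.0

rate_12_hours = total_gain / 12.0    # program measured after only 12 hours
rate_avg_hours = total_gain / 52.9   # program measured after the 52.9-hour average

print(round(rate_12_hours, 2))   # 1.0 point per hour
print(round(rate_avg_hours, 2))  # 0.23 points per hour
```

Identical remedial impact, yet the shorter measurement window reports a rate more than four times higher, which is the artifact the text describes.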

The difficulty, then, with the McGuinness et al. data comes not only from the rate gains per hour, which seem extraordinarily high and non-replicable, but also from their claim that the Phono-Graphix Program brings gains that are “x times” higher than other programs. That claim is more an arithmetic artifact than a reality. As demonstrated in Tables III to X in particular in this study, there is a “12 hour effect” that comes from the Phono-Graphix Program. But first, the “12 hour effect” may be possible from other programs that have simply never been measured in that fashion; secondly, and more importantly, “12 hours does not a reader make.”

Discussion of Part II Results

When it became apparent that the 12-hour gains reported in the Florida group were not being replicated in the clinical setting for word identification and code knowledge, a number of self-examining questions arose immediately. The first was that The Reading Foundation clinicians were perhaps not implementing the Phono-Graphix Program properly. However, that did not turn out to be the case. Any differences in delivery of actual techniques were trivial and had no effect on the outcomes. Clinicians at The Reading Foundation have always been very diligent about following procedures and they were followed diligently for the Phono-Graphix Program. When Carmen McGuinness came to Calgary to observe clinicians, any feedback she had was immediately incorporated into the Reading Foundation’s program. However, major procedural differences were not apparent.

A second and more likely possibility has to do with program emphasis. If you are restricted to 12 hours of clinical work because that is your program, then there is only so much you can do in 12 hours. You could, and should, spend time on phonological activities. In general, The Reading Foundation found that changing those phonological variables was actually quite easy in a 12-hour time period. Next, if you have just 12 hours, you could spend more time on introducing the various code options and more time on single word identification tasks. That is likely what happened in Florida. Students were likely introduced to more code options over 12 hours than would have been the case at The Reading Foundation, where more time was spent in applying the code options to various activities. Thus, the pace of introducing the code would have been a little slower at The Reading Foundation. After 80 hours, however, the code knowledge scores of students were about the same as the Florida group’s. You could also spend time, in 12 hours, on word identification activities. This pacing difference between Florida and The Reading Foundation might account for the greater gains on those variables from the Florida group. However, in 12 hours, you would have very little total time for spelling or reading fluency, areas not reported on at all by the Florida group. Certainly, after 12 hours, you do not have a “reader” or a “speller.” There is simply no getting around the fact that remediation takes time and there are no quick fixes.

Thus, any statements about gains being “x times greater” from the Phono-Graphix Program compared to other programs are quite misleading, since the kind of controlled study needed to truly test it against other programs has never been done. The results from this study do support some quick and positive changes that take effect after just 12 hours. However, 12 hours hardly represents an entire remedial program, and it is quite possible that outcomes after longer periods from different programs are about the same. That possibility was addressed in Part III of this paper, where it was possible to compare the outcomes of 80 hours of Phono-Graphix to 80 hours of the LiPS Program in the same clinical setting. Again, this is not the ideal research model, since the results after 80 hours do not occur in the same time frame, nor are students randomly assigned to different interventions. Nevertheless, the results do provide some useful information.

Part III

Comparing LiPS Program Outcomes to Phono-Graphix Program Outcomes

In 1994, pre and post-test data on six variables and 281 students who had completed 80 hours of intensive remediation using the LiPS Program at The Reading Foundation was presented in Annals of Dyslexia. The raw data for this analysis was saved in two files. One file (N=235) contained information on students who were less severe in their presenting difficulties with reading and spelling. The other file (N=46) contained data on students who were much more severe and were in the LiPS Program for an extended period of time, much longer than 80 hours. The information from both files was collapsed and analyzed for all cases (N=281) after 80 hours in the 1994 study.

For this section, we present an analysis of covariance first comparing the 235 students who had used the LiPS Program to the 203 clients who were taught using the Phono-Graphix Program. Both groups completed an average of 80 hours of remediation. The results appear in Table XIV. For the 235 students who completed the LiPS Program, the data was collected between 1990 and 1993. For the Phono-Graphix students, the data was collected between 1998 and 2000.

A first analysis of the data showed that the WRAT Reading and Spelling scores were significantly lower in the 1994 group. This may be due to genuine differences between the two groups over the years or, more likely, due to test differences, including re-norming between the WRAT-R, used in the 1994 data and the WRAT-III, used for this data. As a result, the pre and post-test scores were subjected to an analysis of covariance with the pre-test scores on these two variables covaried along with vocabulary and age.

Table XIV

Adjusted Means

Variable          Pre      S.E.    Post     S.E.

LAC
LiPS              66.33    1.26    93.20    .741
PG                63.99    1.38    90.61    .81
                  F(1,427) = 5.42, MSE = 683.90, p = 0.02

Word Attack
LiPS              3.15     .12     7.73     .23
PG                3.34     .13     8.10     .25
                  F(1,427) = 1.17, MSE = 14.26, p = 0.28

WRAT Reading
LiPS              78.77    .76     97.17    .77
PG                83.51    .82     97.37    .85
                  F(1,427) = 1.05, MSE = 145.29, p = 0.31

WRAT Spelling
LiPS              78.58    .76     89.41    .84
PG                82.96    .82     94.67    .92
                  F(1,427) = 17.45, MSE = 2821.14, p < 0.001

GORT
LiPS              3.62     .14     4.74     .16
PG                3.95     .15     4.99     .18
                  F(1,428) = 1.07, MSE = 6.36, p = 0.300

Discussion of Part III

Table XIV shows that significant differences on the LAC Test favor the LiPS group while differences in spelling are significantly stronger for the Phono-Graphix group. (F-values are for post-scores only).

Both of these outcomes make sense. In the LiPS Program, a great deal of time is spent in “tracking” sounds using colored wooden blocks. None of that occurs in the Phono-Graphix Program. Therefore, students who take the LiPS Program might be expected to make stronger gains on the LAC since they are getting a great deal of “test practice” during the course of their intervention on items very similar to the actual LAC Test.

The difference in spelling scores might be attributable to the difference in the way the two programs treat the alphabet code. As mentioned earlier, the Phono-Graphix Program is very strong on this point and the simplicity of showing students the manner in which sounds can be represented by different options (“sound pictures”) makes the task of learning “code options” about as simple as it gets for the English language. The LiPS Program is very good on this point as well, but the “code options” are not as well organized and there are many “expectancies” in the program, which are just a subtle version of “phonics rules.”

It should be noted however, that the spelling portion of the Phono-Graphix Program was modified for the clinical version that students received at The Reading Foundation. Spelling activities were included each clinical hour and the cumulative effect was obviously positive. These positive spelling outcomes will not likely be present if practitioners do not incorporate sufficient time for spelling activities when they are using the Phono-Graphix Program.

A second analysis was carried out with all 281 students from the original data compared to the 202 students who had completed the Phono-Graphix Program. An analysis of covariance showed that the spelling gains for the Phono-Graphix group remained significantly stronger (p = 0.027), while the advantage for the LiPS Program on the LAC Test was no longer significant (p = 0.065).

However, outcomes on the spelling variable may be stronger now in the LiPS Program with the addition of symbol imagery activities (Bell, 1997). This is a more recent addition and could have a bearing on spelling outcomes for students in the LiPS Program. Unfortunately, no data has been presented to date on this issue.

OVERALL DISCUSSION

Discussion of the Phono-Graphix Program

It is readily apparent from the data presented that the Phono-Graphix Program is a good program that delivers results in a clinical setting. Significant gains on all measured variables were present after an “intensive immersion” of about 80 hours for the students. There was also a very strong “12 hour effect” for students on all variables. In just 12 hours, students showed significant growth on all the variables where 12 hour testing occurred (fluency was not one of them). However, we were not able to replicate the outcomes from the McGuinness et al. (1996) study in terms of “gains per hour” after just 12 or even 24 hours of one-on-one remediation. Certainly, it is possible to bring about some important changes in a short period of time, and these are reflected in the scores on the phonological processing variables used in the Phono-Graphix Program. Gains in word attack and word identification skills were also found after 12 hours, but at nowhere near the gains per hour rates presented in the McGuinness study. This may in part be due to the differences in delivery between the intensive remedial approach used at The Reading Foundation and the “once a week” approach used at the Florida clinic, plus the additional instruction in the form of homework. However, the main difference is largely arithmetic in nature. If you divide your results by 12 (see Table XIII), your rate gains per hour are much higher than if you divide by a larger number. Since students who have completed 12 hours of a remedial program are hardly “remediated” at that point, it is a gross fallacy to compare those 12 hour results to other programs that take longer. The Phono-Graphix Program also needs to be much longer than 12 hours in order to be more complete and to do justice to all the things that need to be done in a remedial program.

Therefore, to claim the Phono-Graphix Program will bring results that are seven or 50 times faster than other programs is clearly not supported and indeed misleading. The results presented in the McGuinness et al. (1996) study using the Phono-Graphix Program provided evidence that some encouraging changes can occur in a short period of time using a good program. This is supported by our data and our clinical experience. For example, we found that it is indeed quite easy to bring about changes on the phonological variables in a very short period of time. Twelve or 24 hours of remedial work (of which about 4-8 hours would be directly related to phonological activities) can bring students excellent gains in areas like segmenting, blending and phoneme manipulation. However, improving a student’s phonological skills, and even their word attack and word identification skills, is still a long way from turning them into good readers. Clinicians throughout North America have worked for decades with very severely disabled readers using strong programs. Twelve hours hardly begins to describe the kind of time many of those students require in a strong remedial program.

Comparing LiPS to PG

The results from this study clearly show strong outcomes with the Phono-Graphix Program, comparable to what was previously attained with the LiPS Program. In fact, better outcomes for spelling are evident when using Phono-Graphix compared to LiPS. This makes the Phono-Graphix Program a good choice for remedial specialists. For remedial teachers, another advantage of the Phono-Graphix Program is that it is easier to learn than the LiPS Program and generally speaking, from our clinical experience, it is also easier on the students since they are more quickly introduced to the process of contextual reading. Often, students who do the LiPS Program struggle with the abstract nature of the “tracking” task in the program and do not see its relevance to reading and spelling for quite some time.

The Phono-Graphix Program is a good program with many interesting features. The program is generally easy to teach and students do like it. It includes some of the important elements needed in any good remedial reading program including explicit phonological development activities and explicit teaching of letter and sound connections. The latter in particular is a real strength in the program and of great benefit to students who are confused by traditional letter/sound approaches.

The use of parents to provide supplemental instruction is probably a two-edged sword. Certainly, some parents are able to work well with their children and the ability to assist them at home is of benefit in terms of cost in particular. However, our clinic stays away from that model since the vast majority of parents who come to us are either unable or unwilling to assist their students at home. The emotional interplay between parent and child is particularly troublesome when a student has a reading disability. In most such cases, expert treatment is needed and that expertise does not come from the parent.

Remedial teachers who are trained in the LiPS Program and who use the “clinical immersion” model are using a strong remedial combination. Clinicians who use the Phono-Graphix Program and the “immersion model” will also get excellent outcomes with their students; results that should in fact produce better outcomes for spelling than the LiPS Program, provided spelling activities are incorporated each hour.

Overall though, the outcomes from the two programs were very comparable. It seems that as long as a program contains the “essential ingredients” then good results can be expected.

To that end, The Reading Foundation has completed its own remedial program, called “Discover Reading.” This program contains all the essential ingredients of a good remedial reading program and more. Phonological activities, letter-sound activities, visual memory activities, word attack, word identification, spelling, context, fluency-building and writing skills are all incorporated in the program.

Submitted by Anonymous on Fri, 11/21/2003 - 12:10 AM

Permalink

The whole study did not post. The tables did not show up correctly. I’m sure you could write to Steve Truch at www.readingfoundation.com up in Canada if you want the rest. It was an attachment that I tried to post.

I found this very interesting. Plus, his program, which I got to see in San Diego looked very good. If you are looking for a complete program, do check out Discover Reading.

Michelle AZ

Submitted by des on Fri, 11/21/2003 - 4:03 AM

Permalink

IMNSHO, the whole study is, as the Brits might say, barking! You just can not compare the two at all. I don’t think it is a study in the usual sense. First of all, the populations are very very likely to be quite different. The kids who really really need LiPS are NOT kids who would likely be helped by PG. PG assumes that the kids will at least hear the sounds and hear the difference between sounds. LiPS is for kids who don’t hear the sounds. You spend hours on the labeling of sounds, how the sounds are produced, etc. all sorts of things that even most dyslexic kids don’t need. The parents who go for the LiPS have been told that their child isn’t learning any other way.
Perhaps the kids have been through various approaches without success. I have one kid doing LiPS with me. He could not hear the difference between /s/ and /th/. Now tell me how the heck he is supposed to do PG, which starts right in with reading. Of course they are not going to read much after LiPS. LiPS isn’t really a reading program so much as a very intensive phonological awareness program. I have told the parents of my one kid not to expect him to read much after we are done, but that he will be ready to start a more conventional reading program afterwards.

The kids in the PG centers are not necessarily even dyslexic— bad teaching (i.e., with whole language), dyslexia, ADD, etc. The kids doing LiPS have severe phonemic awareness problems.

This kind of study gives education studies a bad name. You need to control the population in the study. An uncontrolled study is totally meaningless.
It’s the kind of thing that makes some people think PG is total nonsense, as they seem to propagate this sort of bad “research” (Diane McGuiness in particular). The groups must be the same kind of kids. AND you must know what you are doing the research on and IF the program is really meant to work on it. You can’t say bad, bad LiPS isn’t teaching spelling. The point of it is NOT to teach spelling or reading as such but to get the kid phonologically ready.

Not meaning to kill the messenger here. This was a long involved thing to post, and it was very nice of you.

Rant mode off.

—des

Submitted by Janis on Sat, 11/22/2003 - 1:08 AM


I think the study author does say that it is not a true scientific study, just a comparison between two programs after 80 hours. But des, you are right, he’d have to compare LiPS AND Seeing Stars with PG for a fair comparison. The Lindamood Bell clinic told me that they never do LiPS alone anymore; they always combine it with Seeing Stars.

I’d still like to see Truch’s program, though. It would be nice if someone could take the best from both.

Janis

Submitted by des on Sat, 11/22/2003 - 2:39 AM


Yeah Janis, I know the author says it is not a true scientific study but makes it *look* like one, complete with pre- and post-test info, test results, the title, etc. This might make one *think* it is one. And who funded it, anyway?

If I wanted to write an informal comparison I would do what I did with the Barton/LMB comparison. Compare/contrast. The guy even came to a conclusion: PG is better, etc. You notice that my comparison doesn’t make any conclusions.

The comparison as you say isn’t a fair one. The SS and LiPS together might be a bit more fair, but I still think the population difference would be problematic. I really think that LMB tends to get the most severe kids for whom nothing else works. Heck why would you pay the fees that LMB charges if you didn’t really need something totally different? So you are getting the most severe dyslexic kids compared to a cross section of struggling readers who may not even really be LD. With all the whole language cr** around, PG could be just the ticket for such kids, and I’m sure many do end up at PG clinics. IMO, it wouldn’t be the last run for such kids.

BTW, I think you can still do LiPS solo, but your goal is not reading, but very good phonological awareness in a kid that starts at zip. I don’t really know how they take a kid with zip phonological awareness and do SS except at the most basic level (air-writing the letters, say); it would be highly useful for the kid who doesn’t know the letters.

I too think Truch’s program might be interesting to see; however, I am not, shall we say, enamored of the group that could run such a “study” and make it appear to be real research. Why do it if that wasn’t the intent?

—des

Submitted by Anonymous on Sat, 11/22/2003 - 11:45 AM


Des,

I think you have to remember how difficult it is to do a randomized controlled trial. You usually have to have a lot of money. You also have to have parents willing to have their children randomly assigned to different interventions. Certainly, parents paying for the therapy would be unlikely to be willing to do this.

Personally, I don’t think they were misleading. They said it was a quasi-experiment. And as a parent whose child has attended the PG clinic in Orlando and done some LiPS, I found their analysis interesting and consistent with our experience.

But you are right, wrong conclusions can be made when you don’t have randomized trials. I am thinking of all the Hormone Replacement Therapy studies which were retrospective and concluded that post-menopausal women should take hormones. A randomized clinical trial (which was criticized by many as subjecting women to a less desirable treatment when we KNEW the truth) concluded the opposite for most women. Turns out the healthier women were those choosing HRT to start with (much like your argument with PG and LiPS).

Beth

Submitted by Janis on Sat, 11/22/2003 - 1:59 PM


So Beth, should we take hormones or not? Lol! (Seriously, I am worried about that! So much conflicting information!)

I want to point y’all to two PowerPoint presentations which Joe Torgesen gave this month at the ASHA and IDA conventions. Both talk about the difficulty in reaching fluency even after the decoding skills are taught.

http://www.fcrr.org/science/pptpresentations.htm

Read the first two on Fluency and Intervention for Dyslexia.

Janis

Submitted by des on Sat, 11/22/2003 - 5:38 PM


Janis, you are perhaps more intelligent than the average reader of such stuff. I recognize that it is VERY difficult to do a randomized real study, so my answer to that is DON’T DO ONE, or rather, don’t pretend to do one. My problemo with it is not that it is a comparison; that part of it is fine. Be a comparison, then. The problem is that they intended it to look like more than it was. Perhaps it was ignorance. I mean, hopefully it was. Educational research is notoriously bad, and maybe they didn’t even know just how bad it was. I realize that they *said* this is not real research, this is quasi and all that, but then they went to a lot of trouble to make it LOOK like research. My point is, if it is not research, don’t try and give the appearance of research. Then they came to a conclusion, which was my other problem with it. If they are not really researching it, they can’t come to a conclusion. They might say, “we believe that___” but I don’t remember that language. They make a recommendation. They say in conclusion that this should be done.

My guess is that this is an inside job; that this is not some outside source taking a look at these two things. Another no-no. It might be possible to do research of a more limited nature, i.e., take a look at gains made by using PG. It’s not really research either, but it would give people real info. These are kids who are struggling readers at least, and they did learn.

I think this is important. People do make decisions based on this type of stuff. And not all of them are smart enough to take him at his word that it wasn’t real research, rather than be fooled when he made it look like research.

As a comparison it would have been more interesting to read (less gibberish with test results say). Fortunately it is not in readamerica’s list of research studies. But it might be in Truch’s.

And I agree that he isn’t alone in this. Scientists have definitely had to backpedal on the hormone issue!

—des

Submitted by des on Sat, 11/22/2003 - 5:49 PM


Oh sorry, that was Beth who said it was not misleading. Ok, but I’d still say the average reader of this forum is pretty educated.

However…
You can’t really say, though, that a kid who had LiPS for some period and then went into PG was from the same population as some group of kids that are in the PG clinics with no list of their background. If your child really did need LiPS, then he could NOT have gone through the PG program and just done fine. He is no doubt doing better and able to hear the sounds, as that is pretty much what it teaches. Then he can go on to PG or OG or SS or whatever.

I know that LiPS does at the end go into some reading and spelling, but I think this is why LMB does not keep a kid doing LiPS; it really doesn’t teach those and wasn’t really designed to. It is sort of phonological awareness on steroids. :-)

So if someone is going to do bad quasi research on two reading programs, at least they should do bad quasi research on two programs that really ARE reading programs. :-)

—des

Submitted by Janis on Sat, 11/22/2003 - 11:23 PM


des,

I’m not sure what you meant about an inside job. But Truch runs a reading clinic up in Canada. For several years they used LiPS and then they changed to use PG. One could assume kids coming into his clinic would have a whole range of severity of PA issues in both the LiPS group and the PG group. Both programs go through the sound-symbol alphabetic code, with LiPS having a heavier emphasis on PA with no letters while PG begins with letters.

It couldn’t be an inside job because he says his results with PG did not replicate all the claims by Read America in the 1996 PG article in the Annals of Dyslexia. But in his own clinic comparison of LiPS and PG post-testing, the gains were very similar. I thought it was very interesting to read that articulatory feedback has not been proven to really help. I was glad to hear that because I did not want to teach that part!

Basically, at the Read America clinic and Truch’s clinic, people were just seeking help for their children’s reading problems. I think there would be no reason at all to assume that the kids going to the Read America clinic had less severe PA issues than those who went to Truch. After all, most of us don’t have all that many reading clinics to choose from!

Janis

Submitted by des on Sun, 11/23/2003 - 1:12 AM


[quote=”Janis”]des,

>. But Truch runs a reading clinic up in Canada. For several years they used LiPS and then they changed to use PG. One could assume kids coming into his clinic would have a whole range of severity of PA issues in both the LiPS group and the PG group. Both programs go through the sound-symbol alphabetic code with LiPS having a heavier emphasis on PA >with no letters while PG begins with letters.

Well, if they STOPPED using LiPS, perhaps that got around, and so the parents who wanted that for their kids began looking elsewhere? You can’t exactly rule that out, esp. if you are attempting a so-called study that isn’t even matching the populations.

And as I keep saying, I don’t think that LiPS is really a reading program per se, so to compare reading scores or spelling, well, it isn’t meaningful. Not all kids need LiPS; I think that is pretty evident. Perhaps the vast majority of even actually dyslexic kids do not need it. We don’t even know that the kids it was used with at this clinic needed it.

OTOH, to start out reading with kids who don’t hear the sounds, like my student, and expect that PG can help them, well, I know for a fact you aren’t a PG-crazed woman, and I think that it would be a disservice.
PG expects the kids to hear the difference between two sounds. Sure they may need refinement in that skill, I think we can agree with that. I think we can agree that PG will help refine it as well, but it can’t give kids the ability to hear all sounds when they can’t. There is PA in PG, but it is not enough for some kids.

>It couldn’t be an inside job because he says his results with PG did not replicate all the claims by Read America in the 1996 PG article in the

Ok I’ll buy that much. He does say that.

>Annals of Dyslexia. But in his own clinic comparison of LiPS and PG post-testing, the gains were very similar. I thought it was very interesting to >read that articulatory feedback has not been proven to really help. I was glad to hear that because I did not want to teach that past!

Well at least in the kids they had.

>Basically, at the Read America clinic and Truch’s clinic, people were just seeking help for their children’s reading problems. I think there would be no reason at all to assume that the kids going to the Read America clinic had less severe PA issues than those who went to Truch. After all, most of >us don’t have all that many reading clinics to choose from!

I don’t think there is anything wrong with helping kids with their reading problems. What I did not agree with was the claim (which, since it wasn’t really research, it can’t support) that PG worked better than LiPS. The problem with it is that LiPS does something different. It may be that some places have few clinics to choose from. That is certainly true here, but I have heard of people just dropping everything to go to LMB for intensive work.
I have also heard of people seeking out LiPS over something else because the something else was not effective.

Since we don’t KNOW the population involved we can’t really assume ANYTHING though. This was, as everyone insisted, NOT a study in the real sense of the word. It might have been interesting. But the usefulness of it is questionable since the groups were not matched or anything of that sort.

And from my own limited experience, I don’t know how you would teach PG to a kid who can’t hear that /th/ and /s/, say, are different.

Gee, I would guess for the vast majority of even dyslexic kids the whole LiPS thing would be massive overkill and boring as all get out. But there ARE kids that need it.

BTW, guys, I’m not knocking PG. What I am doing is criticizing a study that isn’t really a study but looks like one; that makes inferences and recommendations it should not be making. I think a neat little comparison of PG, Truch’s own method and LiPS would have done really nicely. I think to point out that obviously LiPS is time-consuming and not needed in all cases would be very appropriate. I know they don’t get on this board very often, but I have seen elsewhere where PG didn’t work. This isn’t an indictment, but just a statement that kids do learn differently, and that this may not be enough for some kids.

BTW, I am glad to see that we disagree about SOMETHING. Gosh, it kept getting awfully boring. “Gee, Janis’ post was right on target.” “Hey, des was just right on that.” You’d think we were the same person signing in on different logins. Now everyone will know for sure that we are not the same person. Hey, everyone out there, lookie, we aren’t the same person.
:-)

>Janis[/quote]

—des

Submitted by Sue on Sun, 11/23/2003 - 2:49 PM


I think LiPS is definitely a reading program. It teaches decoding, just as PG does, and takes it further into decoding longer words than PG. They are different enough, though, that I’d want longitudinal data. PG is a self-professed quicker fix and LiPS is far more thorough at building skills from the ground up.

I agree. It is very important to realize that the populations using the different programs aren’t tightly controlled. Many of the students who get such instantaneously magnificent results are kiddoes who simply needed to be *taught* the sound-symbol connections and to think with their eyes and ears at the same time. Those kids are different from ones who have more severe processing difficulties.
I did appreciate the addressing of the hype and overstatement of PG’s efficiency, and the stressing that the clients in this study weren’t having Mom sit with them and do Reading Reflex — they were getting a much more intensive, involved version. Now, what I would love to know is, how strictly did the PG tutors adhere to PG, or did they occasionally use their expertise and do something from another bag of tricks? The PG training stresses *not* tainting what you do (and this is generally the reason cited for any student’s lack of progress), but the good reading teachers I know aren’t afraid to defy this. However, this difference in training policies is important IMO.

Submitted by Sue on Sun, 11/23/2003 - 2:56 PM


Janis — per “changed to use PG” — I didn’t read the study with really close attention (perhaps I will in the pre-Thanksgiving quiet of the tutoring center), but that sounded like they dropped LiPS in favor of it… when it looked more like they simply added to the repertoire.
Actually, I can just imagine the personalities behind this study — passionate folks wanting to see/prove that what they do works. So while it doesn’t look like an “inside job,” I reckon some of the insiders were less than objective. You can’t do a double-blind with this stuff, either ;)

Submitted by Kendra on Sun, 11/23/2003 - 8:47 PM


What this all helps me see is that intensive time is a big factor in substantial gains. It helps me justify pull-out at 4 days per week for intensive reading instruction.

I am struggling lately with full inclusion - and I’m for it in certain instances - but not for intensive phonological training when the rest of the class is reading chapter books.

Thanks!!

Submitted by Sue on Sun, 11/23/2003 - 9:15 PM


YES YES YES

There’s an awful lot of research indicating that frequency and intensity are critically important factors.

The best program in the universe done once a week is going to have limited benefit. (I believe this is emphasized in the _Catch Them Before They Fall_ article.)

Submitted by Janis on Sun, 11/23/2003 - 9:24 PM


des,

Lol! Yes, those that think we are the same person can now have it confirmed that we aren’t! However, we do not disagree on the main facts: either PG or LiPS may not be THE answer for EVERY child. All we disagree on is how this article might be perceived. :) I just see it as one clinic’s report on 200 kids who had 80 hours of LiPS and 200 kids who had 80 hours of PG plus a little extra spelling work. Both teach some degree of PA (more in LiPS) and decoding (all 44 sounds).

My tendency is really to agree with Kendra, though. The article very much confirms that very intensive intervention is required to make significant progress. 80 hours of one-on-one is a lot…and I see that as being realistic with the children I teach. Fluency will take longer than that.

Janis

Submitted by des on Sun, 11/23/2003 - 11:29 PM


Yeah, Janis, I suppose we still agree on a lot of things, but this just puts to rest any theories that we might be the same person under different logins.
I think I need to clarify things. I said that LiPS is not a reading program. I think I overstated this. I think it is not *primarily* a reading program, and this is why LMB gets the kids into SS asap. The manual does spend some degree of time on spelling expectancies and reading longer words; otoh, something like half the manual is on just hearing, feeling, seeing, and tracking the sounds. Much of the rest of the book consists of lists, questions, and appendices. I think you could conceivably use LiPS as a total reading program; however, I wonder how often it is really used that way.

The title of the program is LiPS: Phoneme Sequencing Program for Reading, Spelling and Speech. I think it could be argued it is as much a speech therapy book as a reading one.

I think the comment on intensive intervention is an excellent one!!

—des

Submitted by Janis on Mon, 11/24/2003 - 1:47 AM


I honestly think once a child is comfortable with the oral/auditory PA exercises in LiPS that one can then move on into PG or Barton. That’s what an APD specialist I know does (so it’s not my original idea!).

Janis

Submitted by des on Mon, 11/24/2003 - 5:11 AM


Yes, actually that was Susan Barton’s recommendation. She said to get him through tracking and then go on to Barton (or presumably you could go on to something else). My own feeling from my vast experience with LiPS (NOT!!) is that the program is very powerful in terms of getting the kid to hear the differences in sounds. Before I started with him, he couldn’t hear the difference between many of the sounds. Some of them were not all that close imo, but to a kid with a severe auditory processing disorder they might be. I am pretty impressed with my student’s new abilities; however, I wouldn’t want to stay with it beyond tracking.

I have thrown in several of the Barton “tricks”, as she calls them, i.e., telling the difference visually between “p” and “b”.

—des

Submitted by Anonymous on Mon, 11/24/2003 - 5:41 AM


pigs dig deep in the mud and balls fly up in the air?

Submitted by des on Mon, 11/24/2003 - 6:26 AM


Close. It’s now balloons go up and pigs go down in the mud. It’s done with the left hand. The hand with palm facing towards you looks like a “b” and the other way a “p”.

—des

Submitted by Anonymous on Fri, 11/28/2003 - 8:34 PM


Oh, no, oh, no, oh no.
This quick trick is a nightmare to those of us with *real* directionality problems, as are most such quick tricks. You say the trick is done with the left hand — look, if I knew which hand was my left, I wouldn’t be *having* this directionality issue. Choosing the left hand in the first place is the *problem*, not the solution, for those of us with a deep problem. And then remembering if it’s supposed to face you or face away, and why… I am one of those who has no native directionality *at all*, and I can just as easily visualize looking at myself in a mirror as I can place myself inside my body — once coped with, this turns out to be a weird advantage in test-taking and 3-D geometry, as I have no trouble at all reflecting or rotating or whatever; my problem is keeping the visualization sitting still! But telling me to “just do this” with my hands is FAR MORE difficult than 3-D advanced calculus. For people of my sort, you have just added a quite unnecessary layer of difficulty to the learning in your attempt to make it easier from your point of view.

Quite seriously, I have another little girl (age 7-8, Grade 2) right now who has just this issue, and quite deep apparently. I’ve worked with her a couple of times a week for four or five months. We’ve gone from illegible scrawls to a quite acceptable printing and a very nice beginning on cursive, and simultaneously her reading has smoothed out and her spelling is now acceptable for her level. We’ve been doing reading and writing practices stressing directionality for two hour-long sessions a week to get to this point.
Last Thursday, I asked her to show me her left hand; she had no idea whatever. The question occurred because right-left came up in our vocabulary practice. We’ll be working on verbalizing directionality some more, but my point here is that (a) you can use directionality by automatisms before you can verbalize it or consciously manage it; (b) my suspicion is that you need at least some automatisms before you can make conscious use of it.

Submitted by Laura in CA on Fri, 11/28/2003 - 11:21 PM


Victoria,
I’m not sure if this would help your student, but my son has directionality issues and I did notice that very short daily directionality exercises with an arrow chart helped. Unfortunately, after we stopped doing them his ability declined so I need to start using them again.

Another thing: I think martial arts and dance are both helpful for directionality because students must constantly remember and work on body directionality to automaticity. I could see how these activities could also be quite frustrating for severe directionality issues, but with a patient teacher, it might really make a difference.

Submitted by des on Fri, 11/28/2003 - 11:55 PM


Even if done with the wrong hand, it would still help with “b” vs “p”, but would not help with “b” vs “d”. All you do is correct the hand, though. If the kid is right-handed, you say “use the hand you don’t write with”; otherwise you say “use the hand you write with.” It doesn’t have to be automatic — as long as the tutor is aware of what she/he is doing (even I am not too automatic, and I have to sit next to my student). Anyway, this little trick has helped my student, who is quite severe. Not sure how severe his directionality problems are.

Anecdotally, I’ve heard/read that it has helped kids; not sure how many had very serious directionality problems. I don’t think there is any research on the trick. I have heard she has used stickers on the hand (you could use a bracelet or ring, or even a colored sticker for an older kid) to help the kid remember. She also suggests the tutor can use a sticker on the kid’s shoulder for certain activities to help the tutor. I haven’t needed this as long as I am on the right side.

—des

Submitted by des on Fri, 11/28/2003 - 11:59 PM


I read Victoria’s answer really carefully, and I doubt it would work with someone that severe in directionality. I might be inclined to teach cursive as she is doing, and also some of the games using arrows and so on. Still, I think it would be worth a try, esp. using stickers, etc. (Perhaps the tutor would need one as well :-))

—des

Submitted by des on Sat, 11/29/2003 - 11:07 PM


I wanted to clarify the “b”, “p” trick that Susan Barton talks about. In no way does she describe this as a quick-fix type of thing. This is something the kid would practice each time when reading and spelling words with these letters. If this is a kid that reverses most of the time, then you would practice it every session and at every opportunity those letters come up. Still, I would guess it would not work with a kid with very severe directionality issues.

—des
