Thank you for this!
You have explained my situation very well. I actually have no problem doing this myself but I know I am not the norm.
I am concerned for the kids whose moms don’t know how to teach themselves to be a case manager.
Re: an article about the perils of individual WISC subtest analysis
I think that makes perfect sense, but how do parents determine what the next step is and which diagnostic tool is best?
Schools like to say that the IQ is all that is needed and there is nothing else!!
Thanks
K.
Re: an article about the perils of individual WISC subtest analysis
I agree with all that you have said. Why are subtest scores not explained? In our case we could not even get a diagnosis from the school’s evaluation, so I spent three years reading, asking, and guessing at what I should do next, because the current situation is not working.
Do they not know what these scores mean, or is it, again, all about accountability: the if-they-find-it-they-have-to-fix-it fear?
Re: an article about the perils of individual WISC subtest analysis
I think it depends on the individual psychologist and possibly the school district. I know that at our meetings our psychologist does an excellent job of explaining the subtests and how they relate to academics. Our school psychologist also gives out a copy of the report at the meeting, and all parents are welcome to make an appointment with the school psychologist to discuss it.
Marilyn
Why are subtests not explained?
One theory: not that many people have a knack for qualitative analysis of quantitative data. There are schools of public policy filled with people who can do qualitative analysis of qualitative facts. There are quant types who can do quantitative analysis of quantitative data. But a cross between the two is actually pretty rare, or at least I have found it so in my professional life, where this skill is valued. You need someone who can make a story out of numbers and who can spot where the story has important gaps that need to be filled. Ideally, that person will then seek out more information to fill the gap so that a plausible story can emerge.
The fact of the matter is that most evaluators don’t really have this knack (or sometimes have it but lack the inner passion to keep going until they have a story that makes sense). What is the point of telling you that one score or another is low? You can see that yourself. If the evaluator pointed it out, your obvious next questions would be: what does that mean, why is it low, and does it shed any light on the direction we should take in helping this child? And if they haven’t really done their work, they won’t be able to answer those questions. A good evaluator, as that article points out, would have formed a hypothesis (another name for a story that explains the scores) and tested it, perhaps with another diagnostic tool. But most of the time, this is not going on.
For example, our very expensive private evaluator gave a bunch of tests in addition to the WISC and achievement tests. I can only conclude that he gave the same tests to everyone. No scores were given to me, let alone explained, when we met with him. (He devoted the whole meeting to social issues like self-esteem.) I would not even have received the scores until a month after the close-out meeting, when I got the final report. (On a friend’s advice, I had them faxed to me after the meeting.) The report itself was clearly cut and pasted from bits he had on his computer; for example, it stated that it was “essential” that my child (then 7) learn to type, despite the fact that he has absolutely no dysgraphia or fine motor problems and (at the time) had beautiful handwriting.
Lesson: with any evaluation, make sure you get the test scores well in advance of the meeting with the evaluator. Then dig and dig to make sense of them (perhaps using the resources on this board) so the meeting can be devoted to making the evaluator really do his work and give thoughtful answers and strategies for going forward. Keep pressing until he can give a plausible story that makes sense to you. (And without a retreat into “I can see where you are confused, but understanding all of this requires a deep knowledge of statistics: standard deviations, normal distributions, thin tails, fat tails, blah, blah, blah, which you couldn’t possibly appreciate because you do not have a PhD in psychology, so trust me because I do.” There may be rocket science in constructing the tests, but understanding the practical implications isn’t rocket science; it’s much more qualitative than that. And if the evaluators were actually rocket scientists themselves, why aren’t they in the business of validating and norming the tests instead of administering them? Or, even more lucrative, why aren’t they employed in financial engineering?)
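If it helps to demystify the jargon, here is a minimal Python sketch of the arithmetic behind those standard scores. It assumes only the published scaled-score conventions (WISC composites are reported on a scale with mean 100 and SD 15, subtests on a scale with mean 10 and SD 3); the function name and the sample scores are mine, purely for illustration:

```python
# Converting a standard score to a percentile via the normal curve.
from math import erf, sqrt

def percentile(score: float, mean: float, sd: float) -> float:
    """Percent of the norming population expected to score at or below `score`."""
    z = (score - mean) / sd                    # distance from the mean, in SD units
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))  # normal cumulative distribution

# A composite of 85 is one SD below the mean: roughly the 16th percentile.
print(f"Composite 85 -> {percentile(85, 100, 15):.0f}th percentile")
# A subtest scaled score of 7 is likewise one SD below the mean.
print(f"Subtest 7    -> {percentile(7, 10, 3):.0f}th percentile")
```

That is the whole mystery. The qualitative work of figuring out why a particular score is low is where the real effort belongs.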
There is no question that an evaluator will work harder to prove himself to an informed parent, and you’ll get better output; he may even consult with a colleague to come up with a story that will satisfy a parent who is informed enough to be intimidating.
Re: Why are subtests not explained?
A comment from a math graduate:
Psychological/educational testing is NOT rocket science.
In fact, freshman science and math students laugh at the foolishness of what goes on in educational statistics classes.
Educational statistics people are often (not always; there are bright lights out there, but often) not top students: people who worked hard to get through lower-level math classes and are excessively proud of themselves for having achieved success in math that they found tremendously difficult. Some (again, not all) want to show off that they are “better” than other people because they have all this esoteric knowledge.
In contrast, if you ever meet some real math professors, you will notice that most of them (with a few unfortunate exceptions) are rather shy and unassuming people; they know enough to know that they don’t know everything. And they spend their lives developing logical proofs, so they know both how to analyze logical arguments, and the limits of logic.
Test constructors use formulas that have been passed down, not because they are proven science, but because they are rules of thumb that produce nice patterns of test results.
All statistics by its nature involves averages of various sorts and approximations; that is the business of statistics: to take a lot of raw data and find some way to simplify it into an understandable pattern.
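To make that concrete, here is a toy Python example (the scaled scores are invented for the illustration): two children with an identical subtest average, where the simplification throws away exactly the information that matters.

```python
# The same mean can summarize two completely different profiles.
from statistics import mean, stdev

flat_profile      = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
scattered_profile = [15, 14, 15, 13, 14, 5, 4, 6, 10, 4]

for name, scores in [("flat", flat_profile), ("scattered", scattered_profile)]:
    print(f"{name:9s}: mean = {mean(scores):.1f}, sd = {stdev(scores):.1f}")
```

Both profiles average out to 10, but only the scatter (the standard deviation) hints that the second child has real peaks and valleys worth investigating.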
Test scores CAN give important information, but they are limited by time, format, the kind of questions asked, and many other things. All test scores have to be taken as one snapshot of one type of fact at one instant in time. Any half-decent statistics, testing, or education textbook will tell you that; it is a pity that some testers have forgotten chapter 1.
Some testers are good, and they will try to tell you as much as they know and explain it to you as clearly as they can.
Some are self-important and try to show off how much more they know than you. Some are just plain incompetent and don’t understand the scores on the tests that they give out.
Every half-decent psychological or educational test has documentation explaining what the test scores mean, how much variability is normal, what abilities they are trying to measure, what the average range of scores is, and so on. Any tester should have this documentation on hand, and you should be able to get copies: not of the test questions or answers, but of the test interpretation.
You said it!
That was a huge mouthful and well worth the time to read & ponder. It is also so true, I believe. I’ve not thought of it in the terms you use.
Before we had the medical tools of today, the old-time family doc had to be an expert diagnostician. Some docs had the “gift” and some had no “nose” for figuring out the source of the problem and/or the remedy. Same goes with teachers and psychologists. I’ve found a few people over the years that could, as you say, qualitatively analyze quantitative data—and vice versa. They have been the most productive problem solvers for me personally and professionally.
What is your background & experience, if you don’t mind my asking?
Also a nice answer!
We say things differently, but I think the underlying principles are very similar.
Re: Why are subtests not explained?
I disagree with the Math Major regarding statistics. These statistics are very useful when assessing a child’s cognitive and educational profile. However, they are only part of the full picture; historical and clinical data are also needed. JW
I think the bottom line of this is towards the end:
“IQ tests are typically very good instruments for generating a hypothesis about someone’s strengths and weaknesses, but they are poor diagnostic instruments for evaluating a learning disability. Particular patterns on an intelligence test may give hints to a possible weakness or disorder, but the assessment of such things is typically done with other tools.”
In other words, the subtests can suggest weaknesses which, when markedly lower than other subtests, warrant further investigation with more precise diagnostic tools. From reading posts here, and based on personal experience, it would appear that a number of evaluators don’t go on to this next step and, further, do not even point out the discrepancy to parents. Instead, all too often, they try to cover their bases by saying the child is suffering from poor self-esteem or something equally unhelpful.
This leaves parents in the awkward position of trying to figure out for themselves why Johnny can’t read, or whatever the problem is. And if they are bright and energetic, they will mull over the data they do have, the subtest scores, in an effort to ferret out what further information they need, or what further testing should be done, to pin down what specifically is wrong, so they can engage in targeted remediation instead of taking a shot in the dark by buying a heavily marketed product like Hooked on Phonics. The evaluators, of course, should be doing this work, but all too often, even when the charge is high, they take an assembly-line approach.
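For what it’s worth, here is the kind of first-pass scatter check a bright and energetic parent can run from a score sheet before the meeting. Everything in it is hypothetical: the subtest names and scores are made up, and the cutoff of 3 scaled-score points (one SD) below the child’s own average is an illustrative rule of thumb, not a clinical criterion. Anything flagged is a hypothesis to chase with more precise diagnostic tools, not a diagnosis.

```python
# First-pass check: which subtests sit well below the child's own average?
from statistics import mean

subtests = {          # hypothetical scaled scores (mean 10, SD 3)
    "Similarities": 13,
    "Vocabulary": 14,
    "Block Design": 12,
    "Digit Span": 6,
    "Coding": 5,
}

child_mean = mean(subtests.values())
# Illustrative cutoff: one SD (3 scaled-score points) below the child's mean.
flagged = {name: s for name, s in subtests.items() if child_mean - s >= 3}

print(f"Child's own subtest mean: {child_mean:.1f}")
for name, s in flagged.items():
    print(f"  {name} ({s}): well below the child's average; worth follow-up testing")
```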