Measuring the difference.

We have helped thousands of people from all over the world to combat their stutter over the years. However, the programme doesn't work for everyone, and we want to be as transparent as we can, publishing annual statistics measuring our success.

Twice a year, as part of our ongoing quality control, Primary Coaches submit reports on all their students. Regional Directors (RDs) compile these and send them to Dave McGuire. Once a year, Dave assigns Evaluation Auditors (EAs) to verify (audit) the reports received in March by calling the majority of members rated successful by RDs. Any discrepancy is resolved by another experienced Auditor. Since the 2003 evaluations, we have tightened the criteria so that those whom we could not contact (n/c's) are, with a few exceptions, rated as unsuccessful.

Our own success criteria are set very high: success rates based on McGP standards (with each member rated by at least two people) are consistently lower than the perceived success reported in members' own self-assessments. Click here to view a description of our success criteria. It should be noted that other intensive programmes for those who stutter measure success by self-assessment and factor in only those who respond; some measure success by assessing in the clinic and/or on the last day of an intensive course. We consider these approaches flawed and inaccurate for the following reasons:

  • Self-assessment surveys give useful information, but do not give a reasonably accurate indication of success. In our experience, someone using avoidance mechanisms and blocking will eventually regress to being unacceptably out of control even while reporting themselves as 'satisfied', 'successful', 'improved quality of life', etc. Therefore, we only count the assessment of a second or third evaluator/auditor (who can detect whether genuine blocking and/or avoidance mechanisms are being used) in our official success rate.
  • Basing a success rate only on the survey forms returned does not give an accurate indication of success. For example, if a programme has a total of 30 clients in, say, a 3-month period, and only 15 bother to return the survey form, the evaluator does not know how the non-responders (half the sample) are doing. In our experience, with very few exceptions, those who do not return calls for evaluation purposes have gone back to old avoidance behaviours, including being afraid to talk to an evaluator, and to out-of-control stuttering. Therefore our evaluation base is the TOTAL number of members who joined our programme during the target year. Those not responding are recorded and counted as 'not successful' unless there is a very good reason (such as travelling) and they were rated successful by their Primary Coach and Regional Director in the original, pre-audit report. The sketch after this list illustrates how the choice of denominator changes the reported rate.
  • Evaluations done during or right after an intensive course are not a good indication of success. The clinic (or, with McGP, the conference room) quickly becomes a comfort zone, and there is the dynamic of a 'honeymoon' period where fear is very low and confidence is high. Therefore, our evaluation audits are done anywhere from 9 to 18 months after a member's first course, which gives us more of an indication of 'long-term' success in the real world. If we evaluated immediately after first courses, our success rates would be close to 100%.
  • Any success evaluation should have some kind of 'check and balance' verification procedure to ensure accuracy. Our Evaluation Auditors live, with very few exceptions (for example, when fluency in the language of the region is required), outside the region they audit, which decreases the chance of undue influence from the Regional Director being audited.
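
To make the denominator point concrete, here is a minimal sketch in Python using the hypothetical 30-client example from the second bullet above; the split of 12 self-reported successes among the 15 responders is our own illustrative assumption, not McGP data.

    def success_rate(successes, denominator):
        """Success rate as a percentage of the chosen evaluation base."""
        return 100.0 * successes / denominator

    # Hypothetical cohort: 30 clients, 15 return the survey form,
    # and suppose 12 of those 15 report themselves as successful.
    total_clients = 30
    responders = 15
    self_reported_successes = 12

    # Responders-only denominator: the 15 silent clients simply vanish.
    print(success_rate(self_reported_successes, responders))     # 80.0

    # Total-cohort denominator (the McGP evaluation base): non-responders
    # are recorded and counted as not successful.
    print(success_rate(self_reported_successes, total_clients))  # 40.0

The same raw data yields 80% or 40% depending purely on which evaluation base is used, which is why the choice of denominator matters so much.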

However, to be able to compare ourselves to other programmes for those who stutter, starting with the Evaluation Audits from 2014 we now also publish, alongside our official rate (which counts those we were not able to contact as unsuccessful), statistics based on the following (see the sketch after this list):

  • Audited success rates excluding the non-contactables (i.e. counting only those we were able to contact), similar to the evaluation procedures of other programmes.
  • Self-assessed improvement in Quality of Life (QOL). This is based on comments from Primary Coaches, Regional Directors, and/or Auditors and a positive answer to the question: 'Do you see yourself as successful?'
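
As a rough illustration of how these additional statistics relate to our official rate, the Python sketch below walks through one hypothetical audit year. Every figure (100 joiners, 80 contacted, and so on) is invented for the example, and the assumption that the QOL question covers only contacted members is ours.

    # Hypothetical audit of one target year; every figure below is invented
    # purely to show how the three published statistics relate.
    joined = 100            # evaluation base: all members who joined that year
    contacted = 80          # members the auditors were able to reach
    audited_successes = 48  # contacted members rated successful by an auditor
    qol_yes = 64            # contacted members answering 'yes' to
                            # 'Do you see yourself as successful?'

    # Official McGP rate: non-contactables count as unsuccessful,
    # so the denominator is everyone who joined that year.
    official_rate = 100.0 * audited_successes / joined           # 48.0

    # Comparison rate (published from 2014): exclude the non-contactables,
    # as other programmes' evaluation procedures do.
    contacted_only_rate = 100.0 * audited_successes / contacted  # 60.0

    # Self-assessed QOL improvement among those contacted.
    qol_rate = 100.0 * qol_yes / contacted                       # 80.0

    print(official_rate, contacted_only_rate, qol_rate)

Note that the contacted-only rate is necessarily at least as high as the official rate (same numerator, smaller or equal denominator), and the self-assessed rate is typically higher still, consistent with the gap between our standards and self-assessments described above.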

Breakdown by Year: