Even though CAHPS surveys are effective at measuring and addressing patient satisfaction, they fall short in a few capacities. Surveys rely on patient self-reports, so answers may be incomplete or inaccurate because of a lack of knowledge or an imperfect memory of the experience. Reports are usually intended for the entire population rather than tailored to the needs of specific patients.
The failure to tailor reports to well-defined audiences could result in failure to communicate information clearly or even to engage the audience's interest. Examples include a lack of familiarity with CAHPS surveys, limited health literacy, language barriers, misunderstanding or misinterpreting questions, unclear healthcare terminology, poor cognitive status, survey fatigue, and untruthful answers.
Does the lack of substantial differences in CAHPS data from year to year make the reporting of annual CAHPS rates unhelpful? Scores are refreshed every quarter, weighted according to the patient population, and the publicly reported score is the average of the last four quarters' results. A Star Rating can change from one public reporting period to the next even if the agency's own score doesn't change, because each star rating is calculated relative to the scores of all other agencies that release ratings. Clear as mud?
“… an agency that achieved an overall rating score of 94% in one public reporting period and was assigned to the 5-star category could find itself assigned to the 4-star category with that same score the following public reporting period, if the distribution of agencies overall resulted in more agencies with higher overall rating scores.”
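The mechanics described above can be sketched in a few lines of code. This is a hypothetical illustration only: the percentile cutoffs, scores, and star-assignment rule below are assumptions for demonstration, not CMS's actual methodology.

```python
# Hedged sketch: how a relative star assignment can shift even when an
# agency's own score stays flat. All cutoffs and scores are illustrative.

def reported_score(quarterly_scores):
    """Publicly reported score: average of the last four quarters' results."""
    return sum(quarterly_scores[-4:]) / 4

def star_category(score, all_scores):
    """Assign stars by percentile rank against all reporting agencies.
    Cutoffs here (top 10% = 5 stars, etc.) are assumed, not CMS's."""
    rank = sum(1 for s in all_scores if s <= score) / len(all_scores)
    if rank >= 0.90:
        return 5
    if rank >= 0.70:
        return 4
    if rank >= 0.30:
        return 3
    if rank >= 0.10:
        return 2
    return 1

agency = reported_score([93, 94, 95, 94])  # 94.0 in both periods

# Period 1: the agency is at the top of the field -> 5 stars.
period_1 = [85, 86, 87, 88, 89, 90, 91, 92, 93, agency]
# Period 2: same 94.0, but more agencies now score higher -> 4 stars.
period_2 = [88, 89, 90, 91, 92, 93, agency, 95, 96, 97]

print(star_category(agency, period_1))  # 5
print(star_category(agency, period_2))  # 4
```

The same 94% overall score lands in a different star category purely because the distribution of other agencies' scores shifted, which is the scenario the quote above describes.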
Concerns with Fairness and Effectiveness
Two primary concerns about the fairness and effectiveness of CAHPS surveys are that it may not be fair to compare CAHPS scores across different healthcare providers or health plans, and that scores can reflect factors beyond providers' control. For example, the age or education level of a facility's patients could affect how they respond to the surveys. Evidence shows, however, that survey length has very little effect on response rates; we are learning that the design and wording of the questions matter more than the number of questions.
Factors like patients' age or health status can also affect CAHPS scores. However, those differences can be accounted for by a technique called case-mix adjustment, which makes it possible to estimate how healthcare providers would score if they all served patients with similar characteristics. Case-mix adjustments aim to level the playing field. They also reduce the incentive for providers to avoid taking patients who they think will report poor experiences because of factors outside the providers' control.
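A minimal sketch of the idea behind case-mix adjustment follows. The coefficient, patient-mix shares, and scores are all assumed values for illustration; actual CAHPS case-mix models use regression coefficients estimated from national data across many patient characteristics.

```python
# Hedged sketch of case-mix adjustment: estimate what each provider's score
# would be if every provider served the overall average patient mix.
# The coefficient and all data below are assumptions, not CMS's model.

def case_mix_adjust(raw_score, provider_mix, overall_mix, coefs):
    """Adjusted score = raw score + sum over each characteristic of
    coefficient * (overall share - this provider's share)."""
    adjustment = sum(
        coefs[k] * (overall_mix[k] - provider_mix[k]) for k in coefs
    )
    return raw_score + adjustment

# One illustrative characteristic: share of patients age 75+, assumed to
# rate experiences about 5 points higher per unit of share.
coefs = {"age_75_plus": 5.0}
overall = {"age_75_plus": 0.30}

# Provider A serves an older panel; Provider B a younger one.
a = case_mix_adjust(90.0, {"age_75_plus": 0.50}, overall, coefs)
b = case_mix_adjust(88.0, {"age_75_plus": 0.10}, overall, coefs)
print(a, b)  # 89.0 89.0 -- the raw 2-point gap was entirely patient mix
```

After adjustment, the two providers' scores are equal: the raw difference reflected who they served, not how well they performed, which is exactly what "leveling the playing field" means here.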
How do you get a survey to a patient?
Do you hand surveys to patients in the office and ask them to fill one out and return it before leaving? Do you mail surveys to patients at home? Do you ask questions over the phone? If you have a practice website, do you direct patients to log on and fill out a survey posted online?
Practices that conduct CAHPS surveys of their Medicare patients by phone have their scores adjusted downward by CMS. The adjustments are made on the theory that the higher acceptance and completion rates of phone surveys would otherwise inflate those practices' scores relative to practices that survey their Medicare patients by mail.
Among practices that have a website, web-based patient surveys have been growing. However, the internet and email are not approved CAHPS survey methodologies.
“We think it’s the right technology for the future, and we have a very robust measurement platform that allows for non-government surveys to be administered in that format. The challenge is that CMS and the Agency for Healthcare Research and Quality have provided rules as to what modes we are allowed to use for administration of the CAHPS survey.”