You’re a nurse coming off a 12-hour shift in the medical/surgical ICU. It was a hectic night that started with a cranky co-worker asking why you were five minutes late and ended with a delirious patient calling you the devil as he tried to pull out his IV. Leaving the hospital, you notice a machine with four emoticons ranging from a very smiley green face to a very frowny red one. The machine wants to know: How was your day? You press the frowny red face, and you feel a little better.
This is a made-up scenario, though it’s not hard to imagine a staff member at Michael Garron Hospital in Toronto having had a similar experience. A year ago, the hospital decided to rent a HappyOrNot terminal, placing it next to the main employee exit in an effort to diversify how they measured employee engagement. “We saw it as something novel and as [a way of] demonstrating to staff that we are interested in listening,” says Phillip Kotanidis, director of human resources, occupational health and safety, and organizational development and wellness at MGH. Recently, the hospital renewed its lease on the terminal for another year, and shifted it to the waiting area of the diagnostic imaging department, where they have used it to ask patients: How was your visit today?
You’ve likely seen HappyOrNot stations in airport washrooms or at your local grocery store. Health care settings have also begun installing the terminals; in addition to MGH, Trillium Health Partners in Mississauga has experimented with the technology as a way to poll patients, as has Toronto’s Hannam Fertility Centre. This is perhaps unsurprising, given the attention currently being dedicated in Canada to measuring patient experience. The most familiar tools in this enterprise are questionnaires such as the Canadian Patient Experiences Survey—Inpatient Care (CPES-IC), which includes more than 40 questions and is sent to patients after they’ve been discharged from hospital—at the earliest 48 hours afterwards, and sometimes more than 60 days afterwards. “Happiness buttons” seem like the polar opposite of the CPES-IC: simple and quick for the patient to use, and offering up data in “real time.”
But what can they actually do, for patients and for the health care system?
How do happiness buttons work?
The HappyOrNot machine is essentially an elbow-high stand topped by what looks like a slightly inflated video-game console that features four buttons in a gentle arc: dark green with a definite smile; light green with a less curvy one; light red with the hint of a frown; and dark red with a scowl. Behind the console, in a frame, is a piece of paper asking a question. Staff or patients can log their answer practically without stopping as they walk down a hallway or through a parking lot. The results are crunched overnight and available through a Web portal the following morning in colour-coded bar graphs showing how many people pressed each of the buttons by the hour. Daily and monthly graphs are also generated as time accrues.
The idea is to make data collection frictionless and fast, thereby increasing the volume of feedback collected and enabling speedy identification—and resolution—of problems. The technology is akin to (and compatible with) the Net Promoter Score, a business tool developed in the early 2000s to gauge customer loyalty that asks one simple question: On a scale of 1 to 10, how likely are you to recommend this service to a friend or colleague?
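For illustration only—HappyOrNot’s actual analytics are proprietary—the overnight number-crunching described above can be sketched as a simple aggregation of time-stamped button presses into hourly counts, plus an NPS-style index (percentage of positive presses minus percentage of negative ones). The data, field names, and scoring here are hypothetical:

```python
from collections import Counter
from datetime import datetime

# Hypothetical time-stamped button presses: (timestamp, button),
# where button is one of the four faces from dark green to dark red.
presses = [
    ("2019-03-04 07:12", "dark_green"),
    ("2019-03-04 07:48", "light_red"),
    ("2019-03-04 08:05", "dark_red"),
    ("2019-03-04 08:31", "light_green"),
    ("2019-03-04 08:59", "dark_green"),
]

# Hourly counts per button: the raw material for colour-coded bar graphs.
hourly = Counter()
for ts, button in presses:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    hourly[(hour, button)] += 1

# An NPS-style index: % positive presses minus % negative presses.
positive = sum(1 for _, b in presses if b in ("dark_green", "light_green"))
negative = len(presses) - positive
score = 100 * (positive - negative) / len(presses)

print(hourly[(8, "dark_red")])  # 1
print(score)                    # 20.0
```

Because only counts and timestamps are stored, trends such as the Monday lunchtime dip mentioned below can be read straight off the hourly totals, even though no individual respondent is identifiable.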
HappyOrNot terminals are not sold outright; clients instead pay a monthly service fee for the analytics, calculated according to the number of terminals in use and the length of the subscription. The stations themselves are physically light and easy to move around.
How much information can the buttons really give?
The main critique of HappyOrNot is that it does not tell you what people are happy or unhappy about, especially when questions are as general as “Please rate the care you received today,” or “Would you recommend our emergency department?”
In response to this critique, Vic Rempel of Arvo Group, which is authorized to sell the technology, says that because the results are time-stamped, clients sometimes begin to see trends—a dip between noon and 1 p.m. each Monday, for example—which may have to do with something as concrete as a shift change. The team at Michael Garron Hospital looked for patterns like these, says Kotanidis, and for a while, seemed to find one: At one point, staff finishing the night shift were often pressing the unhappy face. But then a few months later, that same time of day became a happy period. “It’s hard to figure out the rhyme and reason to it,” says Kotanidis.
HappyOrNot has just launched a new version of the terminal in North America called “Smiley Touch,” which features the same four emoticons, but on a touch screen. If you press one of the frowny faces, a second screen pops up asking why you are unhappy and offering up to six options, such as “wait time,” “attitude,” and “customer service.” If you press one of these icons, a third screen offers the chance to type in a comment. “It’s set up so it gives a little deeper insight,” says Rempel.
Do happiness buttons command high response rates?
Not as high as you might think—at least not everywhere. At Michael Garron, which employs 2,500 staff, some 1,100 people used the terminal every two weeks when it was first installed. Today, that number is down to 200. At the Hannam Centre, which has been using HappyOrNot for about a year, some 60 to 70 percent of patients respond consistently. Rempel puts the average response rate at between 20 and 30 percent a day. Meanwhile, the average response rate for the CPES-IC survey—with its 40-plus questions—is 40 percent, according to Anthony Jonker, director of innovation and data analysis at the Ontario Hospital Association.
Are happiness buttons an alternative to surveys?
Nobody really thinks so. The surveys, with their carefully developed patient-reported experience and outcome measures (PREMs and PROMs), provide rich longitudinal data that “allow for transparency and public reporting, because of the quality of the data,” says Anna Greenberg, vice-president of health system performance at Health Quality Ontario. Jonker points to a recent collaboration between the OHA and HQO that looked at adding questions to outpatient surveys about the challenges of transitioning from hospital to primary care. “The patients we talked to, they have very specific concerns they want you to get at,” says Jonker. “They say, ‘Here’s how I’m feeling about it. I feel like I’m on my own between one place and the other.’ A happy button isn’t going to tell you that.”
Tara McCarville suggests that it shouldn’t be expected to. “It’s not about quality of care,” she says. McCarville, a health care partner at PricewaterhouseCoopers with a background in health technology and acute care, was part of the team that brought HappyOrNot into Trillium Health Partners a couple of years ago. Trillium used multiple terminals and moved them around to different parts of the hospital so as to get “a window on very specific service issues,” says McCarville. “‘How clean is the washroom? Did you enjoy your cafeteria meal? Were you able to find parking within 10 minutes?’ Anything that is not about the clinical intervention that you’re getting, and the clinical outcome that you get as a result of that.” She adds: “A HappyOrNot terminal is valuable but it does not answer all the questions. It’s really important that it’s part of a much broader strategy to engage patients and get feedback.”
Both Jonker and Greenberg agree that real-time results are an important complement to retrospective data. “If you want to have quality improvement in relation to patient experience you need something nearer to real-time,” says Greenberg. Many hospitals try to gather this type of information through pulse surveys, which ask patients six to 10 questions just before or within 24 hours of discharge. These are typically very focused, says Jonker, perhaps reflecting specific areas a team has identified as needing improvement. “If you’ve got rapid feedback on that effort, you’re in a position to say, ‘OK, let’s tweak it, let’s change it,’ [or] ‘It’s working, how might we continue to improve it?’”
There are other interesting initiatives aimed at gauging patient experience more rapidly. Yelp, for example, has made a significant foray into the U.S. hospital sector, where studies mining both qualitative and quantitative data have found not only results similar to those of patient surveys but also additional aspects of patient experience, including the compassion of staff and the care of family members. In the U.K. and Australia, websites called Care Opinion and Patient Opinion offer patients the opportunity to post feedback about a specific health care provider. Beside each post is an icon system that indicates whether the care provider has responded, whether other patients have had similar experiences, and whether change has resulted from the post.
Can happiness buttons make a difference?
On a concrete level, yes. At Trillium, says McCarville, HappyOrNot results helped clinical managers make decisions about how to staff certain shifts. At MGH, where the terminal was periodically placed at staff events, minor tweaks were made—“we shortened the length of one meeting and reduced the frequency of another,” says Kotanidis.
McCarville saw the terminals, in part, as a communication tool, “to demonstrate to our community that we actually want their feedback,” she says. She also noticed that HappyOrNot helped raise staff awareness of the service element of patients’ experience. And this is just what Neil Stuart, board member at Patients Canada, likes about the buttons. “It creates a kind of accountability,” he says. “It has a bit of a dynamic where people understand that, when this stuff is being measured, this happiness kind of thing, they maybe have a bit more obligation to really pay attention to taking care of patients, and not just from a pure clinical, get-the-prescription-right, get-the-diagnosis-right point of view, but to treat them well.”
For Julie Drury, chair of the Ontario Minister’s Patient and Family Advisory Council, it’s not enough that her feedback makes a difference—she wants feedback in return. “Put it on the front page of your website,” she says. “‘We had a happiness button in the cafeteria and we heard the food sucked and we actually did something about it.’” (MGH has posted HappyOrNot results of staff engagement, both on the machine itself and on the hospital’s intranet.) Whether it’s a quick click or a half-hour spent filling out a survey, patients and families want to know that the message was received and will result in meaningful change. “You’re giving your information up, you’re sharing your knowledge, your expertise, your experience,” says Drury. “Because that’s what they’re looking for. They want to improve themselves based on your knowledge.”
As a research method, I think it may not have the reliability of a self-declared statement on a questionnaire. I would say this idea is too expensive as well. Thirdly, I’m not sure you need to worry about what is making patients happy, unless you are measuring a new program. Fourth, the buttons don’t tell you what is making a patient ‘unhappy’. Fifth, happiness is not what you should try to measure. Sixth, people don’t want to press unhappy buttons if they have to attend other services; they would worry about the effect of doing so on other patients and on staff, which would affect the reliability of your survey. You should not be targeting individual care workers or doctors; you should be aiming for health care ‘accountability’. I’ve not seen any satisfaction questionnaires concerning quality of care in my local hospital, nor in any clinics in Vancouver. But, with internet technology, it should be extremely cost-efficient and easy to measure a patient’s satisfaction… and the impact of medical interventions of any kind. These surveys can be made brief and easy as well as informative and reliable. There are some very serious quality-of-care problems in Vancouver, and probably elsewhere. They should be researched professionally and appropriately.
I think there’s another area in which the use of a simple metric like HappyOrNot can make a big difference: patient satisfaction with a particular treatment. Too often we are encouraged in our practices to use measurements and scales that have been developed for research purposes, not for everyday clinical use. (Think about questionnaires for depression, anxiety, pain, headaches, prostatism — you name it.) In many cases, having a more limited menu of options from which to choose can give us a better sense of whether a treatment is helping — or harming. I like using tools like the Clinical Global Impression scale for this very reason. A HappyOrNot terminal would make this even easier!