They are the words most CIOs will never hear said aloud: “The I&IT Organization assists my business unit in understanding the rationale and costs associated with technology.” Or perhaps this one: “Given my understanding of what my business unit pays for I&IT services, I think my business unit is getting good value for the money.” Or how about a simple, “I receive useful advice from the I&IT organization.”
If any of those sound somewhat familiar, though, it’s probably because they are the kind of statements with which employees in an organization are asked to either strongly agree, agree, neither agree nor disagree, disagree or strongly disagree. They are a staple of technology user satisfaction surveys, and even if a growing number of CIOs are balking at describing their coworkers as “users” or “customers” in favour of “partners” or something else, the feedback has never been more important.
At the recent CIO Association of Canada Peer Forum in Toronto, Ken Kawall highlighted all of these statements and others as examples of the kind of surveys he uses within the Ontario government. As the assistant deputy minister and CIO for the Labour and Transportation I&IT Cluster, Kawall was talking about the age-old issue of aligning IT with the business. He highlighted a number of external projects that have benefitted citizens, such as the Ontario identity card for non-drivers, and Select Ontario, an online tool for helping those investing in the province to choose an appropriate site. All of those achievements, however, require close collaboration with other parts of the organization, and ensuring their needs are met is critical to overall success, he said. The satisfaction surveys are a fundamental way of measuring that.
“If you don’t ask, they don’t tell. If they don’t tell, you don’t know where to go next,” he told the CIOCAN Peer Forum. When an audience member asked how he managed to get a decent and honest response rate, Kawall laughed. “You have to meet our users. They’re not shy at all.”
According to recently released market research, however, IT department user satisfaction surveys are far from standardized across the industry, and unevenly deployed at best. While some degree of variation from one organization to another is to be expected, a failure to look holistically at user satisfaction on an ongoing basis may be one of the things that hampers CIOs’ effectiveness as they seek to be involved in more strategic business decisions.
“They may be conducting surveys on an ad hoc or occasional basis, when a perceived need arises such as before a system upgrade, or they may only be surveying users about certain services or systems,” according to User Satisfaction Surveying Adoption and Best Practices, a study published by Irvine, Calif.-based Computer Economics in February. “For example, some help desk systems send out a survey request to determine user satisfaction each time a help desk incident is closed. Such surveys are useful for determining the satisfaction with help desk services, but they will not reveal dissatisfaction with other IT services.”
Frank Scavo, president of Computer Economics, said some of the variation in practices can be attributed to size. Among small and medium-sized organizations, for example, 60 per cent said they conduct some form of user satisfaction surveys. Among large enterprises, the number rose to 83 per cent. If they’re simply surveying when a perceived need arises, such as a system upgrade, they may only be getting at pieces of the puzzle rather than running a more comprehensive evaluation program.
“With some organizations you almost get the feeling they’re doing the surveys as a defensive measure, to show to their corporate organization that everything is fine,” said Scavo, who also writes The Enterprise System Spectator blog. “You can tell from the survey design that they’re looking for feedback, but not too much.”
Developing user satisfaction research may require creativity, but there are a few ideas and approaches to bear in mind first.
Find a frequency that fits
There is a difference between occasional and ad-hoc. Much as they need to determine the risk appetite or comfort level around certain kinds of IT investments, CIOs must figure out how often they can seek feedback before users tune out.
For Kawall, who has also worked at TransCanada Pipelines and the London Free Press, among other organizations, user satisfaction surveying needs to be as regular as possible. “You’ve got to keep pushing, you’ve got to keep in their face about this,” he said. “Eventually they will tell you the truth.”
Scavo, however, suggests no one frequency will fit all. “It is possible to over-survey users,” he said. “Every time they interact could be tiresome. What you’re focusing on is improving and designing the best approach to soliciting that information that doesn’t over-tax their ability to give feedback.”
Steve Orchard, senior vice-president of operations and customer support at Internap in Atlanta, said his firm often conducts user satisfaction surveys as part of its work as a service provider to large organizations. He said the company has found a balance between occasional “transactional” surveys based on the support users receive, plus a more wide-ranging satisfaction survey on an annual basis.
“We’re all barraged by lots of e-mail every day. You want a decent percentage response rate,” he said. “People expect good service, so we do not respond all the time with a survey because then it becomes an expectation. The more in-depth annual surveys allow us to dig deeper.”
Go beyond ‘All of the above’
Most IT user satisfaction surveys are based on multiple-choice answers, but that may not give you the entire picture. Kawall warned that anecdotal information can become disconnected from what the research shows if you don’t include at least a few areas where employees can offer feedback in their own words.
“The comments have been much more helpful than the actual numeric scores,” he said. “You may have to do it a few times. And you may have to go back to the clients themselves.”
Orchard agreed. “You need to get some free form text fields and reactions,” he said, adding that Internap complements its electronic surveys with customer advisory-type sessions with a set of questions that go across six, seven or eight topics or areas of concern.
Scavo also suggested generic surveys won’t work as well as those that look specifically at the priorities of the organization and the technology needs associated with it. “A user survey program needs to be designed in light of the organization’s IT strategy,” he said. “A lot will depend on whether (the CIO) is supposed to be a leader in innovation versus if they’re thought of as running more of a support program, or if the IT department is viewed more as a utility.”
Think through your data
Most IT departments are getting better at figuring out what information is important to their various lines of business, but it can be a different story when they’re examining the results of user satisfaction scores. Orchard said Internap designs its surveys so results can be compared year over year, and those changes can be really telling.
“A survey can give you a general direction of how you’re doing, but unless you use that data in a comparative form, there is no true value in it,” he said.
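The comparative reading Orchard describes can be sketched in a few lines. This is a hypothetical illustration, not any tool mentioned in the article; the question names and Likert responses are invented for the example.

```python
# Hypothetical sketch: reading Likert-scale survey results in comparative
# form by computing year-over-year deltas per question. All data invented.

def average_score(responses):
    """Mean of 1-5 Likert responses (5 = strongly agree)."""
    return sum(responses) / len(responses)

def year_over_year(prev, curr):
    """Per-question change in average score between two survey years."""
    return {q: round(average_score(curr[q]) - average_score(prev[q]), 2)
            for q in curr if q in prev}

survey_2023 = {"useful advice": [3, 4, 3, 2, 4], "good value": [2, 3, 3, 2, 3]}
survey_2024 = {"useful advice": [4, 4, 3, 4, 4], "good value": [2, 2, 3, 2, 3]}

print(year_over_year(survey_2023, survey_2024))
# → {'useful advice': 0.6, 'good value': -0.2}
```

A standalone 3.8 average says little; a +0.6 swing on one question and a slight decline on another is the kind of direction Orchard is pointing at.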
Scavo suggests that CIOs may also need to assess how those being asked for feedback understand the questions, particularly in an era of bring-your-own-device (BYOD). “The view from the IT organization and the view from the line of business may be completely different views,” he said. “A typical example is sales. The technology they use to get their job done likely does not all come from the IT organization. Their experience of IT is a combination of what they receive from centrally developed and managed systems, complemented or supplemented by things they deploy themselves. In some cases, they may not fully know the difference between those two.”
In his surveys, Kawall uses two questions that, while broad, help begin to get at these areas. These include, “The I&IT organization understands my business challenges,” and “In the end, my business unit got the right outcomes that meet its business requirements.”
Add sentiment to your scoring
E-mail surveys will likely remain the mainstay for obtaining user feedback for the foreseeable future, but Scavo pointed to a growing selection of workforce analytics tools that perform “sentiment analysis,” capturing and mining the social interactions of a user population around all sorts of matters to find out how people are feeling about things. These tools could be focused on what people say on Twitter or Facebook, or, more likely, comments on internal social tools. That can be a sensitive area, however. “You can see that there can be pushback among employees in their use of internal social business applications if they know that their interactions are being monitored and analyzed. It can feel very Orwellian,” he said. “They might be more careful about what they say.”
There’s another way to measure sentiment, however, that doesn’t directly involve technology. Kawall recommended “embedding” IT staff, or co-locating them with other parts of the business. That can be a lot more effective than seeking feedback in hallway conversations, he said.
“You have to know what you’re selling,” he said. “For the clients, it really comes down to, ‘What can I do tomorrow that I can’t do today?’ and, ‘What can I do better?’ If you talk to them in those terms, they get it.”
And they may even give you some positive feedback.