Surveys, interviews, and questionnaires are important tools for gathering feedback from stakeholders during organizational change. An organizational change plan always involves some form of stakeholder analysis or research, so asking good questions is critical. As a result, I am often asked for advice on designing surveys, so I thought I would list some of the things I normally say. The advice applies broadly to surveys, interviews, and questionnaires, and I will use "survey" to refer to all of them.
1. Treat the survey as a design project, not just a listing of desired questions.
Ninety percent of survey design efforts begin with an individual or a team brainstorming a list of questions to ask. A better approach is to hold off on the questionnaire itself and first create a design document that covers the following:
- What is the overall purpose or guiding question of this survey? Identify the overall question or questions you want answered, because they provide a purpose and a gauge for deciding whether the specific questions are valuable.
- How many people will be surveyed? Is it a sample or the total population? What is the expected response rate? It is often difficult to get a response from the whole population, so survey designers want to use the results from a sample to represent it. However, care must be taken when drawing such conclusions. For small populations, such as 300 or 400 people, a high percentage of the target population must respond for the results to be statistically meaningful. Be particularly careful when subdividing results by demographics: sometimes the overall sample size is sufficient but the size of the subgroups is not. For example, an employee satisfaction survey for the whole company might get a 40-percent response rate, but in one location only 10 percent of the staff might have responded. Interpreting those responses as representative of everyone at that location is not statistically valid. There are a number of sample size calculators on the web that can be used to determine the appropriate sample size.
- How will the survey be administered? For example, do you plan to use an email template, web-based survey tool, or interview? Each method introduces special considerations.
- Live interviews enable the questioner to provide clarifications for the respondent or to ask follow-up questions. However, live interviews are time consuming and require skilled interviewers to get accurate responses. Complex dynamics also come into play between the interviewer and interviewee, and the interviewer can subconsciously bias the results, especially when the interviewee does not feel safe responding openly.
- Emailed templates provide little anonymity and often require tedious compilation of the responses.
- Survey tools, such as SurveyMonkey, can provide anonymity, relatively easy response compilation, and the ability to scale the survey to many respondents.
- See Designing and Using Organizational Surveys: A Seven-Step Process (Jossey-Bass Business and Management Series).
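The sample-size arithmetic described above can be sketched in a few lines of Python. This is a rough illustration, not a substitute for the calculators mentioned; it uses the standard margin-of-error formula for a proportion with a finite population correction, assuming 95% confidence (z = 1.96) and the worst-case proportion p = 0.5. The function names are mine.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Margin of error for a proportion at ~95% confidence,
    with finite population correction for a population of size N."""
    se = z * math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))
    return se * fpc

def required_sample_size(N, e=0.05, p=0.5, z=1.96):
    """Cochran's formula for required responses, adjusted for a
    finite population of size N and desired margin of error e."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / N))

# A population of 400 needs roughly half of its members to respond
# for a +/-5% margin of error -- small populations demand high rates.
print(required_sample_size(400))

# A subgroup of 50 people with only 5 responses has a very wide margin:
print(round(margin_of_error(5, 50), 2))
```

Note how quickly the margin of error balloons for small subgroups; this is the statistical reason behind the warning about subdividing results by demographics.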
2. Set the context for the respondent.
The questions should be prefaced with an introduction and context. Sometimes it is even advisable to contact respondents ahead of time and get their agreement to complete the survey. Even when the group is selected at random, there should be an introductory message that covers items such as the following:
- What will be done with the survey results, and how will they be used?
- Will confidentiality be protected?
- Will respondents receive a copy of the final results or report?
You may also wish to offer an incentive for completing the survey, or to include a note from a senior manager urging recipients to take it.
3. Review your questions.
Once you have determined the overall research questions or purposes for the survey, you develop specific questions designed to gather data. The problem is that surveys are notorious for producing ambiguous results: the questions get answered, but the responses mean different things to different respondents. Bradburn, Sudman, and Wansink wrote a book called Asking Questions: The Definitive Guide to Questionnaire Design — For Market Research, Political Polls, and Social and Health Questionnaires that covers these issues thoroughly.
Additionally, here are some key problems to look for:
- Asking two questions at once. For example, “Do you get all the direction you need to do your job from your manager on a timely basis?” This question could be interpreted as being about either the sufficiency of the direction or its timeliness.
- Questions with list items. Avoid questions like, “Are newspapers like The Wall Street Journal, The New York Times, and The Washington Post interesting to read?” The respondent might find The Wall Street Journal interesting and The Post not interesting, making the answer problematic.
- Questions with no answer. When respondents select from a list, make sure the possible selections cover everyone. For example, for the question “Which department are you located in?” the list might include Engineering, Marketing, Manufacturing, and Information Technology. But if the survey happens to reach a few people who work in Finance, what will they answer? Make sure there is an “Other” or “Not Applicable” selection for those cases. This type of question can be tricky, because frequent renaming and reorganizing in large companies leaves many employees unsure of their department's name at a higher level. Lists should always be tested to ensure the categories are sufficient and understandable to the respondents (see below).
- Novelty questions. When teams brainstorm questions, they often generate questions that would be fun to know the answers to but serve no real purpose. Check that every question produces useful data. One way to check is to create a mock-up of the intended report and see how the imagined results fit into it.
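For small electronic surveys, the mock-report check above can even be dry-run in code: generate fake responses and tabulate them the way the real report will, before the survey is ever fielded. Here is a minimal sketch in Python; the question, options, and function names are hypothetical, and the department list deliberately includes the catch-all option recommended above.

```python
import random
from collections import Counter

# Hypothetical department question, with an explicit catch-all option
OPTIONS = ["Engineering", "Marketing", "Manufacturing",
           "Information Technology", "Other / Not Applicable"]

def simulate_responses(n, seed=42):
    """Generate fake answers to dry-run the report before fielding."""
    rng = random.Random(seed)
    return [rng.choice(OPTIONS) for _ in range(n)]

def mock_report(responses):
    """Tabulate the way the real report will: counts and percentages."""
    counts = Counter(responses)
    total = len(responses)
    return {opt: (counts[opt], round(100 * counts[opt] / total, 1))
            for opt in OPTIONS}

for option, (count, pct) in mock_report(simulate_responses(120)).items():
    print(f"{option:25s} {count:4d}  {pct:5.1f}%")
```

If a question's simulated results have no obvious place in the mock report, that is a strong hint the question is a novelty and can be cut.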
Another informative book on how people respond to survey questions is The Psychology of Survey Response. Tourangeau, Rips, and Rasinski discuss the biases and subconscious reactions people have when answering questions. They also discuss the ways that faulty memory plays into how questions are answered and how questions should be constructed to get the most reliable results.
4. Good surveys are the result of a good survey design process.
One of the most commonly overlooked steps in designing a good survey is testing the instrument (the questions). As suggested in item #3, respondents can interpret seemingly simple questions in different ways, leading to poor results. Moreover, because of the biases and assumptions we all make, it is very difficult for a single person or small team to come up with a robust set of questions that will get reliable answers from the target group. To mitigate this, testing is essential.
Here is a recommended test procedure:
- Choose five or ten people to do a cognitive walk-through of the survey. Do this live, even if you are using an electronic method to administer the final survey. Read each question to the test respondent and ask them to describe their thought process as they answer: What came to mind? What did they recall? Which words in the question did they struggle to understand? What other interpretations occurred to them? You will probably find this highly enlightening. See Cognitive Interviewing: A Tool for Improving Questionnaire Design for a comprehensive discussion of the test process.
- Once the question set is stable, send it to a test group exactly as planned for the actual survey. Then follow up with members of the group after the survey is complete to get their reactions. Did they receive the survey smoothly? Could they access it? Were the instructions self-explanatory?
- Finally, take the results from the test group and compile them into the sample report. Check whether all the responses were useful and how difficult they were to compile, then adjust the survey.
The procedure above is just an example. Depending on the size and scope of the survey, adjust the test process to include more or fewer participants and, perhaps, additional iterations.
5. Organizational change surveys are usually not science.
Survey design is a deep and complex topic, and survey designers should take classes and read books on it before designing and conducting a survey. With surveys, it is easy to get answers but difficult to get accurate answers.
That said, organizations use many kinds of internal and external surveys. Some are used for casual inquiries with small groups, while others involve a hundred questions sent to thousands of customers. However, most surveys used for organizational change planning are not meant to advance science, so the way they are used and the rigor required are a little different.
Organizational change surveys do not stand alone. They are just one more viewpoint, used in conjunction with ongoing stakeholder engagement, follow-up interviews, process consultation, and other methods to build a picture of where stakeholders stand with respect to the planned changes. Often the best use of surveys is as a sort of “heat map”: a rough indicator of where further inquiry is needed. That is, having obtained an indication of where stakeholders stand, you can design further investigation, such as interviews or focus groups, to more fully understand the responses.
This post has offered a few suggestions to help you design better surveys and be more effective at organizational change. Gathering feedback with questionnaires and surveys happens at many points in an organizational change plan, from stakeholder analysis through readiness assessment to evaluating success indicators. Developing the skill to design surveys well, and to ask questions effectively, is thus essential to an effective change process; without it, success is unlikely.