Pick any team sport and you’d agree that choosing the right players is one of the most crucial things you can do to increase your chances of winning. The same holds true for recruiting participants for a user research study. The right participants can provide a goldmine of information and perspectives, leading to rich, meaningful insights.
Even the most well-designed research will fall flat on its face with the wrong participants, adding none of the strategic direction it was capable of providing.
Copious amounts of meticulous planning go into ensuring that we recruit the right participants for any user research project. Thoroughly understanding the problem statement, followed by crafting research goals, leads to identifying the target user segment(s).
Once the user segment is zeroed in on, a recruitment screener is prepared to elicit the user’s habits, behaviours, product usage, and the context of that usage, alongside demographic information.
Multiple levels of screening and baseline interviews often boost our confidence in the participants we’ve recruited, but despite all these extensive measures and verification, bad apples can still upset the applecart. A participant could be a bad apple for multiple reasons:
participating in multiple research studies (making these interviews their business)
giving incorrect information in the screening process (e.g., employer info, product usage, or demographic info) just to be eligible for the study
the recruitment agency playing foul to meet time and recruitment targets
participant bias such as social desirability bias, acquiescence bias
Any of these reasons could land you in an unproductive discussion with the participant. It takes a discerning eye to identify a bad actor. Years of experience in studying human behaviour; varied interviewing techniques; practice in discerning the undercurrents and spotting the disconnect between the screener and what’s being said, or the misalignment between body language and words; and a sharp memory (to remember the face of a repeat ‘offender’) are skills that come in handy in those situations.
Being able to discern isn’t enough, though. It takes integrity and courage to say “we’ve got a bad recruit and we need to drop them”, because that means adding more days to wrap up the user interactions and delaying the start of synthesis, working more to replace the participant, and informing the recruitment partner (digital or offline) that you will not consider that participant for the study.
For a user research study in Chicago, USA, we travelled from India to Chicago and then another two hours to get to a participant’s home. Ten minutes into the conversation, I knew that we weren’t talking to the right participant. His professional role and product usage did not match our target needs, and we would not have got usable information and stories about his habits and usage scenarios. The Principal Product Manager from the client’s side was with me during the home visit. He was mighty upset that I had made up my mind to terminate the session, as all the travel time and effort put in were going down the drain. He also felt a sense of social obligation to use the participant’s time because we had blocked it for the conversation. But the decision came down to a choice between adding noise and junk to the research stories, or moving on to find and interact with a more suitable participant.
In behavioural research, every interaction counts; it's not about numbers but about understanding human behaviours in detail.
I shared my reasoning and maintained my stand. After attending a few more in-depth interviews (IDIs), he understood and appreciated why I did what I did.
In a recent remote moderated IDI session, the UCC team was interacting with participants from the USA. It had been a long day: we started work around 11 am IST, the IDIs started at 6:30 pm, and it was almost 10 pm when we interviewed a young professional working at a startup. Fifteen minutes in, we sensed the disconnect. Her responses to the recruitment screener and the stories and scenarios shared during the conversation did not match, and her nonchalance showed in both her responses and her body language. This set the alarm bells ringing; it was the moment to put our years of experience to work, detect a fraudulent candidate, and be honest enough to call it out. The moderator paused the interview and quickly consulted the team for consensus not to continue. The participant was politely told why we could not go on with the interview, only to throw a fit and leave the conversation unceremoniously.
Not all research interactions result in great stories, and that’s totally accepted; in those cases we wrap up the session early, thank the participant for their time, and provide the promised incentive. But with our zero tolerance for fraudulent and dishonest participants, we are never shy about calling it out and providing the exact reason for terminating the discussion. Integrity is at the core of all our functions, our approach, and our decision-making.
What followed was a rather interesting turn of events. I chose the word ‘interesting’ because someone from the participant’s network made multiple attempts to tarnish UCC’s reputation and alleged misbehaviour by the moderator. No, the reason for writing this article is not to come out clean, but to tell you what happened next.
We informed our client that the participant was a bad apple (the whole shebang!). The client was completely on board and backed us. THAT is what trust is. They stood by us as a trusted partner, and we by them.
Bad actors or apples appear, albeit rarely; taking on the mantle of high standards, maintaining integrity in what we do, and doing what needs to be done is how we win and keep the faith of businesses in user research. Let’s continue to focus on recruiting right, and appearances of bad apples will shrink.