23rd DMI: Academic Design Management Conference
Design Management as a Strategic Asset
Toronto, Canada, 3-4 August, 2022
Copyright © 2022. Copyright in each paper on this conference proceedings is the property of the author(s). Permission is granted to
reproduce copies of these works for purposes relevant to the above conference, provided that the author(s), source and copyright
notice are included on each copy. For other uses, including extended quotation, please contact the author(s).
Dynamic User Research in Large-scale Service Design Projects
Bhaskarjyoti Das a, Sylvan Lobo b, Ravi Mahamuni a
a TCS Research, Tata Consultancy Services, Pune, India; b TCS Research, Tata Consultancy Services, Thane, India
Considering the diversity and plurality of stakeholders involved in the overall service ecosystem,
there is a need to engage in large-scale user research. Critically, service design projects involve a variety
of stakeholders who are geographically and culturally diverse. Service designers and researchers intend to engage
in qualitative research to gain in-depth knowledge of user problems and contexts. However, qualitative research
methods such as user interviews are often time and resource intensive and cumulatively add to delay in
completion of the service design lifecycle. To meet delivery deadlines, businesses often prefer to engage in
quantitative methods for user research; thus, service designers and practitioners are often left with insufficient
information from which to build insights. Various challenges, such as researcher biases and a lack of domain
knowledge and insight into the on-ground state, impact the construction of effective quantitative studies.
In this paper, we explore approaches to make quantitative research more effective and efficient with the
benefits of qualitative research, to strengthen large-scale user research capabilities in organizations. The
dynamic user research approach discussed in this paper helps researchers sequentially mix qualitative and
quantitative approaches to conduct the study by feeding findings into subsequent cycles. This approach intends
to overcome the limitations researchers face in collecting user data for large sample sizes.
Keywords: Service Design; User Research; Research Methods; Large-scale Survey
Introduction
Business organizations widely use service design to differentiate their products and services and to help
employees create experiential customer engagements. Service design research, in practice, is primarily
dependent on the quality of user data it collects for problem definition, insight building, and further designing
of interventions. To facilitate high-quality user research in service design, it is important to accurately identify
the user personas and understand the problems and contexts.
Modern-day businesses have a large scope of services and cater to diverse user groups even beyond
geographical borders. Besides customers, there is an involvement of a large variety of stakeholders in the
overall service ecosystem, such as business stakeholders and partners, suppliers and vendors, or their own
employees and human touchpoints that interface with customers, among others. The service lifecycle also
consists of many granular phases and may differ across regions or user groups. An example of such
services is government services that need to cater to all strata of society in the nation as well as across
borders with high diversity in culture, language, literacy, and so on. Another example of a large-scale service
could be online shopping or delivery apps, where even if the target user group is narrow, there is large scale
involved in the number of stakeholders who need to interface smoothly.
Given this complexity and diversity, such large-scale services need to provide adequate attention to user
research and feedback considering service users and all stakeholders. Service designers and researchers
intend to engage in qualitative research to obtain in-depth knowledge of user problems and contexts.
However, qualitative research methods such as user interviews are often time and resource intensive for
both researchers and participants and cumulatively add to delay in completion of the service design lifecycle.
To meet delivery deadlines, businesses often prefer to engage in quantitative methods for user research;
thus, service designers and practitioners are often left with less information from which to build insights.
There is a need to strike the right balance of investment and effort spent in both approaches. It requires a
blend of quantitative and qualitative techniques: a mixed-method approach that gains the best of both worlds
and finds a middle ground where the quality and rigor of the user research are maintained.
There is a need for organizations to conduct user research at a larger scale, both qualitative and
quantitative, in a more effective manner given the challenges. This is important to gain relevant insights
and new perspectives for improving services and to provide the right experiences and services to their
customers in a highly competitive environment. Designing relevant questions for large-scale quantitative
user research is often difficult given that researchers may not be fully aware of the problem domain.
Researchers must rely on their skill, knowledge, and secondary research to arrive at the right questions and
create their answer options. Even in such scenarios, researchers might fail to create effective closed-ended
questions given limited knowledge of the actual problem context. Further, when qualitative research approaches are
used, there is a lot of quantitative data embedded in the qualitative responses, which is difficult to directly
extract, analyse, and validate. Hence, user research approaches often fail to accurately reflect the plurality
of participant scenarios and cannot vividly capture the diversity in user intents, concerns, and constraints.
In this paper, we explore an approach to strengthen large-scale user research capabilities in
organizations using a gradual and dynamic blend of qualitative and quantitative surveys. Leveraging mixed-
method user research, our approach helps designers and researchers embark on large-scale user research
with an initial set of qualitative questions. This approach intends to overcome the limitations designers
and researchers face in collecting user data for large sample sizes by enabling a systemic usage of qualitative and
quantitative research methods, where they derive relevant information to formulate the quantitative survey
through multiple cycles. The approach helps designers and researchers validate their qualitative insights
through quantitative response data. The approach also naturally integrates triangulation, saturation,
confirmation of findings, and enhancement of the study through the multiple cycles of administration.
We believe that our approach, reflections, and recommendations would be crucial to designers and
researchers in business organizations by providing them a streamlined approach to embark on large-scale
user research activities, reduce the overall time and effort without losing the required rigor, and collect
quality user data in large quantities.
Motivation and Literature
Opportunities and Challenges of Large-scale User Research in Service Design
Given that modern products and services cater to a large number and diversity of users, locally and
globally, there is a need for service design approaches to also keep pace to facilitate such large-scale
design. Various challenges exist for scaling service design, such as regional differences in law, culture,
trends, and diversity of users and workforce. The situation also presents collaboration challenges in large
teams dispersed across borders, compounded by the limited training of practitioners (especially in
Information Technology (IT) organizations, where employees with a background in technology or business
consultancy are reskilled in-house in design thinking or service design skills). Managing teams, maintaining
standardization of user studies at such scale, and the business pressure for short timelines are big
challenges.
As service design caters to a large diversity of user groups as well as other stakeholders in the service
ecosystem, consideration of end-to-end holistic journeys in large-scale user research in service design is
particularly challenging. Businesses are increasingly adopting human-centric design and service design
approaches and giving due attention to the significance of grounding their decisions and interventions in
user research, quantitative as well as qualitative. Although qualitative studies are considered time and
resource intensive, they have significant benefits as they help researchers gain new perspectives and a rich
understanding of people, their behaviours, attitudes, perceptions, socio-cultural aspects, and so on. They
help challenge preconceptions (Bansal, Smith, & Vaara, 2018) and are useful when tackling wicked
problems. Conducting qualitative research at a larger scale can enhance the benefits for organizations,
enabling richer insights and a deeper view into the diversity and heterogeneity (Padgett, 2016) of user
groups. It makes way for reliability, rigor and quality (Camfield, 2019) and better acceptance of findings,
especially for large-scale funded projects. Larger studies can help arrive at more granular themes during
analysis, strengthening the trustworthiness of findings (Williams & Morrow, 2009).
Conducting qualitative research is generally considered resource intensive (Baker & Edwards, 2012); the
time, effort, and financial cost involved can be overwhelming, more so at a large scale. There are
questions raised about the validity, reliability, and reproducibility (Mays & Pope, 1995; Baxter & Eyles,
1997) of qualitative research. It is difficult to recruit participants and manage and distribute the effort in large
teams, so that there is standardization (Bluhm, Harman, Lee, & Mitchell, 2011) in the approach. Analysis of
copious amounts of data can also be challenging and needs computational support. Mahamuni et al. (2021)
highlight the importance of large-scale qualitative user research for service design for scale and propose
approaches so that large-scale qualitative approaches are facilitated in an effective manner. They highlight
the need for processes and technical support at various stages. Unlike quantitative surveys, where
respondents can be remote and respond to an online form, such an approach is difficult with qualitative
methods. It is difficult to conduct large-scale qualitative user research in the form of written qualitative
surveys, as respondents are more likely to provide rich responses in face-to-face interactions or at least
over virtual communication channels such as video conferencing. However, written qualitative surveys are
a useful approach to gain rich insight if constructed carefully (Farrell, 2016).
While recognizing the benefits of large-scale qualitative surveys and acknowledging the costs incurred,
the question that arises is: can there be a middle ground between quantitative approaches, which are easier
to conduct at large scale, and qualitative approaches? Can studies be conducted with a quantitative
approach but also benefit from a qualitative approach without the qualitative work proceeding at a
significantly large scale? How can a similar level of quality and rigor be obtained through a quantitative
survey approach?
Use of quantitative methods for user research
While designers are comfortable with qualitative approaches, quantitative methods can be formidable
for various reasons. Considering that designers usually undertake design projects shifting across a variety
of business domains (for example, banking, sales, employee services, healthcare, and so on), they are
generalists without adequate in-depth domain knowledge. It can be challenging to construct pertinent
quantitative survey instruments with the right wording and terminology in the early phases of exploration,
as doing so requires close dependence on client stakeholders to get them right.
What is particularly challenging in close-ended surveys is arriving at the right scale items (that is,
measurement items or response options that respondents need to select from) and their wording,
especially with nominal (categorical) variables. There is no established process to arrive at the list of right
options to include, especially for non-standard options. For example, while it may be relatively easy to arrive
at options for questions such as “What is your highest level of education?”, it can be challenging to arrive at
suitable close-ended options for questions such as “What roadblocks did you face in the onboarding
service?” without sufficient insight into the on-ground situation to begin with. Designers are not yet aware of
the emerging context, and the wording of the option items may not be relatable to respondents.
Additionally, it is difficult to find the right balance between granularity and abstractness of the option
items, as granularity increases the number of options to choose from. This leads to designers including only
the supposed key items followed by an “Other” choice, which is an open text field where the respondent
may write their response. Writing close-ended options for such questions can introduce the designer’s
biases in choosing what to leave out. Respondents add variations in the “Other” field, and with the lack of
standardization, quantifying this becomes a problem. In very large-scale surveys, there is a need for
options that can capture the variations in a quantifiable way rather than depending on respondents writing
openly in the “Other” field. Similarly, open-ended qualitative questions in written surveys also tend to draw
poorly articulated and less useful responses. Moreover, beyond the pilot survey, making updates to the
survey after rolling it out to the masses is not possible.
This brings out the need for support for designers to develop effective quantitative surveys, especially
when they are working with domains unfamiliar to them. The traditional approach is to conduct a thorough
literature survey, work closely with stakeholders or experts, design the quantitative survey instruments, and
test them in the field as a pilot survey to validate and refine instruments. Another approach that can facilitate
richer quantitative surveys is using qualitative approaches initially to gain better insight into the user group
and domain. It is a good means to be more sensitive to the context of the respondents (Coyle & Williams,
2001) and can facilitate relevant wording and questions.
Combining qualitative and quantitative methods for large-scale research
Kelle (2006) argues that problems in quantitative research methods may arise from a lack of knowledge
about local culture, patterns, and social phenomena, which hampers theory building and hypothesis
construction in statistical analysis. Thus, quantitative research processes require culture-specific knowledge
for development of theoretical concepts and measurement instruments in the study. Mixed methods
combining quantitative and qualitative studies are gaining support. The use of mixed methods is researched,
sometimes debated, but it is developing in terms of its theory (since both approaches are rooted in different
philosophical schools of thought), rationale for use, and systematic approaches for combining quantitative
and qualitative methods in practice. The benefits of using mixed methods in practice are evident. Greene et
al. (1989) provide broad categories of rationales for mixed methods. Bryman (2006) further highlights more
granular categories such as for gaining the best of both worlds (middle ground), a sense of completeness,
triangulation for validation and credibility, to explain and illustrate findings, generate insights from multiple
perspectives, form hypotheses, or even develop survey instruments. A mixed-method approach is usually
expected of large funded projects, where it is considered a standard, even fashionable, approach that gives
a sense of quality and rigor.
Morse (1991) proposed two distinguishable forms of combining qualitative and quantitative methods: (1)
sequential, where a qualitative study identifies core issues and aims to develop theoretical concepts and
hypotheses, which are further examined in a subsequent quantitative study; and (2) simultaneous or parallel,
where the qualitative data helps understand statistical associations and identify variables in the quantitative
study (Kelle, 2006). Even though each form of combining the methods has advantages and disadvantages
(Morse, 2003), it is evident that the combination of qualitative and quantitative methods helps researchers
understand the different actions of users under investigation and also generate appropriate qualitative
explanations for those actions. This information is valuable for creating the right theories, validating
hypotheses, and also capturing additional information from the study participants’ responses.
Bryman’s research (2006) highlights the rationale of combining qualitative and quantitative approaches
to develop research instruments. While it seems less widespread, it is an effective method for arriving at a
suitable quantitative instrument with questions and response items that are grounded in the context of the
respondents. Onwuegbuzie et al. (2010) have built on this approach and have developed an Instrument
Development and Construct Validation (IDCV) process based on using mixed research techniques,
containing 10 phases. It is an approach that begins with defining items for the constructs of interest using
in-depth secondary literature and discussion with local experts, followed by cycles of validation and
refinement using field testing of the developed quantitative instrument, qualitative personal interviews, and
focus group discussions for feedback. It uses a crossover design using qualitative and quantitative analysis
techniques to validate and refine the instruments. Rowan and Wulff (2007) also describe a process for using
qualitative research to generate measurement items in a quantitative instrument. A grounded theory
approach guided the analysis of the qualitative data to develop a list of items using emerging themes
and codes. Three researchers individually arrived at lists that were then combined to arrive at scale items.
These instances in literature demonstrate the usefulness of qualitative techniques to inform the design
of quantitative surveys. There is still a need to facilitate and attain a middle ground of qualitative and
quantitative techniques where the rigor and advantages of large-scale qualitative study can be retained with
the reduction of costs and effort that a quantitative study can offer. Our research attempts to address this
point, while also aiding designers and researchers with a guideline and technology support to iterate initially
through qualitative approaches and dynamically move towards quantitative techniques (qualitative and
quantitative techniques may work in parallel at this stage), until saturation is achieved. The qualitative
approach provides insights to construct relevant questions and answer options while also retaining the
nature of large-scale qualitative research, enabling variation and responses from cross-cultural and
heterogeneous target populations.
Approach
Kelle (2006) argues that qualitative research can help validate the quantitative measurement operations
and instruments, and that best practice in a mixed-method approach involves alternating the steps of
qualitative and quantitative research in the overall study. Figure 1 illustrates the traditional approach taken
by researchers for conducting large-scale surveys as a part of user research in their service design journey.
This study approach involves the sequential form (Morse, 2003) of combining qualitative and quantitative
methods. The researcher conducts structured or unstructured interviews with a small sample selected from
the target population. This information provides qualitative insights to the researcher about the participant
context to create survey instruments for a quantitative study of the entire sample.
Figure 1 Traditional mixed-method approach for large-scale surveys
To leverage the benefits of mixed-method approaches for large-scale user research and align with
business timelines, we need a systemic process that is traceable, easily analysable, and replicable by
researchers in the design project. We approach users with the sequential form and introduce a dynamic
mode of combining mixed methods to conduct large-scale user research. Figure 2 illustrates our proposed
approach.
Figure 2 Dynamic mixed-method approach for large-scale user research
The approach helps researchers embark on a large-scale survey starting with just a few qualitative
questions and with knowledge of the size of the population under study. The survey is rolled out in
batches, and for each batch, responses to the qualitative questions are collected. The batch-wise participant
responses are then analysed using a grounded theory approach to identify data saturation, if any.
Subsequently, this information is used to identify the qualitative questions that need to be converted into
quantitative questions. Analysing the quality of the responses for each question, we identify the measurable
answer options for the new quantitative question. The new questions and their answer options are compiled
to form the new survey instrument for the subsequent batches. Here are the basic steps in survey
progression as illustrated by our approach in Figure 2:
(1) The researcher first decides the sample size for the large-scale survey and prepares a survey
instrument with initial qualitative survey questions.
(2) The overall desired sample size is then segregated into smaller batches of survey participants. The
survey instrument with the qualitative questions is rolled out to the participants in each batch and
qualitative textual responses are collected batch-wise.
(3) The collected qualitative responses from the batch are then analysed for data saturation. Answers
of all participants to each question are checked for repetition of information with similar meaning. Using
this information, the questions for which significant information saturation is observed among the
participant responses are selected for conversion to quantitative questions.
(4) If there is no significant saturation in the response data, the qualitative question remains unchanged
and is not yet selected for conversion to a quantitative one.
(5) In the process to convert any selected qualitative question to a quantitative one, the qualitative
textual responses to the given question are analysed. Using qualitative techniques such as coding
and affinity mapping, words and word groups with similar meaning are grouped together to form
answer clusters. Using these answer clusters, we create answer options that survey participants can
respond to. Similarly, all selected qualitative questions are gradually converted to quantitative ones
and their answer options are identified from the related responses in the survey batch.
(6) Finally, a new survey instrument is formulated for roll-out to the next batch of respondents. The
instrument includes two types of questions: (a) new quantitative questions and their answer options,
and (b) unconverted qualitative questions from the previous survey instrument.
(7) The new survey instrument is then rolled out to the next batch of survey participants. After the survey
responses from the subsequent batch are collected, previously unconverted qualitative questions
are checked again for data saturation and possible conversion to quantitative ones.
(8) This process is repeated until enough responses are collected to represent the population or desired
saturation is achieved.
The approach is exemplified further in the next section.
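To make the batch-wise progression concrete, below is a minimal, self-contained Python sketch of the loop described in steps 1-8. It is an illustrative approximation rather than the authors' implementation: the Question structure, the naive exact-match clustering, and the fixed saturation threshold are assumptions standing in for the grounded-theory coding and researcher judgment described above.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    options: list = field(default_factory=list)  # empty list => still qualitative


def cluster_answers(answers):
    """Group free-text answers with a naive exact-match heuristic; in the
    approach above, this grouping is done via coding and affinity mapping."""
    return Counter(a.strip().lower() for a in answers if a.strip())


def run_survey(questions, responses_by_batch, saturation_threshold=3):
    """responses_by_batch holds one {question_text: [free-text answers]} dict
    per batch (step 2). Saturated questions are converted in place (steps 3-7)."""
    for batch in responses_by_batch:                      # step 8: repeat batch-wise
        for q in questions:
            if q.options:                                 # already quantitative
                continue
            clusters = cluster_answers(batch.get(q.text, []))
            frequent = [ans for ans, n in clusters.items()
                        if n >= saturation_threshold]     # step 3: saturation check
            if frequent:                                  # step 5: convert question
                q.options = frequent + ["Others"]         # step 6: new instrument
            # step 4: no saturation => the question stays qualitative
    return questions
```

In practice, `cluster_answers` would be replaced by manual or computer-assisted qualitative coding, and the threshold is a variable the researcher sets, as the case study below discusses.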
Case Study
We argue that our dynamic mixed-method survey approach will help design researchers to strengthen
their user research process, reduce the time and effort required, and get quality user data while also maintaining
the rigor of the process. Our approach helps in systematically organizing instrument creation, user data
collection, and data analysis for large-scale survey requirements. To test our proposed approach, we
implemented it (Figure 2) in one of our internal service design projects to develop employee wellbeing
services in an IT organization. In this project, we wanted to study how employees in the organization engage
in activities for their wellbeing, and their needs, intents, and concerns, such that we could design desirable
and impactful services. However, given the sheer number and diversity of target users in the large
multi-national organization, the proposition of qualitative methods of design inquiry was not preferable.
Although qualitative methods are known to provide a vivid understanding of user scenarios, they are time
and resource intensive, and were not feasible in this case as we intended to capture the experiences of
users from diverse geographical and cultural backgrounds.
To engage in our proposed mixed-method approach, first, we prepared a survey instrument with
qualitative questions. For example, “What activities do you perform to stay fit?”. The survey was then rolled
out to survey participants in batches. After the qualitative responses were collected from the survey
participants of each batch, responses were analysed to check for data saturation in the participant
responses for the given question. If we observed multiple instances of similar responses for the given
question, it was marked for conversion to a quantitative question. The threshold number of instances
required for question conversion is a variable for the researcher to set. Response clusters were then
formed from the responses based on their closeness of meaning. These clusters were then converted into
answer options for the new quantitative question. For example, we created the response cluster “Running
events” from the responses where we observed repeating words or word groups such as “running”, “run
marathon”, “go for sprints”, and so on. Similarly, other clusters were created, such as “Gym and exercise”
(gym, workout, …) and “Healthy food” (diet, food, …). The response clusters were then selected as answer
options for the new quantitative question. Additionally, we added an open answer option where we
asked respondents for qualitative textual input if they did not select any of the existing answer options.
Subsequently, we formed a new quantitative question and its answer options as follows:
Quantitative Question 1: “What activities do you perform to stay fit?”
Answer Option 1: Running events
Answer Option 2: Gym and exercise
Answer Option 3: Healthy food
Answer Option 4: Others ___________
The survey instrument was then updated with the newly converted quantitative question for roll-out to the
next batches. Using the same approach, we also analysed the qualitative responses in Answer Option 4 to
look for similar data saturation and further amended the answer options of the quantitative question. For
example, a new answer option was formulated from the response cluster “Dance”, derived from repeating
similar responses such as “do Zumba”, “aerobics”, and so on. Thus, the survey instrument was updated for
the next batches as:
Quantitative Question 1: “What activities do you perform to stay fit?”
Answer Option 1: Running events
Answer Option 2: Gym and exercise
Answer Option 3: Healthy food
Answer Option 4: Dance
Answer Option 5: Others ___________
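As a companion to the example above, the following sketch illustrates how responses in the “Others” field might be folded into a new answer option such as “Dance”. The seed-keyword mapping is a hypothetical stand-in for the manual coding and affinity mapping we describe; it is only one simple way such clustering could be approximated.

```python
# Hypothetical cluster seeds a researcher might define after reading the
# batch's "Others" responses; stands in for coding and affinity mapping.
CLUSTER_SEEDS = {"Dance": ["zumba", "aerobics", "dance"]}


def amend_options(options, other_responses, threshold=2):
    """Append a new answer option when enough 'Others' responses cluster together."""
    for label, keywords in CLUSTER_SEEDS.items():
        hits = [r for r in other_responses
                if any(k in r.lower() for k in keywords)]
        if len(hits) >= threshold and label not in options:
            options.insert(len(options) - 1, label)  # keep the open "Others" field last
    return options


options = ["Running events", "Gym and exercise", "Healthy food", "Others"]
print(amend_options(options, ["do Zumba", "aerobics", "I dance daily"]))
# -> ['Running events', 'Gym and exercise', 'Healthy food', 'Dance', 'Others']
```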
The survey was conducted batch-wise until the researcher(s) collected responses from the desired
sample size. Using this approach, we could leverage qualitative responses from survey participants to
create quantitative questions whose answer options were developed from field responses rather than from
researcher assumptions. Through batch-wise cycles of collection and analysis of response data, the
approach enabled intermediate survey analysis and informed the researcher while the survey was
ongoing rather than only at the end. This information was critical to researchers during large-scale surveys,
as corrective actions could be taken by analysing responses from a smaller sample before the survey
instrument reached all the survey participants.
We chose Quantitative Question 1 for the purpose of simple illustration. However, we would like to note
that this approach would be more effective for qualitatively richer questions where responses are more
difficult to predict through literature, for example, a question like “What are the hindrances you face in
leading a healthy lifestyle while working from home?”, where cultural differences and diversity may play a
bigger role. This approach is very useful when there are a lot of choices available to the participant and
designers are not able to decide whether to go with granular options or with logical groups. Depending on
the initial qualitative inputs, designers can decide the options for the quantitative survey.
Discussion and Implications
Implications on researcher biases
Researchers often encounter biases when planning and designing research instruments (Smith & Noble,
2014; Mehra, 2002; Collier & Mahoney, 1996). For example, in many instances during user research
planning, the ambiguity effect takes precedence: we tend to avoid exploring areas that we do not have
much previous experience with. We tend to inquire more about what we believe are the probable problem
areas, often biased by existing expertise and previous personal experiences. Also, in several instances,
researcher bias is evident when there is a strong affinity with the participants of the study or if the researcher
is a member of the population under study.
Instrumentation is as critical in qualitative as in quantitative research. It takes training and practice to
write open-ended questions, the hallmark of a qualitative interview, and then to keep from
transforming them into closed-ended questions, especially with a resistant subject, when actually
conducting the interview. (Sofaer, 2002, p. 334)
One of the advantages of conducting a pilot study is that it might give advance warning about where
the main research project could fail, where research protocols may not be followed, or whether
proposed methods or instruments are inappropriate or too complicated. (van Teijlingen & Hundley,
2001)
In user research, the mixed-method approach of understanding the problem area with an initial qualitative
study of a small sample before engaging in large-scale quantitative surveys has shown promising
results. However, we observe some instances of anchoring bias, where the researcher is influenced by the
initial findings of the one-time qualitative study, which is crucial for the design of the quantitative questions
for the subsequent large-scale survey. Our approach enables the researcher to systemically merge the
qualitative and quantitative survey techniques and mitigate some of the intermittent biases that we observe
during the survey journey. In our approach, the qualitative survey is not a one-time event with a fixed number
of participants only but is a continuous process that dynamically stops based on the quality of the responses
rather than the number of responses. This helps the researcher mitigate anchoring bias from the initially
collected information and analyse responses continually to evolve contextual answer options for new
quantitative questions.
Implications on service design lifecycle
User research is a critical phase in the service design lifecycle. Research activities in organizations are
often constrained by a lack of time and resources for conducting large-scale qualitative studies. Also, since
a large-scale study involves many researchers, a consistent process of data collection and analysis across
researchers or research groups is critical. Next, having traceability of information across the
qualitative and quantitative studies is important for the quality of analysis and insight creation. Our approach
enables researchers to maintain the rigour of both qualitative and quantitative studies and helps them
reduce the manual effort required to engage with a large number of participants, bringing in
the best of both worlds. By enabling researchers to quicken the user research process, the approach
also reduces the overall time required in the service design lifecycle.
Implications on user research protocol
In our current approach, we enable researchers to create measurable answer options by analysing
qualitative responses from participants to the given question. We see several other opportunities for
enhancing quantitative survey creation. In our approach to convert a qualitative question to a quantitative
one, we analyse participant responses to one given question to create possible answer options that the
other participants can respond to. However, analysing the qualitative responses to all questions in the
survey instrument can enable researchers to generate richer overall insights. We feel that our approach can
be further enriched to generate fresh quantitative questions and not just convert existing qualitative
questions to quantitative ones. This will enable researchers to embark on a research journey with only a
few initial questions, while the dynamic survey approach creates relevant quantitative questions with a
limited intermediate role for the researcher. It can also help in arriving at new qualitative questions for the
next cycles.
Conclusion and future work
As service design projects in IT businesses are usually bound by strict timelines and include a vast number
of prospective users, design researchers are often inclined to conduct surveys for user research rather than
use qualitative methods such as structured or semi-structured interviews. With the advent of digital
communication technologies for remote research and growing use of Artificial Intelligence (AI) for large data
analysis, design research sees opportunities to leverage a mix and match of different research methods to
reach a large number of respondents and enrich their overall user research. In this paper, we see that
combining qualitative and quantitative approaches to develop research instruments is effective to create
quantitative questions and answer options that are grounded in the actual context of the participants. It can
also help in arriving at new pertinent questions. Through the approach discussed in this paper, we cater to
the need to support design researchers to develop effective surveys when the problem domain is unfamiliar
to them. We observe that making the choice between abstractness and granularity of closed-ended
quantitative questions or their answer options is challenging for researchers.
A service design project caters to a wide variety of users from varied backgrounds. Our approach
supports researchers in creating effective survey instruments and conducting large-scale surveys by
leveraging a sequential process of combining qualitative and quantitative approaches to collect responses
from participants. However, we also realize there are several opportunities and challenges to further develop
the approach. (1) In our approach, the researcher can embark on the survey journey without in-depth
secondary research about the problem area, but some level of sensitization to the problem area is essential
to create the initial qualitative questions. The quality of the initial questions is critical in the current version of
the approach. Further research should reduce the impact of initial question quality on the final survey
(new questions and answer options) generated from the approach. (2) The success in rightly identifying
questions that are to be converted to quantitative ones depends on the quality of textual responses
from participants. The textual responses collected from participants often have incomplete sentences or
mistyped words. Hence, this raw response data requires cleaning before further analysis. In addition,
respondents often are not able to articulate their answers properly and hence are unable to communicate
and clarify their actual opinion or intent. This severely impacts the response analysis, identification of data
saturation, and answer option generation for the new quantitative questions. (3) Our approach orchestrates
saturation, and answer option generation for the new quantitative questions. (3) Our approach orchestrates
a process of batch-wise roll-out and analysis of respondent data. However, in order to form complete
insights, researcher(s) need to combine the analysis of all batches and be conscious of the difference in
survey instruments that participants have responded to. Since the conversion from qualitative question to
quantitative question is organic, different questions have different proportions of qualitative and quantitative
responses. There is an opportunity to support this analysis by enabling technologically intelligent ways of
analysing the combination of qualitative and quantitative responses to a given question.
We believe that our approach has benefits for design researchers working in the organizational setting
of IT service companies. The approach eases design and roll-out of surveys, especially for scenarios when
the research sample size is large. Through this paper, we have explored only the tip of the iceberg of
possibilities. We have introduced the benefits of mixed method approaches and explored ways in which
digital technology can help large-scale user research in the context of service design projects. With
increasing adoption of Agile practices and a gradual transition to digital mediums for user research, our
approach revisits the opportunities for service design research to technologically assist the creation of survey
instruments and quicken engagements for large-scale user data collection and analysis. However, given
the challenges and complexities discussed, the approach warrants further research and discussion.
References
Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for
Research Methods.
Bansal, P., Smith, W. K., & Vaara, E. (2018). New ways of seeing through qualitative research. Academy
of Management Journal, 61(4), 1189-1195.
Baxter, J., & Eyles, J. (1997). Evaluating qualitative research in social geography: Establishing ‘rigour’ in
interview analysis. Transactions of the Institute of British Geographers, 22(4), 505-525.
Bluhm, D., Harman, W., Lee, T., & Mitchell, T. (2011). Qualitative research in management: A decade of
progress. Journal of Management Studies, 48(8), 1866-1891.
Bryman, A. (2006). Integrating quantitative and qualitative research: how is it done? Qualitative Research,
6(1), 97-113.
Camfield, L. (2019). Rigor and ethics in the world of big-team qualitative data: Experiences from research
in international development. American Behavioral Scientist, 63(5), 604-621.
Collier, D., & Mahoney, J. (1996). Insights and pitfalls: Selection bias in qualitative research. World
Politics, 49(1), 56-91.
Coyle, J., & Williams, B. (2001). An exploration of the epistemological intricacies of using qualitative data
to develop a quantitative measure of user views of health care. Journal of Advanced Nursing,
31(5), 1235-1243.
Farrell, S. (2016, September 25). 28 Tips for Creating Great Qualitative Surveys. Retrieved from Nielsen
Norman Group: https://www.nngroup.com/articles/qualitative-surveys/
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-
method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255-274.
Kelle, U. (2006). Combining qualitative and quantitative methods in research practice: Purposes and
advantages. Qualitative Research in Psychology, 3(4), 293-311.
Mahamuni, R., Ganwani, S., Lobo, S., Das, B., Verma, R., Hirom, U., . . . Jadhav, M. (2021). Service
Design for Scale - Overcoming Challenges in Large-scale Qualitative User Research.
International Conference on Research into Design ICoRD 2021 (pp. 91-103). Mumbai: Springer.
Mays, N., & Pope, C. (1995). Rigour and qualitative research. BMJ, 311, 109-112.
Mehra, B. (2002). Bias in Qualitative Research: Voices from an Online Classroom. The Qualitative Report,
7(1), 1-19. Retrieved from https://doi.org/10.46743/2160-3715/2002.1986
Morse, J. M. (1991). Approaches to qualitative-quantitative methodological triangulation. Nursing
Research, 40(2), 120-123.
Morse, J. M. (2003). Principles of mixed methods and multimethod research design. In A. Tashakkori &
C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 189-208). Sage
Publications.
Onwuegbuzie, A. J., Bustamante, R. M., & Nelson, J. A. (2010). Mixed Research as a Tool for Developing
Quantitative Instruments. Journal of Mixed Methods Research, 4(1), 56-78.
Padgett, D. (2016). Qualitative methods in social work research (Vol. 36). Sage Publications.
Rowan, N., & Wulff, D. (2007). Using Qualitative Methods to Inform Scale Development. The Qualitative
Report, 12(3), 450-466.
Smith, J., & Noble, H. (2014). Bias in research. Evidence-Based Nursing, 17(4), 100-101.
Sofaer, S. (2002). Qualitative research methods. International Journal for Quality in Health Care, 14(4),
329-336.
van Teijlingen, E. R., & Hundley, V. (2001). The importance of pilot studies. Social Research Update.
Retrieved from http://sru.soc.surrey.ac.uk/SRU35.html
Williams, E., & Morrow, S. (2009). Achieving trustworthiness in qualitative research: A pan-paradigmatic
perspective. Psychotherapy Research, 19(4-5), 576-582.