Individual Submission Summary

Online Survey Bias and Multilingual and Multiethnic Populations: A Systematic Literature Review

Mon, March 11, 4:45 to 6:15pm, Hyatt Regency Miami, Floor: Third Level, Johnson 1

Proposal

With support from an Innovative Assignment Design Award through our university, this study was undertaken as a collaborative student project to investigate potential sources of bias in online surveys or questionnaires reported in empirical research (2011-2021) and to categorize and synthesize the types of bias. We implemented a collaborative, problem-based assessment task in a graduate course on survey research design in which students (a) engaged in the systematic literature review process through the aggregation, interpretation, explanation, and integration of existing research on bias in online survey research design; (b) identified potential bias in survey construction for multiethnic and multilingual population samples; (c) applied content analysis to the articles reviewed to compile recommendations for culturally responsive design and to identify research gaps; and (d) critically examined their self-constructed surveys for the final critical task project to ensure they were inclusive and valid for the intended respondents.
Given the critical role of psychometrically sound instruments in educational research, methodological investigation and discussion are needed to address bias in online surveys, including those administered to multiethnic and multilingual populations. For further advancement of web-based surveys, researchers need to understand the many factors that influence participation in a survey, including target population characteristics such as age, gender, socioeconomic status (SES), and race or ethnicity (Jang & Vorderstrasse, 2019). This review was guided by the research question, “What types of bias in online instruments can affect multiethnic and multilingual population samples?”
2. Literature Review
Problems common to many surveys, such as nonresponse bias, sample representativeness, and interview bias, have been well-documented in social, health, and educational research contexts (Fosnacht et al., 2017; Groves & Lyberg, 2010; Pasick et al., 2016). However, the recent proliferation of online surveys designed for an increasingly diverse population merits research on the implications of ethnocultural and language differences in measurement. Multiethnic and multilingual survey question construction requires attention to diversity in cultural meanings, questionable cultural appropriateness of survey items, non-equivalent connotations of translated items, and demographic characteristics of respondents. Failure to address these potential elements of bias can severely limit the quality of survey data (Spector et al., 2015) and result in the misattribution of findings and theories derived from dominant-culture samples as universal traits (Nielsen et al., 2017).
Web-based surveys are generally associated with low response rates (Couper, 2000; Umbach, 2004), which can result in bias in survey estimates. Web-based survey estimates may also be subject to coverage bias because they exclude population segments without Internet access, and broadband access is not evenly distributed across rural, suburban, and urban regions. Internet users tend to have higher education and income levels than the general population (Pew Research Center, 2021). Additionally, people need a certain level of proficiency to complete online tasks such as surveys (Hargittai & Hsieh, 2012), and research indicates that Internet proficiency and skills vary across groups (Mossberger et al., 2007; Stern et al., 2009), creating a digital use divide. Therefore, the design and implementation of an online survey involve devoting attention to visual organization, interactivity, and accessibility, while also considering mixed-mode methods that can increase participation by members of racially and linguistically diverse populations.

3. Methodology
Systematic reviews should be reliable and replicable, which requires the review process to be documented and made transparent (Xiao & Watson, 2019). We searched for peer-reviewed articles published in English within the last ten years (2011-2021). We focused on the databases EbscoHost, JSTOR, and ProQuest Central and the web search engine Google Scholar, applying different combinations of terms that generated multiple search strings, such as “bias AND web survey AND minority and ethnic groups,” “internet survey AND bias,” and “questionnaires AND surveys OR bias.” After reading abstracts and removing duplicate results, we applied exclusion criteria to remove articles that were systematic reviews or meta-analyses, did not include online surveys, had no focus on multilingual or multiethnic populations, or did not discuss bias. This process reduced our initial search set of 2,049 articles to 31 peer-reviewed articles.

To extract the relevant data from each article meeting the inclusion criteria, we developed a template in Google Sheets to guide the extraction process. The template covered different aspects of the studies, including keywords, research questions or purpose of study, sample, methodology and design, data analysis, theoretical framework, key findings, recommendations for practice, future research, and limitations. The researchers coded a few articles independently and then compared their results to ensure the reliability of the instrument and the validity of the process. Once the template was complete, we rechecked the coding for consistency and created a concept map to represent the information visually. Thematic analysis was used to synthesize common ideas and to identify gaps and potential areas for further research.
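The screening workflow described above (deduplicating results across databases, then applying the exclusion criteria) can be expressed as a simple filter. The sketch below is purely illustrative: the record fields and the `screen` function are hypothetical stand-ins, not the spreadsheet-based process the authors actually used.

```python
# Hypothetical sketch of the article screening step: deduplicate search
# results by normalized title, then drop records that meet any of the
# exclusion criteria named in the review protocol.

def screen(records):
    """records: list of dicts with illustrative keys 'title', 'type',
    'online_survey', 'diverse_sample', and 'discusses_bias'."""
    seen, kept = set(), []
    for r in records:
        key = r["title"].strip().lower()
        if key in seen:
            continue  # duplicate returned by another database
        seen.add(key)
        # Exclusion criteria: reviews/meta-analyses, no online survey,
        # no multiethnic/multilingual focus, or no discussion of bias.
        if r["type"] in ("systematic review", "meta-analysis"):
            continue
        if not (r["online_survey"] and r["diverse_sample"] and r["discusses_bias"]):
            continue
        kept.append(r)
    return kept
```

Making the criteria explicit and executable in this way is one route to the transparency and replicability that Xiao and Watson (2019) call for.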
4. Results
We identified a lack of research on bias in educational research specifically; most of the articles in our review set came from the medical/healthcare field. Regarding reliability, web surveys may reduce social desirability bias (SDB) and eliminate the interviewer effect (Kemmelmeier, 2016; Kinnunen et al., 2015; White et al., 2018). Mixed positive and negative response anchors, despite being a recommended practice in measurement, were a source of bias specific to multilingual surveys (Kim et al., 2015). Online instruments could mask significant disparities among racial minority groups due to inaccurate translation and a lack of cross-cultural equivalence (Briseno et al., 2019; Sanchez & Vargas, 2016; Sukyong et al., 2014). For example, the word “fair” in a health survey does not translate equivalently into Korean or Spanish, so participants’ self-ratings can be misleading.
5. Discussion and Implications
Our goals for this study were to aggregate, interpret, and explain the existing research on bias in online survey research design and to make recommendations for instrument design that is culturally responsive and inclusive of multiethnic and multilingual populations. We have concluded that developing systematic guidelines for accommodating cultural and contextual backgrounds in instrument design and administration is warranted. We encourage practitioners to use culturally congruent and contextualized practices for recruitment and to solicit comments from the intended population before fielding a survey (e.g., by conducting a pilot study with cognitive interviews).
