The Electronic Journal of Business Research Methods provides perspectives on topics relevant to research in the field of business and management.
For general enquiries, email administrator@ejbrm.com.

Journal Article

Getting results from online surveys — Reflections on a personal journey  pp45-52

Rachel A. McCalla

© Jul 2003 Volume 2 Issue 1, Editor: Frank Bannister, pp1 - 77


Abstract

In this paper we present a personal reflection on the implementation of an online survey, highlighting the trade-offs between its potential benefits and pitfalls. It is argued that casting your net too wide in a bid to maximise responses can ultimately result in a low response rate. We evaluate the experience of completing an online survey from the perspective of both the researcher and the respondent, to outline the dynamics of the completion and submission process. Finally, in a bid to assist those interested, a review of some of the available online survey tools is presented.
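
As a rough, hypothetical illustration of the trade-off described in this abstract (all figures below are invented), a broad, untargeted distribution list can yield more raw responses yet a far lower response rate than a smaller, carefully targeted sample:

# Hypothetical figures only: response rate = completed responses / invitations sent.
def response_rate(completed: int, invited: int) -> float:
    return completed / invited

broad = response_rate(completed=60, invited=2000)    # wide, untargeted list
targeted = response_rate(completed=45, invited=150)  # smaller, relevant sample

print(f"broad:    {broad:.1%}")     # 3.0%
print(f"targeted: {targeted:.1%}")  # 30.0%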

 

Keywords: Questionnaires, Surveys, Research Design, Research Process, Design and Implementation, Stakeholder Perspectives

 


Journal Article

Mixed‑mode Surveys Using Mail and Web Questionnaires  pp69-80

Matthias Meckel, David Walters, Philip Baugh

© Sep 2005 Volume 3 Issue 1, Editor: Arthur Money, pp1 - 92


Abstract

With the Internet now part of everyday life, mixed-mode surveys that use the World Wide Web can be seen as an opportunity to increase the response rate of surveys. This paper looks at the advantages and disadvantages of different response modes suitable for mixed-mode surveys. Based on this consideration, the paper addresses the influence of a mixed-mode approach using conventional mail and web-based questionnaires on coverage, sampling, measurement, and non-response error, as well as pitfalls and opportunities specific to this type of survey. It discusses mixed-mode and web-specific issues such as technological aspects, security, convenience and similarity. The paper proposes that, provided certain requirements are fulfilled, this approach has no apparent adverse consequences for survey error. The use of mixed-mode questionnaires is exemplified by a survey conducted with 1000 SMEs in the North West of England in 2002. After analysing the findings, the paper concludes by looking at the relation between the mode of response and the answers provided by the respondents, and by summarising the insights gained from the study.
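
One way to examine the relation between response mode and the answers provided, as the abstract's closing sentence describes, is to cross-tabulate answers by mode and apply a chi-square test of independence. The sketch below uses invented counts and plain Python; it illustrates the general technique only, not the analysis reported in the paper.

# Hypothetical sketch: compare answer distributions across response modes
# (mail vs. web) with a Pearson chi-square test on a contingency table.
from collections import Counter

# One (mode, answer) pair per returned questionnaire; counts invented for illustration.
responses = [
    ("mail", "agree"), ("mail", "neutral"), ("mail", "agree"),
    ("web", "disagree"), ("web", "agree"), ("web", "neutral"),
]

modes = sorted({m for m, _ in responses})
answers = sorted({a for _, a in responses})
counts = Counter(responses)

# Observed contingency table: rows = modes, columns = answer categories
observed = [[counts[(m, a)] for a in answers] for m in modes]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# A large statistic (relative to the chi-square distribution with the given
# degrees of freedom) suggests the answer distribution differs by mode.
# With a sample this tiny the test is unreliable; real data are needed.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

degrees_of_freedom = (len(modes) - 1) * (len(answers) - 1)
print(f"chi-square = {chi_square:.2f}, df = {degrees_of_freedom}")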

 

Keywords: Questionnaires, Surveys, Mixed Mode, World Wide Web, Quantitative, Research

 


Journal Article

Individualised Rating‑Scale Procedure: A Means of Reducing Response Style Contamination in Survey Data?  pp9-20

Elisa Chami-Castaldi, Nina Reynolds, James Wallace

© Sep 2008 Volume 6 Issue 1, ECRM 2008, Editor: Ann Brown, pp1 - 94


Abstract

Response style bias has been shown to seriously contaminate the substantive results drawn from survey data, particularly in surveys conducted with cross-cultural samples. As a consequence, the identification of response formats that suffer least from response style bias has been called for. Previous studies show that respondents' personal characteristics, such as age, education level and culture, are connected with response style manifestation. Differences in the way respondents interpret and utilise researcher-defined fixed rating-scales (e.g. Likert formats) pose a problem for survey researchers. Techniques currently used to remove response bias from survey data are inadequate, as they cannot accurately determine the level of contamination present and frequently blur true score variance. Inappropriate rating-scales can affect the level of response style bias manifested, insofar as they may not represent respondents' cognitions. Rating-scale lengths that are too long present respondents with some response categories that are not 'meaningful', whereas rating-scales that are too short force respondents to compress their cognitive rating-scales into the number of response categories provided; this can cause extreme response style (ERS) contamination. We are therefore unable to guard against two respondents who share the same cognitive position on a continuum reporting their stance using different numbers on the rating-scale provided. This is especially problematic where a standard fixed rating-scale is used in cross-cultural surveys. This paper details the development of the Individualised Rating-Scale Procedure (IRSP), a means of eliciting a respondent's 'ideal' rating-scale length, and as such 'designing out' response bias, for use as the measurement instrument in a survey. Whilst the fundamental ideas for self-anchoring rating-scales have been posited in the literature, the IRSP was developed through a series of qualitative interviews with participants. Finally, we discuss how the IRSP's reliability and validity can be quantitatively assessed and compared with those of typical fixed researcher-defined rating-scales, such as the Likert format.
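
A minimal sketch of the scale-length problem the abstract describes (a hypothetical illustration, not the authors' IRSP): two respondents holding roughly the same cognitive position report different raw numbers when their rating-scales differ in length, and mapping each raw score onto a common [0, 1] range makes the numbers comparable, even though it cannot recover detail lost on a scale that was too short for a respondent's cognitive categories.

# Hypothetical illustration only; this is not the authors' IRSP.
def normalise(raw_score: int, scale_length: int) -> float:
    """Map a response on a 1..scale_length rating-scale onto [0, 1]."""
    if scale_length < 2 or not 1 <= raw_score <= scale_length:
        raise ValueError("raw_score must lie on a rating-scale of length >= 2")
    return (raw_score - 1) / (scale_length - 1)

# Respondent A answers on a 5-point scale, respondent B on an 11-point scale.
# Both intend roughly the same position, about three quarters of the way up,
# yet their raw scores (4 and 8) are not directly comparable.
print(normalise(4, 5))   # 0.75
print(normalise(8, 11))  # 0.70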

 

Keywords: scale length, response styles, response bias, survey research, cross-cultural surveys, individualised rating-scale procedure

 
