Nadin Kastirke’s scientific contributions


Publications (7)


Validierung der Survey Attitude Scale zur Messung generalisierter Umfrageeinstellungen: Ergebnisse eines Survey-Experiments bei Hochschulabsolvent*innen
  • Chapter

September 2022 · 28 Reads · [...] · Nadin Kastirke

Generalized attitudes toward surveys are an important explanatory factor for survey participation and for respondents' willingness to cooperate during an interview. In a survey experiment we therefore examine, first, to what extent the short instrument Survey Attitude Scale (de Leeuw et al., 2019) can be used with highly qualified respondents and, second, whether psychometric weaknesses of the original instrument can be reduced by linguistic adjustments and/or by adding supplementary items. The experiment was embedded in the third wave of the DZHW Graduate Panel 2009, fielded in 2019. The results show that the three-dimensional structure is reproducible for this respondent group. Linguistic adjustments nevertheless improve the measurement model, whereas adding supplementary items does not improve the internal validity or reliability of the instrument. We conclude by outlining the potential of the Survey Attitude Scale for quantitative empirical higher education research.
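
The reliability check described in this abstract boils down to a standard Cronbach's alpha computation per dimension. The following is a minimal sketch in Python, not the authors' actual analysis; the item column names (enj1 … val3) and the synthetic data are hypothetical placeholders.

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of sum score)."""
        items = items.dropna()
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical item columns for two SAS dimensions (synthetic stand-in data).
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(1, 8, size=(500, 6)),
                      columns=["enj1", "enj2", "enj3", "val1", "val2", "val3"])

    for dim, cols in {"enjoyment": ["enj1", "enj2", "enj3"],
                      "value": ["val1", "val2", "val3"]}.items():
        print(dim, round(cronbach_alpha(df[cols]), 3))

Comparing the resulting alphas with and without the supplementary items is one way to operationalize the "no improvement in reliability" finding reported above.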


Survey Attitude Scale (SAS) Revised: A Randomized Controlled Trial among Higher Education Graduates in Germany
  • Poster
  • File available

January 2020 · 150 Reads · 2 Citations

Relevance & Research Question: There is broad empirical evidence that general attitudes towards surveys predict willingness to participate in (online) surveys (de Leeuw et al. 2017; Jungermann/Stocké 2017; Stocké 2006). The nine-item short form of the Survey Attitude Scale (SAS) proposed by de Leeuw et al. (2010, 2019) differentiates between three dimensions: (i) survey enjoyment, (ii) survey value, and (iii) survey burden. Previous analyses have shown that especially the two dimensions survey value and survey burden do not perform satisfactorily with respect to internal consistency and factor loadings across different samples (Fiedler et al. 2019). Following de Leeuw et al. 2019, we therefore investigate whether the SAS can be further improved by reformulating single items.

Methods & Data: We implemented the proposed German version of the SAS, adopted from the GESIS Online Panel (Struminskaya et al. 2015), in an online survey of German higher education graduates conducted from October to December 2019 (n = 1,378). In addition, we realised a survey experiment with a split-half design that aimed to improve the SAS by varying the wording of four items and adding one supplemental item per dimension (Stocké 2014; Rogelberg et al. 2001; Stocké/Langfeldt 2003). To compare both scales, we use confirmatory factor analysis (CFA) and measures of internal consistency within both groups.

Results: Comparing the CFA results, our empirical findings indicate that the latent structure of the SAS is reproducible in the experimental as well as in the control group. Factor loadings and reliability scores adequately support the theoretical structure. Moreover, we find evidence that changes in the wording of the items (harmonizing terminology and avoiding mention of the survey mode) can partially improve the internal validity of the scale.

Added Value: Overall, the standardized short SAS is a promising instrument for survey researchers. By validating the proposed instrument intensively in an experimental setting, we contribute to the existing literature. Since de Leeuw et al. 2019 also report shortcomings of the scale, we show possibilities for further improvement.

Keywords: Survey Burden, Survey Enjoyment, Survey Value, Graduate Survey, 2019, Survey Experiment, CFA
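
A split-half CFA comparison of this kind can be sketched with any SEM package. Below is a minimal illustration in Python, assuming the semopy package; the item names, group indicator, and synthetic data are hypothetical stand-ins, not the study's instrument or data.

    import numpy as np
    import pandas as pd
    import semopy

    # Synthetic stand-in data: three correlated item triplets (hypothetical names).
    rng = np.random.default_rng(1)
    n = 600
    factors = rng.normal(size=(n, 3))
    cols = {}
    for j, dim in enumerate(["enj", "val", "bur"]):
        for i in (1, 2, 3):
            cols[f"{dim}{i}"] = factors[:, j] + rng.normal(scale=0.8, size=n)
    df = pd.DataFrame(cols)
    df["group"] = rng.integers(0, 2, size=n)  # split-half indicator

    # Three-factor CFA specification in lavaan-style syntax.
    desc = """
    enjoyment =~ enj1 + enj2 + enj3
    value     =~ val1 + val2 + val3
    burden    =~ bur1 + bur2 + bur3
    """

    # Fit the same measurement model in each half and compare fit indices.
    for name, grp in df.groupby("group"):
        model = semopy.Model(desc)
        model.fit(grp.drop(columns="group"))
        stats = semopy.calc_stats(model)
        print(name, stats[["CFI", "RMSEA"]])

Similar fit indices across the two halves would support the claim that the latent structure is reproducible in both the experimental and the control group.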


Survey Attitude Scale (SAS): Are Measurements Comparable Among Different Samples of Students from German Higher Education Institutions?

March 2019 · 230 Reads · 1 Citation

Among other factors, general attitudes towards surveys are part of respondents' motivation for survey participation. There is empirical evidence that these attitudes predict participants' willingness to respond cooperatively during (online) surveys (de Leeuw et al. 2017; Jungermann/Stocké 2017; Stocké 2006). The Survey Attitude Scale (SAS) proposed by de Leeuw et al. (2010) differentiates between three dimensions: (i) survey enjoyment, (ii) survey value, and (iii) survey burden. Following de Leeuw et al. 2017, we investigate whether SAS measurements can be compared across different online survey samples of students from German Higher Education Institutions (HEI).
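
Comparability of measurements across samples is usually probed with multi-group CFA under increasingly strict equality constraints (configural, metric, scalar invariance). As a first, minimal check one can fit the same model per sample and compare the loadings; the sketch below assumes semopy plus a desc specification as in the previous sketch and a hypothetical samples dict, and is not the authors' actual analysis.

    import semopy

    # Assumes `desc` (the CFA specification) and `samples` (a dict mapping
    # sample name -> pandas DataFrame of item responses) as in the sketch above.
    for name, data in samples.items():
        model = semopy.Model(desc)
        model.fit(data)
        est = model.inspect()             # parameter estimates as a DataFrame
        loadings = est[est["op"] == "~"]  # measurement part (indicator ~ factor)
        print(name)
        print(loadings[["lval", "rval", "Estimate"]])

Roughly equal loadings across samples are a necessary (though not sufficient) condition for metric invariance, i.e. for comparing the SAS dimensions across the student samples.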


What Predicts the Validity of Self-Reported Paradata? Results from the German HISBUS Online Access Panel.

March 2019 · 74 Reads

Paradata such as user agent strings (UAS) provide important client-side information about the technical conditions of web surveys. If UAS are unavailable, for example due to data protection restrictions, one may instead ask survey participants directly for the required information. Until now it has been unclear whether the validity of these self-reported paradata is determined by participants' general attitudes towards surveys, their willingness to participate and, for technically demanding questions, distraction while answering. To shed light on what predicts the consistency between UAS and self-reported paradata, we use the HISBUS Online Access Panel. The sample comprised 3,137 members whose UAS were known to us and who were asked about the device used (DEV), the operating system (OS), and the web browser (WB). Additionally, data on general attitudes towards surveys (SAS; de Leeuw et al. 2010), survey participation evaluation (SPE; Struminskaya et al. 2015), and multitasking (MT; Zwarun/Hall 2014) were collected. The Big Five personality traits (BF; Rammstedt et al. 2013) serve as covariates in our logistic and ordinal regression analyses for DEV and OS/WB, respectively. Predictors with p<.05 were included in the final models. First of all, UAS and self-reported paradata were highly consistent (kappa: DEV=.95, OS=.95, WB=.87). Agreement on DEV depends on the SAS subscale survey value (OR=1.31) and on SPE (OR=0.80). Agreement on OS/WB was predicted by electronic and non-electronic MT (OR=1.32; OR=0.72). We conclude that directly asking web survey participants is a promising way to obtain valid information about their technical equipment when UAS data are not available. The chance of obtaining valid DEV data is higher if surveys are generally considered valuable, but lower with higher survey participation evaluation scores. The chance of valid OS/WB data is higher when electronic MT is present, which may indicate technical skills; non-electronic MT seems rather distracting and predicts lower chances of valid OS/WB data.
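
The consistency and prediction steps described here map onto standard tooling: Cohen's kappa for agreement, then a logistic regression on the agreement indicator. The sketch below is a minimal illustration in Python, assuming scikit-learn and statsmodels; the column names (dev_uas, dev_self, sas_value, spe) and the synthetic data are hypothetical stand-ins for the study's variables.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(2)
    n = 1000
    df = pd.DataFrame({
        "dev_uas":   rng.choice(["desktop", "mobile"], size=n),
        "sas_value": rng.normal(size=n),
        "spe":       rng.normal(size=n),
    })
    # Self-reports mostly match the UAS classification (synthetic stand-in data).
    flip = rng.random(n) < 0.05
    df["dev_self"] = np.where(flip, rng.choice(["desktop", "mobile"], size=n),
                              df["dev_uas"])

    # Consistency between UAS-derived and self-reported device.
    print("kappa:", cohen_kappa_score(df["dev_uas"], df["dev_self"]))

    # Logistic regression: does agreement depend on survey attitudes?
    df["agree"] = (df["dev_uas"] == df["dev_self"]).astype(int)
    X = sm.add_constant(df[["sas_value", "spe"]])
    fit = sm.Logit(df["agree"], X).fit(disp=0)
    print(np.exp(fit.params))  # exponentiated coefficients = odds ratios

Exponentiating the coefficients yields odds ratios directly comparable to the OR=1.31 and OR=0.80 figures reported in the abstract.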



Experimental Study on the Methodology of Online Surveys (ESMO): Design of a Randomized Controlled Trial Among Students in Germany

July 2017 · 9 Reads

In higher education and science studies, we typically interview young and educated individuals, e.g. students, graduates, doctoral students and researchers. Most of them are intensive computer and Internet users and thus easily reachable via online surveys. This method of data collection is therefore gaining importance at the German Centre for Higher Education Research and Science Studies (DZHW), whose long-term survey series are successively being converted to this mode of data collection. In our view, this raises a number of methodological questions regarding mode effects:

(1) To date, little is known about where within the invitation letter the survey link should be placed in order to minimize nonresponse.
(2) It is still not clear to what extent mentioning paradata within the informed consent form (IC) influences participants' decision to take part in the survey.
(3) If acceptance of such an extended IC is lacking, participants could instead be asked to directly provide data that is otherwise collected automatically (e.g. about the device used and its configuration, screen orientation and resolution, question time and duration). But it is unclear whether respondents are willing and able to provide valid information on this.
(4) With increasing mobile Internet usage, a growing proportion of mobile respondents take part in online surveys that were designed to be answered on desktop computers. Responsive web design is used to optimize the survey experience for both user groups, but it is currently uncertain to what extent data quality varies with it.
(5) Gamification (i.e. integrating playful elements) could encourage more participants to complete the survey. However, hardly any experience is available for this innovative approach.
(6) A further question concerns the administration of complex question types. Extensive item batteries could be divided in a variety of ways and may thus yield better data.
(7) The survey setting also plays an important role in online surveys. The social situation of mobile and non-mobile respondents is very different. It can be assumed that this is reflected in response behavior and should be taken into account in data analysis.
(8) Online surveys are often associated with low participation. The acceptance of a short survey for non-responders should therefore be determined in order to at least obtain information on a pre-defined set of core variables.

In 2017, an Experimental Study on the Methodology of Online Surveys (ESMO 1) will be carried out. It will investigate the eight questions outlined above regarding participation and data quality, which have not been sufficiently considered in existing methods research. About 25,000 participants of an online access panel, consisting of randomly recruited students at German higher education institutions, are invited. They are randomized into various conditions of the first seven scenarios; non-participants receive the invitation to a short survey instead of a last reminder. At the conference we will present the complete randomized controlled trial design as well as initial results, including recruitment figures and power analyses.
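
Randomly assigning panel members to crossed experimental conditions like these takes only a few lines. The following is a minimal sketch in Python with pandas, where the scenario names and two-arm structure are hypothetical illustrations, not the actual ESMO design.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    panel = pd.DataFrame({"panelist_id": range(25_000)})

    # Independently randomize each panelist into one arm per scenario
    # (hypothetical two-arm scenarios; the real design may differ).
    scenarios = {
        "link_position": ["top", "bottom"],
        "ic_paradata":   ["mentioned", "not_mentioned"],
        "gamification":  ["on", "off"],
    }
    for name, arms in scenarios.items():
        panel[name] = rng.choice(arms, size=len(panel))

    # Check balance of the resulting cell sizes.
    print(panel.groupby(list(scenarios)).size())

Independent randomization per scenario yields a full factorial design, so each scenario can be analyzed separately while the cell counts stay approximately balanced.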


[Comparing the Data Quality of 'Unintended Mobile Responders' and 'Non-Mobile Responders' of a Nationwide Online Survey on the Research Conditions at German Universities (Preliminary Results of the DZHW Scientists Survey 2016)]

September 2016 · 32 Reads

With increasing mobile Internet use, a growing share of people participate via mobile devices in online surveys that were designed to be answered on a computer. There is evidence that the response behavior of these people differs from that of the other respondents. The aim of this contribution is to examine, within an ongoing nationwide survey of academic staff at German higher education institutions, how large the share of such 'unintended mobile responders' is and to what extent data quality varies with the type of device used. The sample currently comprises 6,243 academic staff aged between 21 and 76, whom we grouped according to the device used into mobile phone/tablet users (MTN) and computer/laptop users (CLN). Data quality was assessed via the proportions of missing values, answered open-ended questions, forced-answer prompts and survey break-offs, as well as the length of answers to open-ended questions, the number of selections in multiple-choice items, and survey duration. The inclusion of further indicators (e.g. response tendencies, primacy/recency effects) as well as subgroup analyses of device-switching respondents are in preparation. Among the survey participants are 296 (4.7%) MTN and 5,947 (95.3%) CLN. Compared with CLN (4.3%), MTN show lower proportions of missing values (3.3%; p<.001). Open-ended questions were answered more often by CLN (39.8%) and in more detail, with an average of 282 characters (MTN: 33.7%; p<.001; 199 characters; p<.01). MTN selected fewer applicable answer options than CLN (3.1 vs. 4.6; p<.001) and were more likely to receive a forced-answer prompt during the survey (50.7%; CLN: 34.9%; p<.001). For CLN we found lower break-off rates than for MTN (39.4% vs. 59.5%; p<.001) but similar completion times (51 vs. 53 minutes; p<.05). The share of 'unintended mobile responders' in an online survey of academic staff at German higher education institutions is small, but the quality of their data differs significantly from that of the other respondents. The extent to which the error rates are confounded with the preference for a particular device will be discussed in the contribution.
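
Group comparisons of data-quality indicators like these reduce to proportion comparisons per device group with an accompanying chi-square test. Below is a minimal sketch in Python with scipy; the respondents DataFrame, its column names, and the synthetic rates are hypothetical stand-ins for the survey's actual data.

    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(3)
    n = 2000
    df = pd.DataFrame({
        "device": rng.choice(["mobile_tablet", "computer_laptop"],
                             size=n, p=[0.05, 0.95]),
        "broke_off": rng.random(n) < 0.4,  # synthetic break-off indicator
    })

    # Share of break-offs per device group, then a chi-square test on the 2x2 table.
    print(df.groupby("device")["broke_off"].mean())
    table = pd.crosstab(df["device"], df["broke_off"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}")

The same pattern (group share plus contingency test) applies to each of the indicators listed in the abstract, from missing values to answered open-ended questions.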

Citations (1)


... Previous research that investigated attitudes toward surveys used one-dimensional to five-dimensional scales when measuring survey attitudes (Hox et al. 1995; Loosveldt and Storms, 2008; Rogelberg et al. 2001; Stocké and Langfeldt, 2004; Stocké, 2006, 2014). Hox et al. (1995) proposed a one-dimensional general attitude towards surveys, based on eight items. ...

Reference:

Development of an international survey attitude scale: measurement equivalence, reliability, and predictive validity
Survey Attitude Scale (SAS) Revised: A Randomized Controlled Trial among Higher Education Graduates in Germany