Artem Zadorozhnyy

The Education University of Hong Kong | ied · Department of English Language Education (ELE)

Doctor of Philosophy
Postdoctoral Fellow, Education University of Hong Kong

About

4
Publications
10,251
Reads
9
Citations
Citations since 2017
4 Research Items
9 Citations
Introduction
I am conducting my research in the field of Second Language Acquisition. In broad terms, I analyze the impact of technologies on the teaching and learning of foreign languages. Specific areas of research in which I am currently involved include informal digital learning of English, L2 digital literacies, digital storytelling, and online learning.

Publications

Publications (4)
Chapter
The XXIst International CALL Research Conference was hosted by Waseda University, Tokyo, Japan on July 8-10, 2022. The theme of the conference was Smart CALL.
Chapter
Full-text available
An extensive number of studies acknowledge the transformed nature of literacies by building on the complexity of multimodal semiotic repertoires and available digital resources (Reinhardt & Thorne, 2019; Toffoli, 2020). The exposure to such resources and tools makes digital literacies dynamic as environments provide students with opportunities to a...
Article
With the immense presence of English language video content in the online digital environment and students’ everyday exposure to multimedia content, this project aims to explore how to replace traditional in-class presentation with video presentation within an autonomous learning environment, examine the impact of doing so on the development of Eng...
Conference Paper
Full-text available
Nowadays, technologies have changed the ways we perceive every part of our life, and the sphere of education is not an exception. In particular, Web 2.0 technologies provide opportunities for new generations of learners to enhance their process of English language acquisition in various ways, allowing them to communicate with other learners from al...

Questions

Questions (5)
Question
Asking to confirm my knowledge in R: when conducting SEM analysis, is it enough to report robust fit indices to account for possible outliers, or is running a Mahalanobis distance check (for example) still essential? Thanks in advance.
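One way to settle this empirically is to run the Mahalanobis screen yourself in base R and see whether any cases are actually flagged before deciding how much weight to put on robust indices alone. A minimal sketch, assuming a simulated data frame `dat` (the variable names are illustrative, not the actual scale items):

```r
# Minimal sketch: screening for multivariate outliers with Mahalanobis
# distance in base R. The data frame `dat` is simulated for illustration.
set.seed(42)
dat <- data.frame(item1 = rnorm(200),
                  item2 = rnorm(200),
                  item3 = rnorm(200))

# Squared Mahalanobis distance of each case from the multivariate centroid
d2 <- mahalanobis(dat, center = colMeans(dat), cov = cov(dat))

# Under multivariate normality, d2 follows a chi-square distribution with
# df = number of variables; p < .001 is a common flagging criterion
cutoff   <- qchisq(0.999, df = ncol(dat))
outliers <- which(d2 > cutoff)
```

Robust (e.g., MLR-scaled) indices downweight the influence of non-normality on the fit statistics, but they do not identify influential cases, so many reviewers still expect an explicit outlier screen such as this alongside the robust indices.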
Question
My question concerns the rather unclear issue of error correlation that many scholars encounter while conducting SEM (structural equation modeling) analysis. Scholars quite often report correlating error terms to improve the overall goodness of fit of their models. Hermida (2015), for instance, provided an in-depth analysis of this issue and pointed out that in many social science studies researchers do not provide appropriate justification for the error correlation. I have read in Harrington (2008) that measurement errors can result from similar or near-identical meanings of the words and phrases in the statements that participants are asked to assess. Another option for justifying such a correlation concerns longitudinal studies and an a priori justification of the error terms based on the nature of the study variables.
In my case, I have two items with modification indices above 20:
     lhs op   rhs     mi   epc sepc.lv sepc.all sepc.nox
12 item1 ~~ item2 25.788 0.471   0.471    0.476    0.476
After correlating the errors, the model fit appears very good (the model consists of 5 first-order latent factors and 2 second-order latent factors; n = 168; around 23 items). However, I am concerned about how to justify the error-term correlation. In my case, the wording of the two items is very similar: "With other students in English language class I feel supported" (item 1) and "With other students in English language class I feel supported" (item 2) (Likert scale from 1 to 7). According to Harrington (2008), this is enough to justify the correlation between the errors.
However, I would appreciate any comments on whether similar wording of questions is sufficient justification for correlating errors.
Any further real-life examples of item/question wording, or articles on the same topic, are also much appreciated.
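For concreteness, this workflow can be sketched in lavaan with the package's built-in Holzinger-Swineford data; the factor and item names below are placeholders for the actual scale, and `x2 ~~ x3` stands in for the item1-item2 residual covariance:

```r
# Hedged sketch of the modification-index workflow in lavaan, using the
# built-in HolzingerSwineford1939 data in place of the real questionnaire.
library(lavaan)
data("HolzingerSwineford1939")

model_base <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
'
fit_base <- cfa(model_base, data = HolzingerSwineford1939)

# Screen for large modification indices (the ~~ rows are residual covariances)
modindices(fit_base, sort = TRUE, minimum.value = 10)

# Re-specify with one residual covariance, justified a priori by the
# near-identical wording of the two items (analogous to item1 ~~ item2)
model_corr <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  x2 ~~ x3
'
fit_corr <- cfa(model_corr, data = HolzingerSwineford1939)
fitMeasures(fit_corr, c("chisq", "df", "pvalue", "cfi", "rmsea", "srmr"))
```

Hermida's (2015) point survives in code form: the `x2 ~~ x3` line should be defensible on substantive grounds (shared wording, shared method) before the modification index is ever consulted, not merely because it improves fit.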
Question
Currently, I am running a model with 6 variables, 2 of which are dependent. I also have three to five items per variable (23 items overall). The fit of the measurement model is quite good except for the chi-square value (302.149, df = 213, p = .000). Since the chi-square test is significant (rejecting exact fit), what procedures should I follow in this case? Could eliminating some of the excess items help? If so, what parameters apart from factor loadings do I need to check?
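One common response, sketched below in lavaan with the built-in Holzinger-Swineford data (the model is illustrative, not the actual 6-factor, 23-item model): report the scaled chi-square from a robust estimator together with approximate-fit indices, since the exact-fit chi-square test is sensitive to sample size and to minor, substantively trivial misspecification.

```r
# Hedged sketch: supplementing a significant chi-square with robust/scaled
# fit indices via the MLR estimator. Model and data are illustrative only.
library(lavaan)
data("HolzingerSwineford1939")

model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939, estimator = "MLR")

# Scaled test statistic plus approximate-fit indices; CFI >= .95 and
# RMSEA <= .06 are the widely cited Hu & Bentler (1999) rules of thumb
fitMeasures(fit, c("chisq.scaled", "df", "pvalue.scaled",
                   "cfi.robust", "rmsea.robust", "srmr"))
```

Dropping items can improve fit, but any trimming is better justified by low loadings, large residual correlations, or content redundancy than by chasing a non-significant chi-square.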

Network

Cited By

Projects

Project (1)
Project
For my PhD project, I am conducting a mixed-methods study that applies self-determination theory to understand how contextual support in formal and informal environments, as well as satisfaction of basic psychological needs, might affect the diversity and quality of students' informal digital practices for learning English. The data were collected in Kazakhstan to shed more light on the phenomenon of informal learning in the Central Asian context, which has remained underexplored to date.