<?xml version="1.0" encoding="utf-8" ?> <rss version="2.0" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"> <channel> <title> <![CDATA[LDD NCERT Search for 'se,phr:&quot;Assessment in Education&quot;']]> </title> <!-- prettier-ignore-start --> <link> /cgi-bin/koha/opac-search.pl?q=ccl=se%2Cphr%3A%22Assessment%20in%20Education%22&#38;sort_by=relevance&#38;format=rss </link> <!-- prettier-ignore-end --> <atom:link rel="self" type="application/rss+xml" href="/cgi-bin/koha/opac-search.pl?q=ccl=se%2Cphr%3A%22Assessment%20in%20Education%22&#38;sort_by=relevance&#38;format=rss" /> <description> <![CDATA[ Search results for 'se,phr:&quot;Assessment in Education&quot;' at LDD NCERT]]> </description> <opensearch:totalResults>10</opensearch:totalResults> <opensearch:startIndex>0</opensearch:startIndex> <opensearch:itemsPerPage>50</opensearch:itemsPerPage> <atom:link rel="search" type="application/opensearchdescription+xml" href="/cgi-bin/koha/opac-search.pl?q=ccl=se%2Cphr%3A%22Assessment%20in%20Education%22&#38;sort_by=relevance&#38;format=opensearchdescription" /> <opensearch:Query role="request" searchTerms="q%3Dccl%3Dse%252Cphr%253A%2522Assessment%2520in%2520Education%2522" startPage="" /> <item> <title> Insight from research across the world: examining student responses, assessment practices, and feedback engagement </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=174390</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Hopfenbeck, Therese.<br /> United Kingdom: Taylor &amp; Francis, 2024 , The first article, by Steinmann et al. (2024), investigates patterns in student questionnaire responses in the TIMSS 2019 study.
More specifically, the researchers examined which students are more likely to respond inconsistently to mixed-worded questionnaire scales, and which country samples have larger shares of inconsistent respondents. At the student level, Steinmann et al. (2024) investigated four predictor variables of inconsistent responding, separately and in a joint model: 1) mathematics achievement, 2) student age, 3) language at home, and 4) gender. Across the different models, countries, and grade levels, mathematics achievement stood out as the strongest and most consistent predictor of inconsistent responding. Interestingly, girls in many countries responded more consistently than boys, and students speaking the test language at home were more likely to respond consistently. The authors discuss interpretations of these findings, noting that the study design does not allow them to directly test their preferred explanation: that inconsistent responding reflects a lack of skills rather than carelessness. </p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=174390">Place hold on <em>Insight from research across the world: examining student responses, assessment practices, and feedback engagement</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=174390</guid> </item> <item> <title> Who responds inconsistently to mixed-worded scales? Differences by achievement, age group, and gender </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=174391</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Steinmann, Isa.<br /> United Kingdom: Taylor &amp; Francis, 2024 , We investigated two research questions: which students are more likely to respond inconsistently to mixed-worded questionnaire scales, and which country samples have larger shares of inconsistent respondents?
We defined an inconsistent response as strongly agreeing or disagreeing with both positively and negatively worded items of the same scale. Since we assumed that inconsistent responding occurs due to a lack of carefulness, reading, or cognitive skills, we expected to find that inconsistent responding was associated with lower achievement, younger age, being a nonnative speaker, and being a boy. We used data from all 38 countries that participated in the fourth- and eighth-grade assessments of TIMSS (Trends in International Mathematics and Science Study) 2019. Using the mean absolute difference method, we identified shares of 1‒21% inconsistent respondents across samples. The results generally supported our hypotheses, especially the hypothesis that inconsistent responding is more common among students and countries with lower mathematics achievement levels. </p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=174391">Place hold on <em>Who responds inconsistently to mixed-worded scales? Differences by achievement, age group, and gender</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=174391</guid> </item> <item> <title> Moderation of non-exam assessments: a novel approach using comparative judgement </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=174392</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Chambers, Lucy.<br /> United Kingdom:Taylor &amp; Francis,2024 , In England, some secondary-level qualifications comprise non-exam assessments which need to undergo moderation before grading. Currently, moderation is conducted at centre (school) level. This raises challenges for maintaining the standard across centres. Recent technological advances enable novel moderation methods that are no longer bound by centre. 
This study used simulation to investigate the feasibility of using comparative judgement (CJ) for moderating non-exam assessments. Our study explored the effects of CJ design parameters on the CJ estimates of script quality and how to assign moderator marks after the CJ procedure. The findings showed that certain design parameters had substantial effects on reliability and suggested minimum values for CJ protocols. The method used for assigning moderator marks maintained the rank order of scripts within centres and calibrated the centres to a common standard. Using CJ for moderation could transform current assessment practices, taking advantage of technological developments and ensuring reliability and fairness. </p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=174392">Place hold on <em>Moderation of non-exam assessments: a novel approach using comparative judgement</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=174392</guid> </item> <item> <title> Lower secondary school teachers’ arguments on the use of a 26-point grading scale and gender differences in use and perceptions </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=174393</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Gamlem, Siv M..<br /> United Kingdom: Taylor &amp; Francis, 2024 , This study explores lower secondary school teachers’ arguments for and perceptions of using a 26-point grading scale (26-PGS), and gender differences in assessment practice. An explanatory sequential design was used. First, teachers’ (n = 6) assessments of students’ texts (n = 182) were analysed. In the subsequent phase, an open-ended questionnaire with teachers (n = 54) was conducted and analysed. The study revealed that the teachers perceive that the 26-PGS provides precision.
Teachers highlight the significance of using the 26-PGS as an alternative assessment method, aiming to foster students’ growth and to motivate learning. In addition, gender disparities in teachers’ provision of grades and arguments for using a 26-PGS as part of their assessment practice were found. The study contributes to the existing literature by shedding light on teachers’ assessment practice and gender differences regarding the use of grading scales, and discusses potential challenges in educational contexts. </p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=174393">Place hold on <em>Lower secondary school teachers’ arguments on the use of a 26-point grading scale and gender differences in use and perceptions</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=174393</guid> </item> <item> <title> Searching students’ reflective writing for linguistic correlates of their tendency to ignore instructors’ feedback </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=174394</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Nash, Robert A..<br /> United Kingdom: Taylor &amp; Francis, 2024 , Students who ignore feedback are poorly positioned to reap its intended benefits. In this study, we examined three reflective assignments written by undergraduate Psychology students about their experiences of receiving feedback. We also recorded what proportion of their instructors’ feedback each student had accessed during the first two years of their degree, plus their average grades. Using linguistic text analysis software, we searched for linguistic features of students’ reflective writing that were statistically associated with their tendency to ignore instructors’ feedback. We found no meaningful associations between feedback-accessing and students’ language use.
Exploratory analyses, however, indicated that a greater tendency to ignore feedback was associated with lower grades, and that students with lower grades tended to focus relatively more on the past or present in their reflections than on the future. We discuss the possible merits of using language as an indirect measure in studies of feedback literacy. </p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=174394">Place hold on <em>Searching students’ reflective writing for linguistic correlates of their tendency to ignore instructors’ feedback</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=174394</guid> </item> <item> <title> An exploratory field study of students’ memory for written feedback comments </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=195149</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Winstone, Naomi E..<br /> UK : Taylor &amp; Francis , 2024 .<br /> 296 p. , Feedback information can be a powerful influence on learning, yet there is currently insufficient understanding of the cognitive mechanisms responsible for these effects. In this exploratory study, students (N = 279) received teacher feedback on a practice exam paper, and a few days later we assessed the amount and type of feedback information they successfully remembered. Overall, students performed relatively poorly, recalling on average just 25% of the coded feedback comments they had received. We found that students were more likely to remember critique comments than praise, and more likely to recall critique that was process-focused rather than task-focused. In contrast with recent laboratory studies, though, we found minimal evidence of a memory advantage for evaluative critique over directive critique.
We call for greater understanding and measurement of learners’ cognitive processing of feedback information, as a means to develop more robust scientific accounts of how and when feedback is impactful. </p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=195149">Place hold on <em>An exploratory field study of students’ memory for written feedback comments</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=195149</guid> </item> <item> <title> Mapping oral feedback interactions in young pupils’ writing </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=195151</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Winstone, Naomi E..<br /> UK : Taylor &amp; Francis , 2024 .<br /> 296 p. , The quality of feedback interactions, when young pupils write, influences their learning processes. Still, teachers tend to use feedback that provides little information to enhance pupils’ understanding and learning regarding their literacy skills. More knowledge about feedback interactions for young pupils as they write is needed. Thus, we wanted to investigate: What characterises oral feedback interactions in classrooms between teachers and young pupils while pupils write? Observations were collected using video recordings from 14 second-grade classrooms in Norway (pupils 7 years old). Seventy-two hours of video-recorded lessons were studied using thematic analysis. The results show a pattern where teachers praise general ability at the self-level and correct specific mistakes at the task level, while little information about writing strategies is provided.
</p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=195151">Place hold on <em>Mapping oral feedback interactions in young pupils’ writing</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=195151</guid> </item> <item> <title> A self-feedback model (SEFEMO): secondary and higher education students’ self-assessment profiles </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=195152</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Winstone, Naomi E..<br /> UK : Taylor &amp; Francis , 2024 .<br /> 296 p. , While self-assessment is a widely explored area in educational research, our understanding of how students assess themselves, or in other words, generate self-feedback, is quite limited. The self-assessment process has been a black box that recent research is trying to open. This study explored and integrated two data collections (secondary and higher education) that investigated students’ real actions while self-assessing, aiming to disentangle self-assessment into more precise actions. Our goal was to identify self-assessment processes and profiles to better understand what happens when students self-assess and to design and implement better interventions. By combining these data, we were able to explore the differences between secondary and higher education students and the effects of external feedback on self-assessment, and to propose a model of ideal self-assessment (SEFEMO). Using think-aloud protocols, direct observation and self-reported data, we identified six main actions (read, recall, compare, rate, assess, and redo) and four self-assessment profiles. In general, secondary and higher education students showed the same actions and very similar profiles. External feedback had a negative effect on the self-assessment actions except for the less advanced self-assessors.
Based on data from more than 500 self-assessment performances, we propose a model of self-feedback. </p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=195152">Place hold on <em>A self-feedback model (SEFEMO): secondary and higher education students’ self-assessment profiles</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=195152</guid> </item> <item> <title> The conceptualisation implies the statistical model: implications for measuring domains of teaching quality </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=195153</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By White, Mark.<br /> UK : Taylor &amp; Francis , 2024 .<br /> 296 p. , Classroom observation rubrics are a widely adopted tool for measuring the quality of teaching and provide stable conceptualisations of teaching quality that facilitate empirical research. Here, we present four statistical approaches for analysing data from classroom observations: factor analysis, Rasch modelling, latent class or profile analysis, and formative measurement models. Each statistical model conceptualises the latent variable differently, which may or may not align with the observation rubric’s conceptualisation of teaching quality. We discuss the differences across these models, focusing on the alignment between the rubric’s conceptualisation of teaching quality and the model’s treatment of the latent variable. We highlight the need to align model selection with the observation rubric so that the measured teaching quality reflects the theoretically conceptualised teaching quality.
</p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=195153">Place hold on <em>The conceptualisation implies the statistical model: implications for measuring domains of teaching quality</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=195153</guid> </item> <item> <title> EduSEL-R – the refined educators’ social-emotional learning questionnaire: expanded scope and improved validity </title> <dc:identifier>ISBN:</dc:identifier> <!-- prettier-ignore-start --> <link>/cgi-bin/koha/opac-detail.pl?biblionumber=195154</link> <!-- prettier-ignore-end --> <description> <![CDATA[ <p> By Kasperski, Ronen.<br /> UK : Taylor &amp; Francis , 2024 .<br /> 296 p. , Educators’ Social-Emotional Learning (SEL) is crucial for fostering positive, supportive, and effective learning environments. This study seeks to improve SEL assessment among educators by addressing limitations of the previous EduSEL questionnaire. Study 1 established convergent validity by comparing EduSEL with a validated SEL questionnaire. Study 2 focused on expanding the scope of the assessment by adding new items. In Study 3, exploratory factor analysis yielded a four-factor structure comprising self-management, ethical problem solving, self- and social awareness, and relationship skills. In Study 4, the five most robust items from each subscale (20 items) were assessed in confirmatory factor analysis, yielding good model fit indices for the four-factor structure, high reliability (α = .92) and adequate convergent validity. To summarise, the EduSEL-R questionnaire exhibits closer alignment with the CASEL framework and encompasses additional SEL competencies. Thus, the current study contributes to the advancement of a comprehensive SEL assessment.
</p> ]]> <![CDATA[ <p> <a href="/cgi-bin/koha/opac-reserve.pl?biblionumber=195154">Place hold on <em>EduSEL-R – the refined educators’ social-emotional learning questionnaire: expanded scope and improved validity</em></a> </p> ]]> </description> <guid>/cgi-bin/koha/opac-detail.pl?biblionumber=195154</guid> </item> </channel> </rss>
