Friday, May 29, 2015

Valid and Reliable Assessments To Measure Scale Literacy of Students in Introductory College Chemistry Courses

Karrie Gerlach, Jaclyn Trate, Anja Blecking, Peter Geissinger, and Kristen Murphy
Journal of Chemical Education 2014 91 (10), 1538-1545
DOI: 10.1021/ed400471a
ABSTRACT: http://pubs.acs.org/doi/abs/10.1021/ed400471a?journalCode=jceda8&quickLinkVolume=91&quickLinkPage=1538&selectedTab=citation&volume=91
The goal of this study was to develop and test valid and reliable assessments that measure the scale literacy of students in introductory chemistry courses (like Chemistry 31 and maybe 30A at LPC). The authors define validity and reliability as follows: “Validity is determined by multiple methods, including experts constructing and analyzing the test items, the use of student responses and item statistics to edit and select test items, and comparison of the assessment measurement to other valid measures.¹⁶ The reliability of an assessment instrument can be defined as the consistency of its measurement each time it is used under the same set of conditions with the same group of subjects.” (A sketch of one common way to estimate reliability follows the research questions below.) Their study was guided by the following research questions:
1. How can scale literacy be measured for classwide assessment?
2. What is the scale literacy of introductory college chemistry students?
3. How does scale literacy predict performance in general chemistry?
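The paper reports standard psychometric reliability statistics (see the article for the exact values and methods). As a rough illustration only, here is a minimal Python sketch of one common reliability estimate, Cronbach's alpha for internal consistency; the function, array shape, and toy data are my own assumptions and are not taken from the paper.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability (Cronbach's alpha).

    `scores` is a 2-D array with one row per student and one column per
    test item (e.g., 0/1 for SLST items or 1-5 Likert ratings for the SCI).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_var = scores.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy data: 5 students x 4 dichotomously scored items (illustrative only)
responses = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```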

The two assessments developed were (see the article for how they were developed and validated):
Scale Literacy Skills Test (SLST): The final version had 45 items, 36 of which were assigned to categories of the Scale Concept Trajectory developed by Jones (see the article). The other 9 items pertain to macroscopic and particle representations of matter (see the distribution in the original post).
Scale Concept Inventory (SCI): The final version contains 40 statements scored on a 5-point Likert scale, a five-option continuum from strongly agree (5) to strongly disagree (1). Twenty-three of the statements were written to elicit a positive (agree) response, while 13 were written for a negative (disagree) response; this mix was used to ensure that students read each statement. In addition, a verification item was used to identify students who did not use the SCI correctly (for example, by not reading the statements, not understanding the rating scale, or simply entering random responses). A scoring sketch follows below.
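As an illustration of how such an inventory might be scored, here is a minimal Python sketch that reverse-codes the negatively worded statements and flags students who miss the verification item. The item numbers, expected verification response, and function name are hypothetical placeholders, not taken from the paper.

```python
from typing import Dict

# Hypothetical layout (for illustration only): item numbers of the negatively
# worded statements and of the verification item.
NEGATIVE_ITEMS = {2, 5, 11, 17}
VERIFICATION_ITEM = 41
EXPECTED_VERIFICATION = 5

def score_sci(responses: Dict[int, int]) -> dict:
    """Score one student's SCI responses (item number -> Likert rating 1-5)."""
    # Flag students who miss the verification item (e.g., random clicking).
    valid = responses.get(VERIFICATION_ITEM) == EXPECTED_VERIFICATION

    total = 0
    for item, rating in responses.items():
        if item == VERIFICATION_ITEM:
            continue
        # Reverse-code negatively worded statements so a higher score always
        # reflects a more expert-like response pattern.
        total += (6 - rating) if item in NEGATIVE_ITEMS else rating
    return {"total": total, "valid": valid}

# Example: agrees with (positive) item 1, disagrees with (negative) item 2,
# and answers the verification item as expected.
print(score_sci({1: 5, 2: 1, VERIFICATION_ITEM: 5}))
```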
The participants were students from a 4-unit preparatory chemistry course requiring college algebra or a qualifying math placement test score (like Chemistry 31 at LPC), students from a general chemistry I course with college algebra and preparatory chemistry prerequisites (or placement tests) (like Chemistry 1A at LPC), and experienced graduate students serving as experts.
Test Administration:
Both the SCI and SLST were administered to participants in both courses at the beginning and the end of the semester (the SCI only beginning in 2010).
Results of the SLST: See the graph in the original post. Authors’ summary: “The validation studies of the inventory revealed areas of particular need where students again struggle with concepts related to scale. Included in this are concepts related to the continuity of matter, number sense, magnification and the definition of a macroscopic versus a particle-level property. Incidentally, these definitions were also an issue on the SLST where the two lowest performing items for both groups were these definitions.”
To identify predictors of success in the course, the authors analyzed correlations between scores on the various assessments and performance on final exams. Some notable results (quoted verbatim to preserve the precision of the statements):
For general chemistry I, correlations were analyzed between scores on the various tests and performance on two different ACS final exams. The overall result: “The scale literacy measure was the best predictor for performance on the conceptual final exam and the same as the combined placement test for the other final exam.”
For the preparatory chemistry course, correlations were analyzed between scores on the various tests (excluding the chemistry placement) and both the final exam and the overall class percent: “Unlike the results found for general chemistry I, the results for preparatory chemistry show a better correlation between the final exam and the ACT composite or mathematics score and the final percent in the course and the mathematics placement test score. This suggests that the level of content understanding expected of the students in this course differs from that in general chemistry I, particularly with those concepts related to scale. Indeed, the focus of this course and many remediation or preparatory chemistry courses is the ability to solve problems and begin to understand some of the language of chemistry. However, the scale literacy score is still a moderate and significant predictor of success.”
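To make the kind of analysis described above concrete, here is a minimal Python sketch that computes Pearson correlations between each predictor and a final-exam score. The predictor names and toy numbers are placeholders of my own, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Toy scores for six students (placeholders, NOT the study's data).
predictors = {
    "SLST": np.array([30, 25, 38, 22, 34, 28]),
    "SCI": np.array([150, 128, 172, 120, 160, 140]),
    "math_placement": np.array([78, 60, 92, 55, 85, 70]),
}
final_exam = np.array([72, 58, 90, 50, 84, 66])

# Correlate each predictor with final-exam performance.
for name, scores in predictors.items():
    r, p = pearsonr(scores, final_exam)
    print(f"{name:>15}: r = {r:.2f}, p = {p:.3f}")
```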
Overall summary by the authors on predictors for course success:

As individual assessments, the SLST and SCI had moderate, significant, positive correlations to final exam scores. Similar results were found for other traditional measures. However, when considering combined scores (ACT composite, combined placement or combined scale literacy score), the best predictor of success by conceptual final exam performance²⁷ was the scale literacy score. In other words, students with higher scale literacy are predicted to perform better on conceptual test items in a general chemistry course. This relationship was also found when examining similar predictors for success in a preparatory chemistry course; however, the best predictors of success in this course were the traditional measures of mathematics knowledge: ACT composite or mathematics score.
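The “combined” scores mentioned above imply aggregating several measures into a single predictor. One simple way to do this (an illustration under my own assumptions, not necessarily the authors' weighting scheme) is to standardize each component and sum the z-scores before correlating the result with the final exam:

```python
import numpy as np
from scipy.stats import pearsonr, zscore

# Toy component scores for six students (placeholders, not the study's data).
slst = np.array([30, 25, 38, 22, 34, 28])
sci = np.array([150, 128, 172, 120, 160, 140])
final_exam = np.array([72, 58, 90, 50, 84, 66])

# A simple "combined scale literacy score": sum of standardized components,
# so tests reported on different scales contribute comparably.
combined = zscore(slst) + zscore(sci)

r, p = pearsonr(combined, final_exam)
print(f"combined scale literacy vs final exam: r = {r:.2f}, p = {p:.3f}")
```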
