
Known-Groups Validity and Generalizability of a Measure of Engineering Design

Posted on: 2018-03-18
Degree: Ph.D
Type: Dissertation
University: University of South Alabama
Candidate: Hibberts, Mary F
Full Text: PDF
GTID: 1447390002495309
Subject: Instructional design
Abstract/Summary:
Numerous reports have increased national awareness of the need to improve K-12 education in science, technology, engineering, and mathematics (STEM) in order to meet the needs of our increasingly technological workforce (e.g., Honey, Pearson, & Schweingruber, 2014; National Academy of Science, 2007; The President's Council of Advisers on Science and Technology, 2010). Engaging Youth through Engineering (EYE) modules were developed to increase interest and proficiency in STEM fields in middle schools in Mobile, Alabama (Harlan, Pruet, Van Haneghan, & Dean, 2014). The modules covered relevant engineering design challenges integrated into existing science and mathematics curricula and focused on the engineering design process. Initial results were promising and showed that the EYE program was affecting students' engineering design performance and attitudes in some areas, when compared to a control group (Harlan, Van Haneghan, Dean, & Pruet, 2015). However, the assessment instruments used for measuring engineering design performance require further validity and reliability research before researchers can be confident in interpretations based on the assessment data. The purpose of this study was to evaluate the psychometric properties of three engineering design performance assessments developed for the EYE initiative.

Known-groups validity was tested by comparing engineering design scores from a middle school data set, collected by Harlan et al. (2015), with scores from two groups of college students (i.e., college freshmen with little to no engineering experience and senior engineering students). As expected, senior engineering students had better engineering design performance than the other groups (as measured by the engineering design assessments developed for the EYE program). However, the assessment instruments used to measure engineering design performance yielded inconsistent results when comparing the groups with less engineering experience.
There were also inconsistencies in group differences when comparing scores on four dimensions of engineering design (i.e., depth and breadth of thinking, teams and expertise, critical evaluation of a design, and use of data and research).

A generalizability analysis was used to evaluate the reliability of the three assessment instruments completed by the college students. When considering total performance scores, there was enough generalizability across people, independent of rater and form, to suggest the instruments measured a general underlying engineering design construct. Generalizability coefficients were lower and inconsistent when considering each engineering dimension individually. Overall, the data suggest that total scores from the three engineering design assessments yield reliable results but have weak to moderate validity. Recommendations for future research are discussed, including revisions to the assessments and scoring criteria to increase reliability for engineering dimensions, conducting a generalizability study with middle school students, and testing the psychometric properties of the assessment instruments with additional populations.
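The generalizability coefficients described above come from generalizability theory, which partitions score variance across facets such as raters and forms. As an illustrative sketch only (the dissertation does not report its variance components, so the numbers here are hypothetical), a relative G-coefficient for a fully crossed persons x raters x forms design can be computed like this:

```python
def g_coefficient(var_p, var_pr, var_pf, var_prf_e, n_raters, n_forms):
    """Relative G-coefficient for a fully crossed p x r x f design.

    var_p      : person (universe-score) variance
    var_pr     : person-by-rater interaction variance
    var_pf     : person-by-form interaction variance
    var_prf_e  : residual (person-by-rater-by-form plus error) variance
    """
    # Relative error variance averages each interaction term over the
    # number of conditions sampled for that facet.
    relative_error = (var_pr / n_raters
                      + var_pf / n_forms
                      + var_prf_e / (n_raters * n_forms))
    return var_p / (var_p + relative_error)

# Hypothetical variance components (not taken from the study):
g = g_coefficient(var_p=4.0, var_pr=1.0, var_pf=0.5, var_prf_e=2.0,
                  n_raters=2, n_forms=3)
print(round(g, 3))  # prints 0.8
```

Higher person variance relative to rater- and form-related error yields a coefficient closer to 1; the study's finding of adequate total-score generalizability but weaker dimension-level coefficients corresponds to this ratio shrinking when scores are broken out by dimension.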
Keywords/Search Tags: Engineering, Generalizability, Assessment instruments, Validity, Students, Science, EYE