Paper Summary
Assessing Computational Thinking in Libraries: A Suite of Assessment Tools for Librarians and Library Staff

Fri, April 12, 11:25am to 12:55pm, Philadelphia Marriott Downtown, Floor: Level 5, Salon B

Abstract

The rise of Computational Thinking (CT) has extended beyond schools to include many informal learning spaces, including libraries (Braun & Visser, 2017; Weintrop et al., 2022). While the goals of introducing CT into library contexts are often consistent with those of educators in formal contexts (Authors, 2022), the nature of assessment in libraries and other informal learning spaces (e.g., museums, makerspaces) is very different. Much of this difference stems from the fact that library programming is almost always voluntary: if participants are not enjoying what they are doing (e.g., taking a written test), they are unlikely to continue with the activity or return for future events. Given this arrangement, assessment must take a different form. Assessment is no less important in informal contexts, however, as it is critical for understanding whether designed CT programming is achieving its stated goals.

In our original publication for the special issue, we presented results from interviews with librarians and library staff from across the country on topics related to assessing CT in libraries, including their motivations for assessment, the challenges associated with assessment, and how they would like to assess CT in their libraries (Blinded). In the years since that work was published, our project, inspired by these findings, has developed and piloted a suite of assessment tools designed to meet the goals expressed in these interviews while working within the constraints of the library context. In our presentation, we will share the assessment materials we have designed, which incorporate seven distinct assessment approaches: Talkback Boards, Interview Protocols, Observation Checklists, Surveys, Design Challenges, Artifact Rubrics, and Facilitator Self-Assessments. Each assessment is aligned with a specific desired outcome expressed by librarians and library staff during the interview portion of the study. Each assessment tool comes ready for librarians and library staff to use and includes a facilitator guide that explains how to analyze and interpret the results. All of the assessment tools and accompanying facilitator guides are available for free at [Blinded website].

Along with the assessment materials, we will present case studies of library staff using the assessments in their library programming. These case studies describe each library program, how it was assessed, and results from interviews with the library staff on whether and how the assessment helped them understand the experiences of youth in their programs and whether the program was achieving its goals. For example, a librarian from rural Pennsylvania used our assessments to understand whether a STEM robot-building challenge was achieving its goals. The librarian used the CT Dispositions Checklists and Talkback Boards over the course of her program. Reflecting on the assessments at the end of the program, the librarian shared that “the assessment tools allowed me to ask the correct questions to know what the students really wanted to do,” ending the interview by saying, “I love the assessment tools and will continue to use them.”

Authors