The Teacher Education Tests working group focused on identifying and describing the validity characteristics of tests assessing K–12 teachers’ mathematics knowledge. In this brief summary, we first provide an overview of how we identified articles and associated tests for inclusion, highlighting one notable pattern we observed. Then, we describe two additional findings.
We screened 12,065 articles, published in 21 mathematics education journals between 2000 and 2020, for key terms in the title, abstract, and keywords, identifying 2,249 potentially relevant articles. After manually reviewing the titles and abstracts, we identified 256 articles likely to contain a Teacher Mathematical Knowledge (TMK) test. We then examined these articles' methods sections, confirming that 173 included a TMK test. During this process, we also recorded additional citations related to each test and its validity, retrieving 95 additional source and validity articles.
It was at this point that we noticed one of the most salient patterns for our working group. The tests tended to fall into two distinct categories: those that provided some form of validity evidence, and thus appeared to show intent on the part of the authors to contribute to a public validity record (we termed these community-accountable), and those that were generally designed solely for the study in question (we termed these other). We operationalized community-accountable TMK tests as follows: at least one article associated with the test had to show (a) a stated validity argument; (b) a sustained focus on documenting the instrument development process (e.g., an article written specifically about the instrument's development); and/or (c) explicit intent for use by others. We classified any test not meeting at least one of these criteria as other.
In the next step of our review, we identified which of the original 173 articles and the 95 source and validity articles included community-accountable TMK tests. We found 38 articles that met this criterion. Within this set of articles, we identified 19 community-accountable tests or projects with a total of 35 different test scales. As an example, the Teacher Knowledge Assessment System from the University of Michigan counted as one community-accountable project with eight scales. The number of community-accountable test scales was about a third of the number of other tests (n = 109).
A second salient pattern we noticed was that most community-accountable TMK tests were not frequently represented in mathematics education research publications. One project had eight publications, seven tests/projects had two publications, and the remaining 11 tests/projects had just one publication. We wondered what this indicated about their frequency of use and how that affects our field's ability to make generalizations about teachers' mathematical knowledge.
Lastly, we found few robust validity arguments. Authors rarely provided an explicit statement of a test's intended interpretation and use, and infrequently provided a validity framework to guide their argument. We wondered how common this pattern was across the working groups and what it means for our field.