Monday, April 29, 2019

Making Student Learning Objectives (SLOs) Meaningful

I recently met with a group of teachers to discuss assessments that align with the vision of the new Wisconsin Standards for Science (very similar to the NGSS). When I brought up Student Learning Objectives (SLOs) as an opportunity to collaboratively create aligned assessments and review student work, they shared that their principals required them to use standardized tests that didn’t align well to their discipline-specific visions or standards.

I have heard similar challenges across states, so I wanted to share a few thoughts in this post (a longer version of these ideas can be found here).

The Wisconsin Educator Effectiveness (EE) System is a learning-centered system of continuous improvement designed to support teachers and principals -- a structure for districts to enable meaningful, individualized professional learning.

Student Learning Objectives (SLOs) are one of two goals within an educator’s Educator Effectiveness Plan (EEP). The WI EE System User Guide for Teachers describes the EEP goals as teacher-driven, with “his/her SLO based on his/her subject area, grade-level, and student data.” The teacher also develops a Professional Practice Goal (PPG) based on “self-identified needs for individual improvement” (p. 2) that will ideally relate to their SLOs. It is imperative that teachers have ownership of their learning goals and plans, making them relevant to their subject, their students, and their own needs. Aligning teacher SLO goals with principal SLO goals and/or district goals can create opportunities to leverage collaborative efforts in support of student learning.

When administrators prescribe generic SLOs for their teachers, however, the professional growth intent of the EE System is lost. The power of SLOs lies in analyzing data to inform specific changes in instructional practice. When teachers in every content area draw on data from the same standardized test, they are not empowered to reflect on their own students’ data to inform their own learning. This approach to SLOs wastes an opportunity for deep, collaborative learning.

One example of a district focusing on a common SLO goal, while remaining true to particular subject areas and teacher needs, is Baraboo. Teachers at the high school collaboratively developed a rubric for evidence-based writing that is used across subjects multiple times per year. They review student work from subject-specific writing tasks that would’ve been done as part of a unit anyway--not some additional, artificial tasks. While student products look different in each subject, the rubric emphasizes common skills such as using evidence and discipline-based reasoning. This process has allowed for meaningful cross-disciplinary conversations while staying true to subject matter learning. Over time, it has also improved teacher practice and student outcomes.

Educator Effectiveness is designed to be a collaborative process of setting relevant goals, implementing new instructional practices, reviewing student learning through authentic assessments, and determining what to do next. This cycle of continuous improvement is also referred to as action research or “Plan-Do-Study-Act.” When done well, research shows it to be highly impactful professional development for school improvement. Teachers poring over students’ work together in relation to standards-based learning progressions enhances their practice and student outcomes.

Accomplishing meaningful growth requires that teachers have the opportunity to create or identify assessments that are connected to their unique SLO goals. With direct links to particular standards and classroom learning, these types of assessments are arguably more valid than a standardized test. A performance assessment’s reliability comes from the teachers’ collaborative review of assessment responses in relation to rubrics, establishment of anchor papers to guide their reasoning, and check-ins on one another’s work. For the science world, fabulous examples of 3D assessment resources can be found from Achieve and DPI, with rubric ideas on this DPI website. Notably, when considering equity and bias in testing, disenfranchised students are much less likely to engage in a standardized test than in one that connects directly to their learning, their communities, and their interests and identities.

Looking forward, EE is a process that will endure in education systems because its core elements are the basis of effective professional learning communities and structures for professional development.

Wednesday, April 17, 2019

Students Using Proper Science Vocabulary Can Mask Authentic Understanding

A couple of weeks ago, I participated in a workshop session led by Professor Rosemary Russ of UW-Madison. She shared a story of a mystifying event: her dog tends to sniff around more on walks after it rains. She broke us into small groups, gave us some chart paper, and asked us to discuss why that may be happening. Our group gradually dug in, shared ideas, and started drawing out our thinking (we were modeling, though she never used that term). After a while, groups shared their thoughts, and she asked questions. In particular, she repeatedly pushed us to explain our thinking, our “why,” our understanding of concepts, how our ideas compared with others’ ideas… When I shared, she didn’t let me get away with using the term “volatile” – she made me explain what I meant!

Professor Russ then emphasized that students too often hide a lack of full understanding behind memorized vocabulary words and definitions. In this Illusion of Explanatory Depth, students repeat ideas without fully understanding what they mean. They can’t use these ideas to help make sense of a phenomenon, because they’ve never truly understood them. Often, in typical class discussions, assignments, and assessments, students are able to throw around these words and regurgitate ideas, and they appear to really get it. They pass the test but aren’t pressed. They sound capable but aren’t challenged. The concepts are not retained.

Worksheets, end-of-chapter questions, and in-class note-taking do not constitute strong pillars of instruction. Effective science learning happens when students engage in dialogue about phenomena, revise models, and evaluate whether the evidence they have is sufficient to support one explanation over another. It comes when they have to do the work to make sense of the world, not when the figuring out has already been done for them.

In the conversation with Professor Russ, someone brought up a concern that student explanations might contain “misconceptions” that other students will pick up on. As she noted, that’s an essential part of the scientific process. We hear things all the time in life that are unfounded and not based in accurate or sufficient evidence. In the classroom, it’s critical that students don’t stop after this first stage of sensemaking, just as it’s essential that people don’t stop thinking about and looking for further evidence after reading some random “scientific” article online. After this initial process, students work together to engage in further investigation and evidence gathering. They figure out why a particular explanation doesn’t pass muster. They must figure that out themselves if it’s going to stick; for conceptual change, it does not work to have the teacher jump in and counter an idea.

Importantly, this Illusion of Explanatory Depth does not only happen in science. Students in math can hide behind the algorithm (the formula, the typical problem, etc.) to mask their lack of sensemaking and conceptual understanding. Students in economics, history, or psychology might throw out terms such as “supply and demand” or “culture,” or note theories such as “institutional determinism” or “behaviorism.” As noted in the previous blog post, students should be wrestling with phenomena across subject areas to develop deep understanding and use these ideas as part of their efforts to make sense of various aspects of their world.