A couple of weeks ago I was reading through some of my edu feed and I came across a post about assessing curricular competencies in the new BC science curriculum. The post discussed feedback cycles on the competencies. After reading the post I felt kind of anxious, which isn’t that uncommon for me when I read something that I know I can improve on, or should be doing better with. Later in the day I was still feeling bothered, and it finally dawned on me that when I read about assessing curricular competencies, I end up feeling crappy.
One thing that I’ve always struggled with is adding challenging questions to my assessments within an SBG scheme. Like a lot of people using SBG, I use a 4 point scale. The upper limit on this scale is similar to an A, and for the sake of the post I’ll refer to the top proficiency as “mastery”. If a student were to get an A in a course I teach, roughly speaking they would have to be at the mastery level in at least half of the learning objectives, and even then only if they have no level 2 grades.
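To make that cutoff concrete, here’s a rough sketch of the rule in Python. The function name and scores are just for illustration, and I’m assuming a level 1 would block an A the same way a level 2 does, which the rule above doesn’t spell out.

```python
# Sketch of the "A" rule described above: at least half of the learning
# objectives at mastery (4), and nothing still sitting at level 2 (or, by my
# assumption, level 1). Illustrative only, not my actual gradebook logic.

def earns_a(scores):
    """scores: one proficiency level (1-4) per learning objective."""
    mastery_count = sum(1 for s in scores if s == 4)
    blocked = any(s <= 2 for s in scores)
    return mastery_count >= len(scores) / 2 and not blocked

print(earns_a([4, 4, 4, 3, 3, 3]))  # True: half at mastery, nothing below 3
print(earns_a([4, 4, 4, 3, 3, 2]))  # False: one level 2 blocks the A
```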
I’ve written about my usual SBG scheme here. It works fine, and many students take advantage of learning at a slightly different pace while still getting credit for what they know, once they know it. However, I’m interested in keeping small quizzes primarily in the formative domain, yet using an assessment tool that is based on clear learning objectives, re-testable and flexible. This post talks about a possible transition from using a few dozen learning objectives in quizzes to a new assessment tool built around larger goals.
Today in physics 11 I tried a new lab using our motion and force sensors, carts and tracks. The lab idea is from New Visions and I believe that the script that I was working from was written by Kelly O’Shea and Mark Schober.
I was pretty excited to give it a try because I’ve always just told my students that the area under a force-distance graph is work. With this lab, students develop the idea from direct evidence.
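As a rough picture of what the sensors let students do, here’s a sketch of the calculation with made-up numbers; the point is just that adding up the little force-times-distance strips gives the area under the F-d graph, which is the work.

```python
import numpy as np

# Made-up force readings (N) at measured cart positions (m), standing in for
# the kind of data the force and motion sensors produce.
position = np.array([0.00, 0.10, 0.20, 0.30, 0.40, 0.50])
force    = np.array([2.1, 2.0, 2.2, 1.9, 2.0, 2.1])

# Area under the force-distance graph, built up strip by strip (trapezoid rule).
strips = 0.5 * (force[1:] + force[:-1]) * np.diff(position)
work = float(np.sum(strips))
print(f"Work ≈ {work:.2f} J")
```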
One thing that I’ve struggled with for years is trying to fit in the curricular content for physics 11. I know that I’m weeks behind most physics teachers in BC. I always start the year off with the best of intentions for planning, and the planning is generally ok in that I remain focused on the goals and sequence.*
I’m interested in trying to improve my sequence and scheduling so that it is appropriate in coverage and understanding, and accomplishes what I want it to, recognizing that unit planning is a personal thing even when working within the guidelines of the set curriculum.
This year I had students use Excel for plotting graphs in physics. It went pretty well for constant velocity but was very awkward for constant acceleration. It required students to manually find tangent lines on their x-t graphs, calculate slopes, then put it all together coherently to produce a v-t graph. There are lots of things going on in this task, and while I think many physics teachers see value in this, I also think that performing all the manual steps confounds the foundational ideas in the constant acceleration model.
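For contrast, here’s roughly what the same tangent-line work looks like when the slopes are computed numerically instead of by hand. The data is made up (a cart accelerating at 1.5 m/s²), and I’m using Python rather than Excel just to keep the sketch short.

```python
import numpy as np

# Made-up x-t data for a cart with constant acceleration of 1.5 m/s^2; in class
# this would come from the students' Excel sheets or a motion sensor export.
t = np.arange(0.0, 2.2, 0.2)      # time (s)
x = 0.5 * 1.5 * t**2              # position (m)

# Central-difference slopes do the "tangent line" work in one step,
# turning the x-t data into a v-t data set.
v = np.gradient(x, t)

for ti, vi in zip(t, v):
    print(f"t = {ti:.1f} s, v ≈ {vi:.2f} m/s")
```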
I’ve been asking many of my grade 11 students what their ideal math lesson would look like. Not in terms of content, but in terms of process. I wanted to focus this question on math instead of science because I didn’t want to confound typical learning activities with demonstrations and experiments.
Most of the students cited very similar ideas, as follows:
1. take up questions about homework or last day’s work
2. connect the new material to what they were working on last day
3. possibly give some notes
4. give (lots of) examples
5. have them try some practice questions

#1 above was universal; all students started with this.
One thing I’ve been trying to implement more and more in my units is Performance Tasks. McTighe and Wiggins, in their Understanding by Design framework, say that a Performance Task is an authentic assessment where students demonstrate the desired understandings. In my context, I currently use small SBG quizzes for the bulk of my assessments. Jay McTighe, whom I had the pleasure and privilege of having lunch with, would probably call my quizzes “supplementary” evidence.
My classes just finished doing a conservation of momentum lab. In many ways it was a big disappointment. We ended up spending 2-1/2 classes on the lab, with little to show for it. The general idea was to record position and time data from 6 videos (6 different types of collisions), calculate velocities and momentum, and compare total momentum before and after (a sketch of that calculation follows below). There were lots of problems with this:
Students would make mistakes in recording data or making a calculation, and every mistake helped to obscure the goal of seeing that total momentum doesn’t change.
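For reference, here is the calculation students were grinding through by hand, with made-up positions and masses standing in for readings off the video frames.

```python
# Illustrative numbers only: positions (m) of two carts read off video frames
# before and after a collision, assuming 30 fps video and two 0.50 kg carts.
dt = 1 / 30            # time between analyzed frames (s)
m1, m2 = 0.50, 0.50    # cart masses (kg)

def velocity(x_start, x_end, frames):
    """Average velocity from two position readings separated by `frames` frames."""
    return (x_end - x_start) / (frames * dt)

# Before: cart 1 moving toward a stationary cart 2.
v1_before = velocity(0.100, 0.220, 6)
v2_before = 0.0
# After: the carts stick together and move off with a common velocity.
v_after = velocity(0.400, 0.460, 6)

p_before = m1 * v1_before + m2 * v2_before
p_after = (m1 + m2) * v_after
print(f"total p before ≈ {p_before:.2f} kg·m/s, after ≈ {p_after:.2f} kg·m/s")
```

One transcription slip anywhere in that chain and the before and after totals stop matching, which is exactly the problem above.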
Lately I’ve been thinking a lot about Modeling Instruction (MI) and Cognitive Load Theory (CLT). I started this post a couple of weeks ago and then was further inspired by a post by Brian Frank (if you read both posts you’ll see some similarities). In my head I know that I want to compare them, but that is something that I shouldn’t really do because MI is a teaching and learning methodology while CLT is a theory about how people learn.