I’ve posted a lot about my trials and errors with SBG, but for the most part what I do now is what I was doing in 2017. I’ve written about this scheme here. I’ve given different thoughts to how (or whether) I should incorporate performance tasks, tests, or other summative assessments, but mostly I just use SBG for all my grading.
In 2017 I said that the final mark creation involved some voodoo.
A couple of weeks ago I was reading through my edu feed and I came across a post about assessing curricular competencies in the new BC science curriculum. The post discussed feedback cycles on the competencies. After reading it I felt kind of anxious, which isn’t uncommon for me when I read about something I know I could be doing better. Later in the day I was still feeling bothered, and it finally dawned on me that whenever I read about assessing curricular competencies, I end up feeling crappy.
One thing I’ve always struggled with is adding challenging questions to my assessments within a SBG scheme. Like a lot of people using SBG, I use a 4 point scale. The upper limit on this scale is similar to an A, and for the sake of this post I’ll refer to the top proficiency as “mastery”. Roughly speaking, for a student to get an A in a course I teach, they would have to be at the mastery level in at least half of the learning objectives, and only if they don’t have any level 2 grades.
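To make that rule concrete, here is a minimal sketch of the final-grade check as I’ve described it. This is purely illustrative (the function name, the numeric levels, and the choice to block an A for anything at level 2 or below are my assumptions, not a program I actually run):

```python
# Illustrative sketch of the "A" rule described above (my assumptions):
# - scores are integers on the 4-point proficiency scale, 4 = mastery
# - an A requires mastery on at least half of the learning objectives
# - any score at level 2 (or below) blocks the A

def earns_a(scores):
    """Return True if this list of learning-objective scores earns an A."""
    if not scores:
        return False
    mastery_count = sum(1 for s in scores if s == 4)
    blocked = any(s <= 2 for s in scores)
    return mastery_count >= len(scores) / 2 and not blocked

print(earns_a([4, 4, 4, 3, 3, 3]))  # half at mastery, nothing at level 2
print(earns_a([4, 4, 4, 3, 3, 2]))  # a single level-2 score blocks the A
```

In practice my final mark creation involves more judgment than this (the “voodoo” mentioned earlier), so a hard cutoff like this is only an approximation of the rule of thumb.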
I’ve written about my usual SBG scheme here. It works fine, and many students take advantage of learning at a slightly different pace while still getting credit for what they know, once they know it. However, I’m interested in keeping small quizzes primarily in the formative domain, yet using an assessment tool that is based on clear learning objectives, re-testable, and flexible. This post talks about a possible transition from using a few dozen learning objectives in quizzes to a new assessment tool based on larger goals.
The Task

I recently sent out a survey on Twitter in which 50 respondents were presented with a series of scores for students. The scores were for individual learning objectives, and all of them were based on a 3 point or 4 point proficiency scale. Each score was indicated by one of four different colours. Respondents were asked to come up with an overall letter grade and percent for each student based on these learning objective scores.
Last night I was at a district meeting on Communicating Student Learning. There are a few different CSL projects going on in our school district and these meetings are good places to share our individual school experiences and collaborate on new ideas.
At one point in the meeting, two concerns about proficiency based assessment/reporting came up. I wanted to write about them because these are two issues that I see raised with regard to assessment and Standards Based Grading (SBG) quite often, and they are great questions.
“Kids these days don’t know as much because of grade inflation.” That makes no sense to me. Kids may, or may not, know as much as they used to, but what they “know” is a result of the teaching that happens in the classroom. After the lessons, learning, and practice, a student is assessed and typically given some number. Whether that number is 70 or 90, the learning has already happened.
At a recent “Communicating Student Learning” meeting in Vancouver we were presented with a proposed 4pt scale for recording student progress.
Emerging - Developing - Proficient - Extending
At first glance this scale seems pretty good. I’m a fan of smaller vs. larger scales. In fact, in my day-to-day formative assessment using SBG I prefer a 3pt scale, because it is very easy to understand and the levels are never ambiguous.
In an earlier post I wrote about how I felt that I tend to move slowly through curriculum. One of the things I do that slows things down is frequent quizzing and post-quiz self/group assessment. Usually at least once every 5 classes we will have a quiz that can take anywhere from 10 to 25 minutes. Once everyone is finished, the quizzes are handed back to the students and we go over the solutions.
I recently read Daisy Christodoulou’s new book “Making Good Progress? The future of Assessment for Learning”.
It was a good read and helped clarify several questions and ideas I’ve had about assessment in education. In her book, Christodoulou discusses why formative assessment hasn’t delivered the goods, the pitfalls of invalid summative assessments, and how to improve both. It’s worth noting that the generalities of Christodoulou’s book apply to everyone, but some things are specific to England’s education system.