Stop Assessing Competencies?
A couple of weeks ago I was reading through my edu feed and came across a post about assessing curricular competencies in the new BC science curriculum. The post discussed feedback cycles on the competencies. After reading it I felt kind of anxious, which isn’t uncommon for me when I read something I know I could improve on or should be doing better with. Later in the day I was still feeling bothered, and it finally dawned on me that whenever I read about assessing curricular competencies, I end up feeling crappy. This happens a lot: I feel crappy about my teaching and crappy about not helping others with this type of assessment.
Feeling down about something like this is not really in my nature. I know I’m not the best teacher in the world, but I’m not terrible. I also know that I don’t shy away from new ideas in education, and I don’t think anyone would consider me an education conservative. I was also embracing a lot of the pedagogy behind the new BC curriculum before it was released. That’s when the next realisation hit me: there is something seriously wrong with the BC curriculum if parts of it make me feel crappy about myself.
This post shouldn’t be generalised across all subject areas; it’s specific to the curricular competencies in secondary math and science. Perhaps some of the things I touch on also apply to other subject areas, but I’ll let others with more experience in those areas deal with that.
Secondary Math Curricular Competencies
Math has curricular competency learning objectives in four main groupings: reasoning and analyzing; understanding and solving; communicating and representing; and connecting and reflecting. Specific objectives include: estimate reasonably; apply mental math strategies; apply multiple strategies; visualize to explore math concepts; use math vocabulary to contribute to discussions; and reflect on mathematical thinking. I don’t think we should be assessing these, and I don’t think I’ve ever seen a really good example of anyone assessing them.
There are a few reasons why I don’t want to keep trying to assess the competencies. First, for the most part I have no way to give advice to a student who wants to improve. If I were to grade a student on using logic and the student wanted to improve, what could I tell them? What could anyone tell them? What if a student wants to improve their estimations? Perhaps in some cases I could give a student specific instruction on how to do an estimation, at which point they would no longer be estimating; they would be executing an algorithm that I taught them. The second problem with these competencies is the inability to track progress. How would I record and track student progress in their use of math vocabulary in discussions? Or their mental math strategies? The third problem is the “why?” question. Why would I actually want to assess any of the competencies? What would my assessment be used for? I already know that a student mostly can’t use my feedback for improvement. What would a parent do with the information?
Assessing reflections is a good example of the “why?” problem. It turns out that I can assess journal reflections, but I don’t feel particularly great about it. A student who has something important to reflect on and is a competent writer will have good journal reflections. Other students may have something important to reflect on but aren’t very comfortable with writing (okay, I know I could collect reflections in other ways, but that’s very peripheral to what I’m trying to do as a math teacher). Other students actually don’t have much to reflect on. Some students learn quickly and can make a brief note about how things are going, but have very few struggles or “a-ha!” moments to write about. “Math is really easy for me” isn’t much of a reflection, but it can be a very real experience.
Don’t get me wrong about the journal writing: I actually value it a lot and think it is very important. I just don’t want to assess it. The same goes for all the competencies, actually. We use and practice the competencies constantly in my classrooms. I constantly engage students and prompt them for discussions, presenting solutions, presenting alternative strategies, and justifying claims and conjectures with logic and reasoning. We practice a lot of communication, collaboration, and contextualised problem solving. And the only way I know of to improve any of these competencies is simple: model and practice them. Prescriptive feedback is not the answer. I would also argue that in order for a student to be successful in mastering content in math, they must be competent in the curricular competencies. If we assess math content, we are also assessing competencies, even if we are not explicitly saying that is what we are doing.
Is this just my problem, and should I just work through it? I’m pretty confident that the answer is a resounding “no.” At each BCAMT (British Columbia Association of Mathematics Teachers) professional learning event for the last four years I’ve made sure to attend workshops on assessing curricular competencies. I’ve heard from teachers across the province, and I don’t think any of them are doing it. Some teachers walk around the classroom with a clipboard, noting when they see a student use a competency (although I notice that, two years later, one of those teachers no longer talks about doing this); some promote software that they are trying to sell; some use projects for assessment (which doesn’t address any of my concerns noted above); and some teachers categorize an activity as a competency and give it a grade (which also doesn’t address any of my concerns). It’s entirely possible that some teachers are doing something really useful with competency assessment in math, but I haven’t heard or seen it. If it’s that rare, I think it’s fair to say that it’s a functional failure.
We should embrace the competencies in math, we should practice and value them, but we should not assess them. One path forward, which I’ve been following this year, is to tag my content-oriented learning objectives with the curricular competencies involved. I suppose it’s kind of like a lookup table: a person can look at their gradebook, scan the learning objectives, and get a feel for how they might be doing with the competencies.
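To make that concrete, here is a minimal sketch of what I mean by a lookup table. The objective names and tags are made up for illustration, not copied from my actual gradebook:

    # Hypothetical content learning objectives tagged with the curricular
    # competencies a student practices while working toward them.
    objective_tags = {
        "Solve linear equations in one variable": [
            "apply multiple strategies",
            "use math vocabulary to contribute to discussions",
        ],
        "Model a situation with a linear relation": [
            "visualize to explore math concepts",
            "connecting and reflecting",
        ],
    }

    # Scanning a gradebook organized this way gives a rough feel for which
    # competencies a student has been exercising, without grading them directly.
    for objective, competencies in objective_tags.items():
        print(f"{objective}: {', '.join(competencies)}")

The point isn’t the code; it’s that the competencies ride along with the content objectives instead of being graded on their own.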
Secondary Science Curricular Competencies
The science curricular competencies are quite different from the ones in math, and I also don’t like assessing them, but for different reasons. The science competencies fall into six broad categories: questioning and predicting; planning and conducting; processing and analyzing data and information; evaluating; applying and innovating; and communicating. For the most part these categories fit nicely into the scientific method, and the specific competencies in many ways describe doing a science fair project.
As with the math competencies, I mostly cannot teach a student to improve in a competency. If a student wants to improve their “planning,” what would I tell them? Be more careful, think about it more, consider the things that could go wrong, and so on. Those are just really general suggestions that apply to practically anything in life. Many (most?) of the competencies in science are similar to this. Use logic, draw a better conclusion, think of more things that could cause errors, and so on. I think advice on any of these competencies can be summarized as “try to do more of it when doing your lab.”
Science classrooms have some specific problems when it comes to assessing competencies, because the competencies are generally assessed as part of lab experiments. It’s been years since I graded labs out of a total number of points, deducting marks for mistakes. Actually, I don’t think I ever did that. Instead, I use a rubric that lists the competencies involved in the lab and grade those. Here is the little secret, though: I’m mostly just grading myself.
I think you could describe labs in my science classroom as “guided inquiry.” I try to give the students as much licence and autonomy as possible, but I’m also very careful to ensure that it is all within the students’ cognitive load*. Here is what generally happens, though. If there are 10 groups in the classroom, I can count on at least 8 groups having a problem with their hypothesis, some of their plans or procedures, their data, their plans for data analysis, or their ideas on experimental errors. Most groups have parts missing in several of these aspects. It’s my job to give feedback on the spot to make sure that the lab is completed in a useful way. These labs would be a complete waste of time if students were constantly going down the wrong path and collecting confusing data. So when the students hand in their lab report, I am grading a hypothesis that I already corrected for the student, I am grading a procedure that I’ve already corrected, I’m grading a data analysis that I’ve already corrected, and so on. You get the idea. As well, since labs are done in groups, it’s not hard to believe that only a fraction of the students came up with the hypothesis, procedure, error analysis, etc. I’ve seen teachers discuss this and say that it’s all part of collaboration and we’re teaching them to work together. I completely agree with that, but that’s almost never what is in the gradebook.
I’d like to take a moment here to interject a specific criticism of the science curriculum. One of the curricular competencies is to “Formulate multiple hypotheses and predict multiple outcomes.” Uhm, what? Multiple hypotheses? I can assure readers that very little in Physics 11 is so complicated that a student would have multiple hypotheses for an experiment. That is, unless they were so utterly clueless as to what might happen and the reasons for it. For example, when we add another resistor in series to a circuit with a constant voltage source, you had better expect the current to go down, because V = IR: with V fixed, a larger R means a smaller I.
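To show just how obvious the single hypothesis is, here is a quick sketch with made-up component values (a 12 V source, a 100 Ω resistor, and a 47 Ω resistor added in series):

    # Made-up values: a constant voltage source and one resistor, then a
    # second resistor added in series. Ohm's law (I = V / R) does the rest.
    V = 12.0    # source voltage in volts (assumed)
    R1 = 100.0  # original resistance in ohms (assumed)
    R2 = 47.0   # added series resistance in ohms (assumed)

    I_before = V / R1         # 0.120 A
    I_after = V / (R1 + R2)   # about 0.082 A, so the current drops

    print(f"before: {I_before * 1000:.1f} mA, after: {I_after * 1000:.1f} mA")

There is exactly one hypothesis worth writing down here, not several.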
Similar to math, I think we should practice the science curricular competencies as much as possible, and we should value them. I just think that they are very hard to legitimately assess. Often I think the best way to assess them is to create very well-crafted test questions, though this depends on the course and topic. A teacher can create a scenario with information on a test, and students need to design an experimental question and hypothesis and/or a procedure. A different question could present data for the student to analyze. Another question could present data and analysis and have the student make a logical conclusion. With these kinds of test questions we can assess students individually. I don’t know how much this is in the spirit of what was expected of the curricular competencies, but I believe this type of assessment would be reliable in some way.
*cognitive load - We need to make sure that students have the proper physical and cognitive tools required for the labs. For example, if a particular mathematical analysis is required, students should be experts with this math. If they are not, we cannot expect them to learn the science that depends on the math that they don’t know.