Yeah, but are the kids learning? – Thoughts on improving the Ascendly assessment system.
Ascendly’s after-school engineering classes are popular, but are they good? In other words, “Are the kids actually learning?” Every organization needs to know whether its latest changes, both purposeful and accidental, are effecting improvements or just making things worse. So, what are the different ways an organization can gauge its progress? Surveys, for one. Direct evidence collection, like observing and recording changes in skill levels, is another. Implied satisfaction, like examining repeat-business rates or improvements in test scores outside of class, is a third. So, in my classes to date, I’ve been giving the parents an end-of-course survey and I’ve also been tracking student retention. Although those simple assessments have been good enough so far, it’s now time to up my game. But how?
My initial surveys were strictly end-of-course affairs, heavily populated with open-ended questions and given to the parents as they were picking up their kids. Since the parents were right there, I got a nearly 100% response rate, which is unheard of in the survey industry. It was nice watching the parents fill out the survey, almost always with their kiddo’s input. I got great feedback that let me quickly home in on a good offering. The downsides of this approach were that it waited until the end of the course to gather feedback, and the lack of quantitative feedback made it difficult to track progress, let alone student learning.
For my fourth semester of classes, I started adding some quantitative questions to the survey. There are now about seven questions that address the overall course value and other specific aspects of the class, such as whether the parents observed an increase in building skills, an awareness of the engineering design process, or an increase in usage of engineering vocabulary words. This quantitative addition is a good and necessary improvement, but still not sufficient to successfully run an educational venture.
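To show how those quantitative answers could be tracked over time, here is a minimal sketch of averaging per-question ratings across returned surveys. The question names, 1–5 scale, and sample data are all hypothetical, not Ascendly’s actual survey:

```python
# Hypothetical sketch: average each survey question's 1-5 rating so the
# numbers can be compared semester over semester. Question names and
# responses are illustrative only.
from statistics import mean

responses = [  # one dict per returned parent survey
    {"overall_value": 5, "building_skills": 4, "design_process": 3},
    {"overall_value": 4, "building_skills": 5, "design_process": 4},
    {"overall_value": 5, "building_skills": 4, "design_process": 5},
]

def question_averages(surveys):
    """Average each question's rating across all returned surveys."""
    questions = surveys[0].keys()
    return {q: round(mean(s[q] for s in surveys), 2) for q in questions}

print(question_averages(responses))
```

Storing one such summary per semester would make it easy to see whether a program change moved the needle on any given question.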
A colleague recently suggested I look at Kirkpatrick’s Model of Evaluation. Basically, it suggests assessing at four levels:
- Reaction: How do the students feel?
- Learning: Test them!
- Behavior: Observe or interview, looking for evidence of change
- Results: Big-picture impact, beyond a single student.
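The four levels above could double as tags on each assessment activity, making it easy to spot whether a course’s assessments cover the whole model. A sketch, with hypothetical activity names:

```python
# Hypothetical sketch: tag each assessment activity with its Kirkpatrick
# level, then check which levels a course's assessments cover.
from enum import Enum

class Level(Enum):
    REACTION = 1   # how students feel
    LEARNING = 2   # tested knowledge or skill
    BEHAVIOR = 3   # observed evidence of change
    RESULTS = 4    # big-picture impact

assessments = [
    ("end-of-class smiley-face poll", Level.REACTION),
    ("build-challenge rubric score", Level.LEARNING),
    ("coach observation notes", Level.BEHAVIOR),
    ("re-enrollment rate", Level.RESULTS),
]

covered = {level for _, level in assessments}
missing = set(Level) - covered
print("Missing levels:", sorted(l.name for l in missing))
```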
There are other assessment schemes out there. They even have their own club: the American Evaluation Association (http://www.eval.org/).
Now, with any measurement system, there are all kinds of pitfalls. I’m particularly concerned with the administrative burden: it can’t be such a hassle that the teachers/coaches don’t want to perform the assessment. It also must recognize multiple intelligences. This is especially important to Ascendly’s mission, since I don’t want to discourage the non-hard-core engineering kids. If a student can internalize the engineering design process, then they can innovate and think critically, regardless of how differently they might approach the challenge. It would be a shame to discourage a young kid or their parent by falsely flagging them as not learning when they were actually just learning differently. I certainly don’t want the teacher/coach to teach to the test. Lastly, it needs to be timely: only gathering feedback at the end of a course is probably too late to effect change in some circumstances. So, keeping the assessments lightweight, non-discriminatory, and timely are the main keys to a good assessment system.
Let me try to enumerate what I want our assessments to achieve:
- Ensure teachers/coaches are good
- Give teachers/coaches feedback
- Test whether changes in the program are helpful vs. hurtful
- Track class progress
- Track student progress
- Track students longitudinally
- Gather parent feedback
- Gauge customer satisfaction
- Manage risk
- Measure marketing effectiveness
- Meet and exceed customer needs & expectations
- Prove Ascendly’s efficacy to grant sources
And now for some known constraints:
- Low burden
- Many classrooms have neither wi-fi nor mobile data coverage
- Should be numeric, or codable, so we can track changes over time
- Should be compatible with kids who can’t read yet
- Should acknowledge that parents might not be present at the end of class.
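Several of these constraints can coexist: a paper smiley-face scale works for pre-readers and wi-fi-free classrooms, and coding the circled faces to numbers afterward keeps the results trackable. A sketch, with a hypothetical three-point scale:

```python
# Hypothetical sketch: code a paper-friendly smiley-face scale (usable by
# pre-readers, no wi-fi needed) into numbers after class so responses can
# be tracked over time.
SMILEY_CODES = {"frown": 1, "neutral": 2, "smile": 3}

def code_responses(marks):
    """Translate circled smiley faces (transcribed from paper) to numbers."""
    return [SMILEY_CODES[m] for m in marks]

week3 = code_responses(["smile", "smile", "neutral", "frown"])
print(sum(week3) / len(week3))  # class-average mood for the week
```

Because the coding happens after class, the in-classroom burden stays at one circled face per kid.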
In a future post, I’ll cover how this morphs into an actually improved assessment system, suitable for running an education venture and for proving its efficacy.