Purdue University has for years touted the ability of its early-warning system Signals to improve student retention, but a series of blog entries analyzing the institution’s claims has found no causal connection between use of the system and students’ tendency to stick with their studies.
Signals combines demographic information with online engagement and produces a red, yellow or green light to show students how well they are doing in their courses — and provides that information to their professors so they can provide help to students before they drop or fail.
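To make the description concrete, here is a minimal sketch of what a traffic-light rule of this kind might look like. The inputs, weights, and thresholds are entirely my own illustrative assumptions, not Purdue's actual (proprietary) model:

```python
# Hypothetical traffic-light early-warning rule, in the spirit of systems
# like Signals. All weights and cutoffs below are illustrative assumptions.

def risk_signal(grade_pct: float, logins_per_week: float, prior_gpa: float) -> str:
    """Combine course performance, online engagement, and an academic-history
    proxy into a red/yellow/green status."""
    # Weighted score: low grades and low engagement lower the score.
    score = (0.5 * (grade_pct / 100)
             + 0.3 * min(logins_per_week / 5, 1.0)
             + 0.2 * (prior_gpa / 4.0))
    if score >= 0.7:
        return "green"
    elif score >= 0.5:
        return "yellow"
    return "red"

print(risk_signal(88, 6, 3.5))  # strong, engaged student -> green
print(risk_signal(55, 1, 2.0))  # struggling, disengaged student -> red
```

The point of the sketch is only that such systems are descriptive scoring rules; nothing in a function like this establishes that showing the light changes outcomes.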
The Retention Center in Blackboard has real potential, but it needs system-wide capabilities. Right now it is course-level, and setting up and customizing rules and interventions is time-intensive. We’re going to run a pilot in Spring 2014 at Tri-C.
How can you make a causal claim the other way?
Put another way, “students are taking more … Signals courses because they persist, rather than persisting because they are taking more Signals courses,” Caulfield wrote.
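Caulfield's selection effect is easy to demonstrate with a toy simulation. The sketch below is my own illustration, not his actual analysis: persistence is the only causal variable, and a fixed share of courses simply happen to use Signals. Students who persist longer accumulate more courses, so "took more Signals courses" ends up correlated with retention even though Signals does nothing here.

```python
# Toy simulation of reverse causation: persistence drives both retention
# and the count of Signals courses taken. The Signals share and all other
# parameters are illustrative assumptions.
import random

random.seed(0)
SIGNALS_SHARE = 0.3  # assumed fraction of courses that use Signals

students = []
for _ in range(10_000):
    # Each semester the student persists with probability p; the Signals
    # light never enters the persistence decision.
    p = random.uniform(0.5, 0.95)
    semesters = 1
    while semesters < 8 and random.random() < p:
        semesters += 1
    courses = semesters * 5
    signals_courses = sum(random.random() < SIGNALS_SHARE for _ in range(courses))
    retained = semesters >= 4  # e.g. "retained into a later year"
    students.append((signals_courses, retained))

for lo, hi in [(0, 5), (6, 10), (11, 100)]:
    group = [r for s, r in students if lo <= s <= hi]
    print(f"{lo}-{hi} Signals courses: retention {sum(group) / len(group):.0%}")
```

Retention climbs steadily with the number of Signals courses taken, by construction and with zero treatment effect, which is exactly why a straightforward comparison of retention rates cannot support a causal claim.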
Pistilli defended the claims about Signals’ ability to increase retention — with the caveat that more research needs to be done. “The analysis that we did was just a straightforward analysis of retention rates,” he said. “There’s nothing else to it.”
He’s probably right that additional research needs to be done. Without it, however, we shouldn’t attempt to replicate or scale the system. The technology needs to be put into action where it can be successful: as a tool for student success, not as a tool we happen to like and then try to connect to student success. We also have to follow the money with some of these software programs: who has a stake?
With Signals marking its fifth anniversary this year, Pistilli said “it was probably just a matter of time for people to start looking for these pieces and begin to draw conclusions.” In that sense, the discussion about early warning systems resembles that of other ed-tech innovations, like flipping the classroom and massive open online courses, where hype drowns out any serious criticism.
“I think part of the answer is we’re really bad at statistical reasoning,” Essa said. “Even experts get tripped up by statistics, and it’s very easy to make claims like this, but it’s difficult to dig in and try to make sense of it.”
As we move forward with new technologies in learning analytics, how, and by whom, will the claims people put forward be evaluated?
Posted from Diigo.