IMS Quarterly Notes on Learning Analytics

IMS Quarterly Event in Scottsdale, Arizona, November 2016. It’s been a few weeks now! It was a great event, in a great location (so odd to be surrounded by cacti), and with great timing too (I got to watch the election unfold live, and since I was staying at an Airbnb, I could even share the excitement with a charming local couple who did not exactly share my views). It was a great event because I think we made good progress on making LTI more approachable in the future by unbundling the LTI 2.x specifications, and we are seeing strong adoption of the Content Item Selection Request.

But this post is more of an opportunity to reflect on Learning Analytics, which was the main theme of this quarterly event. Although I was not a big fan of the long day of panels, there were quite a few great points made. I took some notes, and, sorry, I can’t recall who said what, but here is a tidied-up version of my notes.

Analytics is easy!

What’s difficult is what comes next. Collecting data, that is the ‘easy’ bit. We can instrument, we can move the data into a store. The mechanics of big data are now quite well known. The question is how to transform data into information, and, even better, into actionable information: what can you do with it? How can it be used to prevent a student from failing? And better yet, to allow a student to achieve her goals and thrive? How can we move up the chain:

Collect > Describe > Predict > Prescribe

As a panelist said: ‘Before asking the question, what would you do with the answer?’.

Question of Scale

Big Data and the Machine Learning algorithms it powers feed on, well, a lot of data. Yet data is not treated as a commodity; it is kept well guarded in silos. Each vendor, each institution, holds on to this perceived treasure chest. However, is the amount of data available in each of those silos really enough to feed the Learning Machine? How can you support proper research without actual data sets being available? Privacy is sure to come up as an objection to sharing, but data can be anonymized. One quote from that day:

Lower the tariff on data

Adding to this issue of scale, the disparity of experiences makes it difficult to aggregate data. Take for example the diversity of courses, even within a given discipline at a single institution. When Google collects data on clicks on ads, or searches for this or that item, it is a repeatable experience across users, and inferences can be made from the colossal amount of data gathered. Caliper does help with this issue by normalizing events through the Caliper Metric Profiles. But even if we can standardize the grammar and the vocabulary (a great 1st step!), a course is usually very much an experience crafted by the instructor, and it might not generalize very well. That I got 65% on the 1st quiz only means something in the context of how that course is built. Maybe it is actually very good! A heavily customized course, which changes regularly and is delivered to a small set of students: how can it collect enough data to make sensible inferences? To make it work, should we let go of machine learning and fall back on a simpler rule-based, if-this-then-that kind of algorithm, tailored for each course by the instructor or course designer?
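To make that rule-based idea a bit more concrete, here is a minimal sketch in Python of what an instructor-tailored if-this-then-that check might look like. All of the names, thresholds, and the event shape are hypothetical assumptions for illustration; this is not Caliper, just the flavor of a per-course rule.

```python
# Minimal sketch: per-course, instructor-authored if-this-then-that rules.
# Event shape, rule names, and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QuizEvent:
    student_id: str
    quiz_id: str
    score: float  # percentage, 0-100

@dataclass
class Rule:
    description: str
    condition: Callable[[QuizEvent], bool]   # "if this..."
    action: Callable[[QuizEvent], None]      # "...then that"

def notify_instructor(event: QuizEvent) -> None:
    print(f"Flag {event.student_id}: scored {event.score}% on {event.quiz_id}")

def suggest_remediation(event: QuizEvent) -> None:
    print(f"Suggest remediation module to {event.student_id}")

# Course-specific thresholds: in this hypothetical course, 65% on the first
# quiz is actually fine, so only lower scores trigger anything.
course_rules: List[Rule] = [
    Rule("At-risk on quiz 1",
         lambda e: e.quiz_id == "quiz-1" and e.score < 50,
         notify_instructor),
    Rule("Needs review on quiz 1",
         lambda e: e.quiz_id == "quiz-1" and 50 <= e.score < 60,
         suggest_remediation),
]

def process(event: QuizEvent, rules: List[Rule]) -> None:
    for rule in rules:
        if rule.condition(event):
            rule.action(event)

if __name__ == "__main__":
    process(QuizEvent("alice", "quiz-1", 65.0), course_rules)  # no rule fires
    process(QuizEvent("bob", "quiz-1", 42.0), course_rules)    # flags instructor
```

The point of the sketch is that nothing is learned from data: the meaning of a 65% is encoded by the person who built the course, which is exactly the trade-off being asked about.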

Or should we move towards standardized content, so that each course’s content and general flow are pre-established, thus allowing data to be gathered across all its deliveries? That imposes a revisit of the role of the instructor, as it goes against the usual approach in which the professor is the steward of the course.

Another aspect I was reminded of recently, listening to a Triangulation episode interviewing Cathy O’Neil, is that when machine learning is involved, it is very important to have a feedback loop that allows an algorithm to correct itself. When the thing being measured is easy, feedback loops are short and allow for repeated small corrections (the example was a Netflix movie recommendation that you rated poorly, thus allowing the recommendation algorithm to adapt). In the context of learning, what does a feedback loop look like? As a panelist said:

Your systems should be learning while your students are learning

But how easy is it to build a short term feedback loop for Learning?
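For contrast with the learning case, here is a toy sketch of that short, Netflix-style loop. The genres, weights, and update rule are purely illustrative assumptions, not any real recommender.

```python
# Toy sketch of a short feedback loop: every explicit rating immediately
# nudges the model, so the next recommendation already reflects the correction.
# The genres, weights, and update rule are illustrative assumptions only.

affinity = {"drama": 0.5, "comedy": 0.5}  # current belief about the user's tastes
LEARNING_RATE = 0.2

def record_rating(genre: str, stars: int) -> None:
    """Move the affinity for `genre` toward the observed rating (1-5 stars)."""
    target = (stars - 1) / 4              # map 1..5 stars onto 0..1
    affinity[genre] += LEARNING_RATE * (target - affinity[genre])

def recommend() -> str:
    """Pick the genre we currently believe the user likes most."""
    return max(affinity, key=affinity.get)

record_rating("comedy", 1)   # the user hated the comedy we suggested
record_rating("drama", 5)    # and loved a drama
print(recommend())           # -> "drama": the loop corrected itself right away
```

In learning, the equivalent signal may only arrive much later, and often only once, which is a big part of why that short loop is so hard to build.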

Learning Analytics can mean a lot of different things, from adaptive learning that builds a dynamic learning path or proposes remediation content on demand, to surfacing early indicators of students at risk, to assessing whether a piece of content is effective or flawed… One last quote from that day: Learning Analytics is not like traditional ‘business’ analytics, it’s analytics for… learning.
