Category Archives: Uncategorized

Learning App Connect: App first!

This is an echo of some other posts I have made in the past, but in a more concrete way, using a fictional API: Learning App Connect. It treads the same territory as the Learning Tools Interoperability (LTI) specifications, and could be seen as an extension of them, but differs in a few key elements:

  1. It puts the end user (student/instructor) in the forefront by using a strict 3-legged OAuth 2 flow
  2. It is not LMS centric. The interaction does not initiate from the LMS nor does it require the user to ever go to it. The LMS is more a central nerve or a way to get extra data about the user to enrich her experience. App first!
  3. By using standardized services and authorization scopes, a Learning App should be able to connect to any institution's Learning Hub/LMS

For now the API only exposes the ability for an app to discover a student's enrollment, and a way to push engagement data back to the institution. Note that the API does not intend to be used as an identity provider. The student may connect to the App using, for example, a Google OpenID flow, and then grant access to his institution's data (and possibly even multiple institutions!).
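To make that concrete, here is a minimal sketch of what the enrollment discovery call could look like. Since the API is fictional, the host, path, and field names below are all hypothetical:

import requests

# The student granted the app access through the 3-legged OAuth 2 flow;
# the app now holds a user-scoped bearer token for that institution.
access_token = "<token obtained from the OAuth 2 flow>"
resp = requests.get(
    "https://learninghub.example.edu/connect/enrollments",  # hypothetical endpoint
    headers={"Authorization": "Bearer " + access_token},
)
for enrollment in resp.json()["enrollments"]:
    print(enrollment["course_title"], enrollment["role"])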

The engagement data format is open for discussion: closed vocabulary or open-ended? Caliper and its metric profiles, or xAPI, leaving the app open to express any interaction (as no one can know what kind of app may be used in a learning context, for example a complex simulation or a game like Kerbal Space Program: 'User successfully launched a satellite into orbit')?
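For illustration, the Kerbal Space Program example could be expressed as an xAPI statement along these lines (the verb and activity IDs are made up for the example):

{
    "actor": { "mbox": "mailto:student@example.edu", "name": "Ada" },
    "verb": {
        "id": "https://example.com/verbs/launched",
        "display": { "en-US": "launched" }
    },
    "object": {
        "id": "https://example.com/activities/satellite-in-orbit",
        "definition": {
            "name": { "en-US": "Successfully launched a satellite into orbit" }
        }
    }
}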

When an app becomes a key aspect of the course and is selected by the instructor rather than the student, the instructor should also be able to set up a more closely integrated experience, including grade reporting, possibly adding activities to the course calendar, and so on. Those operations are limited to instructor-like roles, and would be covered by a set of OAuth scopes that can only be granted in a context where the user has a non-learner role. Those services could ideally be the LTI services (Outcome, ContentItemService, Membership, …).

An important aspect of the LTI specifications is that little knowledge of the service endpoints is needed in advance (outside of where to do the LTI launch), as those are injected as launch parameters. To keep that spirit, the Learning App Connect API will use a hypermedia-like approach, embedding the service endpoints the user may access in the result of the initial connect response.
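For example, the initial connect response could embed the endpoints the user has been granted, in the spirit of hypermedia formats like HAL; every name here is again hypothetical:

{
    "user_id": "23e044:13647b54f5:-7ff4",
    "_links": {
        "enrollments": { "href": "https://learninghub.example.edu/connect/enrollments" },
        "engagement": { "href": "https://learninghub.example.edu/connect/engagement" }
    }
}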

One barrier to entry here is that an app has to register with every institution. Possibly the LTI Registration Service could ease the deployment by including it as part of the Tool Proxy (i.e. the deployment contract between the app and the LMS). In that scenario, the LMS Admin would start a registration process, and the Tool Provider would indicate it requires the Learning App Connect flow and the services for which it may ask the user to grant access. However this still needs explicit consent from the Institution. Possibly an ad-hoc app mode could allow any app to connect and retrieve a very limited subset of data (e.g. user enrollment) since, in any case, disclosure of that data is explicitly granted by the end user himself.

Another usability issue is for the App to know where to send the connection request, i.e. which institution the current user should connect to. There could be hundreds of institutions registered (ideally, there would be). One way would be to let the user indicate it by entering the institution name; that might be enough. Also, sometimes you just want to connect to a given course as quickly as possible. What about, then, a universal course locator that would encode the provider and the course within the provider space?
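Something as small as a dedicated URI scheme could do; a purely hypothetical rendition, reusing the course id from the launch example further below:

lac://learninghub.example.edu/courses/7st3d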

You can view the mock draft of the API in swagger: https://app.swaggerhub.com/api/claudevervoort-perso/LearningAppConnect/1.0.0

All of this is fictional, and is aimed at serving as an illustration and bringing up some ideas to discuss whether or not such an integration model would indeed be useful in the ed-tech ecosystem. In a fragmented space, I think it might make sense for the LMS to sometimes let the apps take the front row, and mostly focus on being the glue joining those various experiences…

IMS Quarterly Notes on Learning Analytics

IMS Quarterly Event in Scottsdale, Arizona, November 2016. It's been a few weeks now! It was a great event, in a great location (so odd to be surrounded by cactus), and at a great time too (I got to watch the election unfold live and, staying at an AirBnB, I could even share the excitement with a charming local couple not exactly sharing my ideas). It was a great event, because I think we've made good progress on making LTI more approachable in the future by unbundling the LTI 2.x specifications, and because of the great adoption we are seeing of the Content Item Selection Request.

But this post is more an opportunity to reflect on Learning Analytics, which was the main theme of this quarterly event. Although I was not a big fan of the long day of panels, there were quite a few great points made. I took some notes, and I’m sorry, can’t recall who said what, but here is a tidied up version of my notes.

Analytics is easy!

What's difficult is what comes next. Collecting data, that is the 'easy' bit. We can instrument, we can move the data into a store. The mechanics of big data are now quite well known. The question is how to transform data into information, and even more, actionable information: what can you do with it? How can it be used to prevent a student from failing? And better, allow a student to achieve her goals and thrive. How can we move up the continuum:

Collect > Describe > Predict > Prescribe

As a panelist said: ‘Before asking the question, what would you do with the answer?’.

Question of Scale

Big Data and the Machine Learning algorithms feed on, well, a lot of data. Yet data is not treated as a commodity, but rather kept well guarded in silos. Each vendor, each institution, is keeping hold of this perceived treasure chest. However, is the amount of data available in each of those silos really enough to feed the Learning Machine? How can you feed proper research without actual data sets available? Privacy surely comes up as an objection to sharing, but data can be anonymized. One quote from that day:

Lower the tariff on data

Adding to this issue of scale, the disparity of experiences makes aggregation difficult. Take for example the diversity of courses, even within a given discipline within a single institution. When Google collects data on clicking on ads, or searching for this or that item, it is a repeatable experience across users, and inferences may be made from the colossal amount of data gathered. Caliper does help with this issue by normalizing the events through the Caliper Metric Profiles. But even if we could standardize the grammar and the vocabulary (a great 1st step!), a course is usually very much an experience crafted by the instructor, and might not generalize very well. That I got 65% on the 1st quiz only means something based on how that course is made. Maybe it is actually very good! A heavily customized course, which changes regularly and is delivered to a small set of students: how can it collect enough data to make sensible inferences? To make it work, should we let go of machine learning in favor of simpler rule-based if-this-then-that algorithms, tailored for each course by the instructor or course designer?
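As a thought experiment, such an instructor-tailored rule could be as small as this Python sketch (the thresholds and field names are invented for the example):

# A course-specific rule the instructor or designer could configure:
# in this particular course, 65% on the 1st quiz is actually a decent score.
def at_risk(student):
    return student["quiz1_score"] < 50 and student["logins_last_week"] == 0

students = [
    {"name": "Ada", "quiz1_score": 65, "logins_last_week": 3},
    {"name": "Alan", "quiz1_score": 40, "logins_last_week": 0},
]
print([s["name"] for s in students if at_risk(s)])  # -> ['Alan']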

Or should we go towards standardized content, so that each course's content and general flow is pre-established, thus allowing gathering data across all its deliveries? That imposes a revisit of the role of the Instructor, as it goes against the usual approach that the professor is the steward of the course.

Another aspect I was reminded of recently, listening to a Triangulation episode interviewing Cathy O'Neil, is that, when involving Machine Learning, it is very important to have a feedback loop, to allow an algorithm to correct itself. When the thing measured is easy, feedback loops are short and allow for repeated small corrections (the example was a Netflix movie recommendation that you rated poorly, thus allowing the recommendation algorithm to adapt). In the context of learning, what does a feedback loop look like? As a panelist said:

Your systems should be learning while your students are learning

But how easy is it to build a short term feedback loop for Learning?

Learning Analytics can mean a lot of different things, from adaptive learning building a dynamic learning path or proposing remediation content on demand, to helping surface early indicators of students at risk, to assessing whether a piece of content is effective or flawed, … One last quote of that day: Learning Analytics is not like traditional 'business' analytics, it's analytics for… learning.

LTI and OAuth 2.0, towards a more decentralized and individualized LTI?

This is a bit of a follow-up to a YouTube video I posted a couple of months ago on LTI vs OAuth 2.0. The VS was not really necessary; it is more a question of whether or not it would make sense to bring OAuth 2.0 into the LTI spec. This is a continuation of that reflection, as writing about it helps me think about it 🙂

But first a bit of history…

At first, there was the launch. When LTI re-emerged as Basic LTI, it was all centered around the launch. In one packed browser redirect, all the contextual information was securely passed to the tool provider. Secure, how? By using a shared secret between both parties to sign the whole payload. The idea of signing is easy to understand but often hard to get right, as a single small error leads to a totally wrong signature. Impossible to know if you completely foobared your implementation, or just forgot to escape one character somewhere in the signing flow. To simplify that work, IMS decided to leverage a well-established signing process described as part of the OAuth 1.0 spec. Thus LTI is not using OAuth per se, just a technical part of it, the form post signing. But as such it allowed the re-use of existing libraries to sign and verify the signature of payloads, greatly simplifying the work of the implementers.
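As a reminder of why it is finicky, here is the gist of that HMAC-SHA1 signing, sketched with Python's standard library (real code should of course use a vetted OAuth library):

import base64, hashlib, hmac
from urllib.parse import quote

def oauth1_signature(method, url, params, consumer_secret):
    # 1. Signature base string: method, percent-encoded base URL, and the
    #    percent-encoded, sorted parameter string. One mis-escaped
    #    character here and the whole signature is wrong.
    param_str = "&".join(
        quote(k, safe="") + "=" + quote(v, safe="")
        for k, v in sorted(params.items())
    )
    base = "&".join([method.upper(), quote(url, safe=""), quote(param_str, safe="")])
    # 2. HMAC-SHA1 keyed with the shared secret (no token secret in a
    #    launch, hence the trailing '&').
    key = quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()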

But then time passed…

And with that, one started to want to do more with LTI. Launching, sure, that's fine, but one now demanded more interaction with the Learning Platform. Starting with reporting grades. But that was just opening the door to a wider set of services. How do we secure those? Well, it turned out that there was an extension to OAuth 1.0 that proved timely: the ability to sign any type of request body payload, not just form-url-encoded ones. This extension is called body hashing. You compute the hash of the body, and include the hash value in the signed OAuth 1.0 header. Once you have verified the OAuth header was properly signed, you know the hash of the body. You just need to recompute it on your side, and verify it matches. Sure, the body of the payload is not encrypted, but you can trust it has not been altered, and you can trust the source. That gave a sound foundation to address the security of LTI services.
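The body hash itself is essentially a one-liner; in Python:

import base64, hashlib

body = b"<replaceResultRequest>...</replaceResultRequest>"
# oauth_body_hash: base64-encoded SHA-1 of the raw request body,
# carried as one of the signed OAuth 1.0 parameters.
oauth_body_hash = base64.b64encode(hashlib.sha1(body).digest()).decode()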

But then time passed…

SSL/HTTPS has become the norm. OAuth 2.0 is the norm. Having to sign your payload in your application code, not so much. And OAuth 1.0 is becoming deprecated. Sure, there has been some well-known controversy around OAuth 2.0, but more or less, the world has moved on. Should LTI move there too? If so, how? LTI was not really doing an OAuth flow anyway, so how could OAuth 2.0 even be applicable in the simpler interaction model needed by current LTI?

No need for the user to grant access

Under the LTI model, authorization is granted to the tool provider, identified by its consumer key and authorized using the shared secret. While a consumer may restrict what a given provider may do (can it get the user's email? can it post grades? can it access the roster?), it always does so for the tool at large.

The negotiation for access and authorization can either be manual in LTI 1.x, where the admin or instructor will manually configure the permissions for a given tool, or semi-automatic using the registration flow from LTI 2.0, which allows a tool to negotiate with the consumer a deployment contract (aka tool proxy), establishing the key/secret to use and the enabled services. In either case, the host learning system trusts the app to make good use of its granted rights, as those are not further limited by the acting user. Actually, the API never knows the acting user, if there is one.

So why not Basic Authorization?

In that context, a possible replacement for the OAuth 1.0a body hashing is to use simple Basic authentication over SSL, where the tool key and secret, just base64 encoded, are passed on every request. Doing so greatly simplifies the work of the implementer, and has the SSL layer do the heavy lifting:

  • It verifies the host learning system is indeed the recipient, by refusing connection to hosts without a valid certificate signed by a shared certificate authority
  • It ensures the content is untampered
  • The authorization header contains both the key and the secret, identifying the caller, and allowing the learning system to apply the appropriate rights to the incoming request
    If the secret is compromised, a re-registration process may be triggered to replace it with a new one.

After all, in OAuth 2.0 the client key and secret are also passed directly (for example to acquire the access token).
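Sketched in Python, the whole client side then boils down to one header (the service endpoint is hypothetical):

import base64
import requests

key, secret = "tool-key", "tool-secret"
# Base64 of key:secret, sent on every request; SSL does the rest.
auth = "Basic " + base64.b64encode(f"{key}:{secret}".encode()).decode()
resp = requests.post(
    "https://lms.example.edu/lti/outcomes",  # hypothetical service endpoint
    headers={"Authorization": auth},
    json={"score": 0.87},
)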

Use Access Token instead?

Although passed exclusively over SSL, and as a request header, one might still frown at having the secret passed on the wire on every service request, albeit in an encrypted fashion. In that case, we could think of passing a more limited access token instead.

A token might present additional benefits over the direct use of the secret:

  • A token might only be scoped for operation under a limited context, for example a course
  • A token might be short-lived; although short-livedness is a weak security argument, as it might give a false sense of safety (it's ok if it is a bit vulnerable as it will not be active for long…)

Of course the use of the token brings up the question of how the token is acquired in the first place. One may envision a token service, but that causes a circular dependency, as we would need to go back to Basic Auth to acquire the Access Token.

An individual token

But if we acquire an access token, we might as well extend it to not only be limited by its time-to-live or context scope, but to truly carry the identity and restrictions of the user too. In short, a true OAuth 2.0 token.

With a real user access token, the Learning Platform can now know who changed a grade. Goodbye 'the grade was changed to 87% by System'.

How do you acquire the token?

Remember that the launch shortcuts the authorization flow of OAuth. By virtue of launching, the tool is granted the right to do the operations it had in the contract; the user does not have to grant any extra right (at least not those covered in the tool deployment). So the Tool just wants to use its tool privileges to get an access token. How?

  1. A service can be added to grant an access token: the request would contain the user id and the context id, along with the client id and secret. This would yield the access token the same way the Access Token Request would in an OAuth 2.0 flow
  2. The LTI Launch can contain an authorization code that can be traded for an access token using the Access Token Request flow
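Option 2 would look like a textbook OAuth 2.0 Access Token Request; a sketch, where the endpoint and parameter names are assumptions:

import requests

# The launch carried a short-lived authorization code among its
# parameters; the tool trades it server-to-server for an access token.
launch_params = {"authorization_code": "<code received in the launch>"}
resp = requests.post(
    "https://lms.example.edu/oauth2/token",  # hypothetical token endpoint
    data={
        "grant_type": "authorization_code",
        "code": launch_params["authorization_code"],
    },
    auth=("tool-key", "tool-secret"),  # the tool's client credentials
)
access_token = resp.json()["access_token"]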

Should the user’s entitlements be applied on top of the tool privileges?

Now that we can know the acting user on each action, the question is then: should the token be used to enforce authorization? For example, can you use an instructor's launch token from course A to post a grade in course B, where that instructor is not enrolled? The original LTI trust model is to give a tool access to all its proxied resources in the host, so a tool could change any of its grades in that example. Using the token here to enforce authorization could make the tool service call less predictable: how do I know the host will not reject my call if I use a student's token to post a student's grade (a legit call, in particular for autograding)? One would have to clarify the expected authorization rules for the various services.

A ‘Tool Root User’ token?

And is there always an acting user? For example, during batch operations? We might need to define a 'Tool User', the 'root' user of that LTI integration. But, theoretically, the only way to enter a tool UI is to go through a launch, so should we even want to have that root user? And if the way to get to the tool is not through a launch, then maybe there is a better way to acquire a token… it's called the OAuth Authorization flow 🙂 We'll get back to that in a bit…

But what about the launch?

Ok, we might possibly have a way to unify the LTI services behind an OAuth 2.0 mechanic, but what about the launch? As we've seen, it is still the cornerstone of an LTI integration. It packs in one redirect a huge set of data:

  • who the user is (which can be used as a way to authenticate the user)
  • what her role is in the current context
  • the current context information
  • and the intent of the link: what is launched?
  • resolved resource URLs to communicate back (an interesting twist on a hypermedia API).

Since all of that is in a client redirect, there is no other means but to sign it, and for that OAuth 1.0 is a great choice. How can we replace that? Part of the data we pass along is the user's information, and there is already something to authenticate a user in the OAuth ecosystem: it's called OpenID Connect. Let's see how we could adopt a similar flow for the launch.

In OpenID Connect, once the user has granted the right to authenticate herself, there is a redirect to the client code with an id token. The interesting thing about that token is that it is an actual JSON payload encoded in base64. In order to trust that token, so it may not be re-used, for example to authenticate inside another app using the same identity provider, we must verify that the token has been issued for us. For that, there is a service, the checkId endpoint. But that means an extra REST call… However, the JSON data is not just any kind of JSON, it's a JSON Web Token, aka JWT, which is signed using the client secret. So, rather than calling the checkId endpoint, one may instead verify the signature using the shared secret. A payload signed with a shared secret, now that does sound a lot like our LTI launch, doesn't it? With the extra luxury of being able to use an endpoint (over SSL) instead of doing signature verification, which is more in the spirit of OAuth 2.0. Best of both worlds…
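With a library like PyJWT, that local verification is a couple of lines. A sketch, assuming an HS256-signed token and a shared secret:

import jwt  # PyJWT

shared_secret = "tool-secret"
# In real life the id token arrives in the redirect; we forge one here
# just to show the round trip.
id_token = jwt.encode({"sub": "23e044:13647b54f5:-7ff4"}, shared_secret, algorithm="HS256")

# Verifying the signature locally replaces the extra checkId call:
# a valid signature proves the token was issued with our shared secret.
claims = jwt.decode(id_token, shared_secret, algorithms=["HS256"])
print(claims["sub"])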

So let's just see how a brute force port of that approach could work for an LTI launch. Remember that in LTI we don't have a flow to go to the identity provider (the LMS), since we are already logged into it and launching from it; it's implicit by using the course and launching (although the LMS might prompt the user on 1st launch for explicit consent to launch an external tool). So now we can replace the OAuth 1.0a POST by a JWT token, containing the same information as we used before; let's call this a launch_token.

However, how do we prevent a replay of that launch, for example being able to relaunch an instructor launch that would have been captured? OpenID, being initiated from the client app, uses server state (i.e. the user's session) to make sure the session that started the flow is the same as the one finalizing it. Since a launch starts from the Learning Platform, the session possibly does not even exist yet on the Tool side. So maybe let's just resurrect the nonce and timestamp from OAuth 1.0a and include them in the JWT payload. Verifying the nonce and the timestamp can then be the job of the checkLaunch endpoint. By preventing replay, we make a launch a one-time operation, limiting cross-site request forgery attacks to unconsumed launches. It remains key that a launch token is only computed at the time of the actual launch (and a short timestamp window actually prevents stale links from being pre-rendered when a set of links is displayed on screen). Since the token can only be launched once, it might be safer to keep a POST, as a GET redirect can more easily be replayed by the browser (reload, back button).
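The checkLaunch endpoint could enforce that one-time property with something as small as this sketch (the time window and in-memory store are illustrative; a real deployment would use a shared store with expiry):

import time

seen_nonces = set()  # illustrative; think Redis with a TTL in production
MAX_AGE = 90         # seconds; an arbitrary freshness window

def check_launch(nonce, timestamp):
    # Reject stale launches, then any nonce already consumed.
    if abs(time.time() - timestamp) > MAX_AGE:
        return False
    if nonce in seen_nonces:
        return False
    seen_nonces.add(nonce)
    return True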

Being a JSON object now, the payload might be more structured and unified with the JSON-LD approach adopted in the rest of the API, but the goal should still be to keep the verbosity under control, in particular if we want to allow for GET requests. Here is an early rendition of how it could look:

{
    "@context": "http://imsglobal.org/contexts/lti/basiclaunch.jsonld",
    "lti_version": "LTI-1p0",
    "resource_link_id": "7-10e453dc-3bf9",
    "context_id": "7st3d",
    "context_label": "Medical Term",
    "context_title": "Medical Terminology for Health Professionals - Section A",
    "user_id": "23e044:13647b54f5:-7ff4"
    "roles": ["Instructor"],
    "launch_presentation_locale": "en - US",
    "launch_presentation_return_url": "about: blank",
    "lis_outcome_service_url": "http: //local.lp/nb/service/ltiOutcome/pox/",
    "lis_person_contact_email_primary": "claude.vervoort@fancylp.com",
    "lis_person_name_family": "vervoort",
    "lis_person_name_given": "claude",
    "lis_person_sourcedid": "bae854c0f5e1a6:325ee044:136f47b54f5:7ff4",
    "resource_link_title": "FlashNote subject A",
    "tool_consumer_info_product_family_code": "fancylp",
    "tool_consumer_info_version": "alpha-1",
    "tool_consumer_instance_guid": "local.lp"
    "nonce": "1444750410184860000",
    "timestamp": 1444750410,
    "custom": {
        "search": "subjectA"
    },
    "extension": {
        "somekey": "someval"
    }
}

No more a vassal of the Learning Platform!

While we may now have a modernized stack, easier to implement, giving us key information (the current user calling the API), compatible with the platform's OAuth-protected API at large, the launch flow still forces an asymmetrical relationship between the tool and the hosting learning platform. One must enter the platform first!

What if we wanted to break that enclosure? Initiate the relationship from the Tool.

For example, as a student, I might have an app that would like to automate my connection with a course. I would start the app, and the app would ask me: do you want to connect with your institution? I would pick my institution from a list, and click Connect! Then would follow a typical OAuth flow with that institution, where I would, for example, grant access to my enrollment. Then the client application can automatically get and synchronize my enrollment.

The key here is that LTI would offer a well-standardized set of e-learning services, so that I, as an application provider, can have a common integration regardless of the learning platform used by the institution: an API portal.

Of course that moves the Learning Platforms, and the institutions they support, a bit into the background. The integration is more at the data layer than the UI layer. Is that something an LMS vendor would do? Would they accept not being the entry point? Would it be OK for someone else to develop the cool app students and instructors would actually use? Would it be possible to really build a standardized API rich enough to be useful yet universal enough to be truly worth the investment from the implementers?

With the world of apps, we're moving from one place that does it all to an exploded universe of dedicated apps that each do a few things great. Connecting those apps together through the learning hub could possibly be the next mission of LTI. Learning App Interoperability!

Was gaming the undercurrent theme @LearningImpact 2015?

Maybe it is just confirmation bias at play, as I have an interest in the matter, and I'm currently following the very well done and insightful edX course Design and Development of Games for Learning. But I wonder if Gaming was not the undercurrent theme this year… I don't remember Gaming being mentioned so often…

There were of course a few explicit mentions, like Collegis Education, but it is the GradeCraft platform that first comes to mind. Developed by the University of Michigan, and used in actual courses (2000 students) including – how à propos – a course on videogames and learning, it allows students to take control and experiment: various activities are worth a set of points that you can assemble as you want, and you can also simulate, using the grade predictor, what your projected final grade would be. Add to that the possibility of badges and ranks (who wants to stay a rat?), and you can have a possibly great experience… with the caveat of the extra investment in course design. It also puts more burden on the student, who has to make the extra effort to choose her own path, compared to a more traditional linear course. However, it still lives within the constraints of a traditional course, which has an end date and a final grade. Even if you can experiment, I was not sure I was seeing a key element of gaming: the freedom to fail.

Freedom to fail, failing forward: well, that was something we were hearing in another context: learning analytics, competency and learning maps, and adaptive learning. A game is a system of rules you navigate towards a final outcome, progressively harder, but never too hard. Too easy, and it's not fun: you're bored and disengaged. Too hard, and you put the game down. There is an art to setting difficulty in Game Design, and arguably the same kind of art in the balancing act of course progression. Here is an image from the article Game Design Theory Applied: The Flow Channel which I think could really apply to course design:

Flow Channel Wave (from “The Art of Game Design” book by Jesse Schell)

While the right progression of difficulty can be done in a traditional course, as it is possible in a linear game (masterfully illustrated in the Uncharted series and The Last of Us), one can see the arsenal of learning maps, analytics and adaptive learning as a way to dynamically build forward a path that keeps the student in the flow zone. However, I myself feel a bit at odds with a truly adaptive model, as I want to be (or be given the impression of being) in control. Shadow of Mordor is an open-world sandbox game. While there is an ultimate goal, at any given stage I am given a set of missions I can choose from. The more rewards a mission has, the more challenging it is. So I can set my path: if I fail too hard on a mission, I can decide to change course and go tackle simpler ones. Simpler missions, smaller rewards, but those rewards allow me to prepare myself – more practice, I can buy more powers, learn new attacks, etc… – to re-try the challenging missions. Freedom to fail… And like a game, a course could be just a series of failures until you reach the final success; a course, like a game, only has 2 endings: success or abandonment.

I see GradeCraft has possibly some of that in it, letting the user choose her missions. The Shadow of Mordor universe is alive and reacts to the player's past actions, successes and failures, by opening the right set of options at the right time. All of that sounds a lot like how an adaptive course could function, giving the right set of choices to the learner to keep the progression going, while keeping the learner in control. It's obviously easier said than done, as the investment in course design to craft such an experience would, I imagine, be huge. And as much as I like the dynamic universe of Shadow of Mordor, I personally still have a preference for strong linear gameplay with a great narrative. Sometimes you think you have choices, but really you don't…

Of course there is more to Games and Learning than designing a course using game design principles. One can use a game (simulation) to acquire a first-hand understanding of complex systems. The mechanics of the game are then ideally the intent of the learning. Literal and obvious, in-your-face educational content is probably a bad sign, where the subject and the mechanics are unrelated (as illustrated by the early edutainment game MathBlaster: shooting at the asteroid with the right multiplication result!). I have often thought of the example of Civilization brought forward by Kurt Squire. How much more meaningful it must be to read about the fall of the Roman empire when your own empire has also barely survived a wave of barbarians. It sure demands a greater investment in time than reading a chapter, but the systemic understanding allows one to relate to the historical facts. But all of that is still theoretical for me, I only have 8 minutes of gameplay time on Civ!

So Gaming and Learning, I think, are on a colliding trajectory, and, if you are interested, I encourage you to follow the edX course (which, even as a lurker, I am running behind on!). But more importantly, play 🙂

LTI from B to C: a 1 hour+ tutorial covering LTI from the basics to the emerging services

Phew! It took me almost 2 months to get through, when I thought it would take 2 weeks 🙂 But here it is, a tutorial on Learning Tools Interoperability, starting with the Basic Launch and ending up with Caliper. It's long, yet short, as there is so much to cover! And there is even more cooking (as I am privileged to be part of the IMS LTI Working Group, I have an idea of the scope we want to add). But no need to run if everybody is walking, so I hope this can give an idea of why it is time to look beyond the basic launch and outcomes.

As a 1 hour 30 minutes YouTube video about LTI by a French speaker trying the best he can to soften his accent can get a bit tiresome, I've also sliced it into a YouTube playlist: https://www.youtube.com/playlist?list=PLb5mG7w3UZkM_kx0mbojgDX4qFkGQsXO_

A new new year video!

As I do every year, in a revisited way of doing photo albums that we can open in years to come to remember those past and dear memories, here is our family Bye Bye 2014 video.

As usual, it is also a way to not completely lose my skills when it comes to 3D and editing. This year it is done using Luxology Modo, as my software of choice, Softimage, was discontinued 🙁

And back from the IMS Quarterly Meeting in San Francisco

Looks like I'm repeating myself: so here I am, back again from an IMS get-together. This time it is a Quarterly meeting, a lower-key, lower-attendance gathering where we actually do work on the specs!

It was a rich set of days, hosted by Oracle on its campus. I've never stayed that long in the 'valley' before. Driving a few miles on the 101, you truly realize how this is the epicenter of the tech world, and the new Klondike, as a friend was pointing out. (And the Klondike eventually ran out of gold…) But back to the highlights of those few days:

Common Cartridge

It is nice to see a little bit of interest picking up again. Common Cartridge is indeed the shadowed sibling of the popular Learning Tools Interoperability specification. I see that as just a reflection of this new Cloud-based world: you do not bring your content into someone's system anymore, you bring their users to your system instead! However, I see Common Cartridge might still have a card to play in the Open Education Resource (OER) movement, allowing content to freely flow between Learning Management Systems and be re-hashed. I also think that, in a world where everything goes to the cloud, the Common Cartridge can be a 'downloadable' course, maybe competing with the edupub initiative.

Learning Tools Interoperability

Great progress there too! My current main focus (and a key need for my employer) is being addressed by 2 additional specifications added to the LTI ecosystem:

  • content item launch is a dedicated interaction between the tool consumer and the tool provider around the creation of resources in the tool consumer. It defines how a resource (in my current interest, an LTI Link, but it can also be an HTML fragment, an image, …) can be added without the user having to enter cumbersome parameters, by defining a UI interaction where the user is sent to the tool provider to select/create a resource and, on completion, returns to the tool consumer with the proper information for the resource to be created (see the sketch after this list).
  • outcome service: we agreed to act rapidly to provide a first version of a richer outcome service (outcome being so overloaded these days, we almost thought to rename it grade service 🙂 ). It would essentially decouple the LTI link from the Gradebook Item (aka Line Item). This allows for heterogeneous experiences, like a courseware creating multiple line items in the host gradebook without requiring a link to have been pre-created in the tool consumer.
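To give an idea of the content item piece, the selection returned to the consumer is a small JSON-LD payload, roughly like this (simplified; the details were still being drafted at the time and may differ):

{
    "@context": "http://purl.imsglobal.org/ctx/lti/v1/ContentItem",
    "@graph": [{
        "@type": "LtiLinkItem",
        "mediaType": "application/vnd.ims.lti.v1.ltilink",
        "title": "FlashNote subject A",
        "url": "https://tool.example.com/launch?subject=A"
    }]
}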

LTI 2.0 is also nearing completion. History was made when Sakai and John Tibbett's LMS did connect! And we also got to see the fresh new Coursera tattoo from Chuck!

CASA: the distributed home of Learning Apps

Interesting work being done by UCLA. From what I understand, it offers a distributed, trusted discovery network of Learning Apps and Resources, a federated network of directories. It is associated with the idea of stores, but its academic rooting and its distributed model kind of collide with the traditional idea of a 'store'. So I am a bit curious to see how that will unfold…

Next stop: Salt Lake City

The next quarterly will be hosted by Instructure, in Salt Lake City. Sounds exciting!

Back from Learning Impact 2013

IMS Learning Impact at Marriott, San Diego

Don’t let the greyish photo fool you, Learning Impact 2013 was sunny outside and inside. I usually have a great time attending the IMS Quarterlies (where the working group I am part of meets) but that was actually my first Learning Impact (IMS annual conference) and it was a blast.

First and foremost, it was so much fun meeting old friends, and making new ones (didn't think I'd spend an evening chatting around beers about philosophy – en français –, thanks Jeff!). After so many face to face meetings, I really grew a sense of belonging to that community. But it was not all fun, friends and great people; we were there for a purpose: technology and learning. And how we can make an impact in education. Rob Abel, CEO of IMS, mentioned that we were living a revolution. Revolution, I know, what a washed-out word, bastardized by years of marketing. But there is no way to deny it: as in many industries, the Internet is a tsunami that is shattering the very core assumption of education, the old 200+ year paradigm of the one-size-fits-all teacher/expert and the classroom model, and it is unclear what the new reality will be when the water recedes. It is a tale of opportunity (wannabe Google of e-learning) and survival, and quite exciting to be a part of, actually!

So a few highlights of the conference:

LTI is the buzzword: it is funny how the smallest spec, the least 'architected', the Basic LTI Launch, is the buzzword and the standards spec everybody is aware of. A simple answer to a common problem: here is a recipe for success. LTI 2.0 builds on that success and is much more architected. Will that give it the legs to build a rich plug-and-play interoperable ecosystem of learning resources and tools? Or will its apparent complexity mean LTI 1.x (rebranded Basic LTI) is the actual standard that gets widely adopted, because it is mostly 'enough'? I sure do hope for the former.

Role of the teacher: we hear things like teachers do not scale; building a student-centric experience; the broken lecture model. Can technology be used to scale up education to an internet-scale audience? Or to improve the learner experience in a more blended mentor/learner hierarchy? The answer is probably not in either one but in both at the same time.

Learning Analytics: Big (and small!) Data continues to be a significant potential disruptor. Everybody knows it, but I feel it is still unclear how it will finally be used. We like to think of predictive analytics, and think of the success of Netflix as guidance. However, Learning is not Buying; the final outcome is not easy to evaluate: what constitutes good learning? Even good grades are probably not enough (even if they are the most obvious way to measure). I see how learning maps coupled with Learning Analytics can help a user understand her path in the learning landscape she is working in.

QTI Works: I like the assertive tone! Yes, QTI works 🙂 I still wear the scars of implementing QTI 1.2 years ago, but I feel the time is ripe to take a dive into the APIP and QTI 2.1 world again: QTIWorks (the delivery engine) and Uniqurate (the authoring platform) might be the only things I need to reboot.

LMS CEOs Panel: Leave all Political Correctness at the door, thanks! I was amazed. It all started smoothly, but it did not take long for Instructure's CEO Josh Coates to give a kick in the anthill. I could not believe my ears 🙂 Not always constructive, but surely entertaining! In the end, consensus grew around the fact that LMSes should inter-operate (what a surprise at an IMS conference). I loved that they were challenged by Zach from Measured Progress to support QTI 2.1 by next year's conference. Blank stares, then 'We are big fans of QTI', 'Oh yes, we already export QTI something'… They sure agreed they would support it, while probably not having a glimpse of what it entails (and it means a lot; you don't write a QTI 2.1 APIP assessment engine that easily). Here is a pledge that I bet will quickly be forgotten!

And I could go on and on, it was a rich 3 days, I took a bunch of notes and will try to reflect a bit more in depth in later posts, for example on the excellent keynotes by Dr. Zhao on Education Paradigms, and Steve Kingler, on the remarkable work done at Western Governors University.

A last word to thank my employer, Cengage Learning, for allowing me to carry on participating in the standardization work under way at IMS.

Game of Life in Creation Platform

Time to play a bit with Python and KL… You probably know Python. But what is KL? Kernel Language, of course 🙂 It is a compiled (at run time, though) strongly typed language dedicated to writing high-performing operations, built with easy parallelism abstractions. The idea is that you write most of your code (user interface) in Python and let the performance-critical part execute in KL…

Who is this for? Well, mostly TDs (the guys who do the coding in studios), as Creation Platform is meant to be a tool to build tools! And in a DCC (Digital Content Creation, think Maya/Softimage) agnostic way. In addition to offering the Creation Platform SDK, the Fabric Engine team also proposes a set of rather astonishing modules built on top of it. All that to say you should really go check those out: Fabric Engine. And those guys are from Montreal; there are some serious skills in the city 🙂 Thanks Softimage/Avid, Autodesk and all the game studios for creating such a fertile ecosystem.

So back to my experiment: I gave the platform a try by doing a Game of Life cellular automaton, in 3D for a spin. The rules are similar to the regular 2D version, although I must say I did not get the nice emerging patterns you usually see in the 2D examples (i.e. the gliders). From an implementation standpoint, I did not achieve as much parallelism as I wanted, as KL was then missing an atomic add operation. But nevertheless it worked quite fine. The code is on github; it might require some tweaking if you run it against a newer version of the platform: github repo.
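For reference, here is the core update rule rewritten as a plain Python/NumPy sketch (not the KL version; the birth/survive thresholds are my own pick, as 3D life rules are very much a matter of tuning):

import numpy as np

def step(grid, birth=(5,), survive=(4, 5)):
    # Count the 26 neighbors of each cell by summing shifted copies
    # of the grid (toroidal wrap-around via np.roll).
    neighbors = sum(
        np.roll(np.roll(np.roll(grid, dx, 0), dy, 1), dz, 2)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
        if (dx, dy, dz) != (0, 0, 0)
    )
    born = (grid == 0) & np.isin(neighbors, birth)
    survives = (grid == 1) & np.isin(neighbors, survive)
    return (born | survives).astype(int)

# Random 16x16x16 soup with ~20% live cells, advanced by one generation.
grid = (np.random.default_rng(0).random((16, 16, 16)) < 0.2).astype(int)
grid = step(grid)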