LTI and OAuth 2.0, towards a more decentralized and individualized LTI?

This is a bit of a follow-up to a YouTube video I posted a couple of months ago on LTI vs OAuth 2.0. The 'vs' was not really necessary; the question is rather whether it would make sense to bring OAuth 2.0 into the LTI spec. This is a continuation of that reflection, as writing about it helps me think about it 🙂

But first a bit of history…

At first, there was the launch. When LTI re-emerged as Basic LTI, it was all centered around the launch. In one packed browser redirect, all the contextual information was securely passed to the tool provider. Secure, how? By using a shared secret between both parties to sign the whole payload. The idea of signing is easy to understand but often hard to get right, as a single small error leads to a totally wrong signature. Impossible to know if you completely foobared your implementation, or just forgot to escape one character somewhere in the signing flow. To simplify that work, IMS decided to leverage a well-established signing process described as part of the OAuth 1.0 spec. Thus LTI is not using OAuth per se, just a technical part of it: the form post signing. But doing so allowed the re-use of existing libraries to sign payloads and verify signatures, greatly simplifying the work of the implementers.
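
To make the signing step concrete, here is a minimal Python sketch of the HMAC-SHA1 signing process that OAuth 1.0 describes and LTI borrows; the function names are mine and the parameter handling is simplified, so treat it as an illustration rather than a reference implementation:

import base64, hashlib, hmac, urllib.parse

def percent_encode(value: str) -> str:
    # OAuth 1.0 requires strict RFC 3986 percent-encoding
    return urllib.parse.quote(str(value), safe="")

def sign_launch(method: str, url: str, params: dict, consumer_secret: str) -> str:
    # 1. Sort and encode all launch + oauth_* parameters
    normalized = "&".join(
        f"{percent_encode(k)}={percent_encode(v)}"
        for k, v in sorted(params.items())
    )
    # 2. Build the signature base string: METHOD & URL & normalized params
    base_string = "&".join([method.upper(), percent_encode(url), percent_encode(normalized)])
    # 3. HMAC-SHA1 keyed with "consumer_secret&" (no token secret in an LTI launch)
    key = percent_encode(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

A single parameter encoded differently on either side is enough to produce a completely different signature, which is exactly the class of bug mentioned above.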

But then time passed…

And with that, one started to want to do more with LTI. Launching, sure, that's fine, but one now demanded more interaction with the Learning Platform, starting with reporting grades. But that was just opening the door to a wider set of services. How do we secure those? Well, it turned out that there was an extension to OAuth 1.0 that proved timely: the ability to sign any type of request body payload, not just form-url-encoded ones. This extension is called body hashing. You compute the hash of the body, and include the hash value in the signed OAuth 1.0 header. Once you have verified the OAuth header was properly signed, you know the hash of the body. You just need to recompute it on your side, and verify it matches. Sure, the body of the payload is not encrypted, but you can trust it has not been altered, and you can trust the source. That gave a sound foundation to address the security of LTI services.
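
As an illustration, assuming the HMAC-SHA1 signature method (and therefore SHA-1 for the hash, as in the oauth_body_hash extension), the body hashing boils down to something like this sketch:

import base64, hashlib

def compute_body_hash(body: bytes) -> str:
    # oauth_body_hash: Base64-encoded SHA-1 digest of the raw request body
    return base64.b64encode(hashlib.sha1(body).digest()).decode()

def verify_body_hash(body: bytes, oauth_body_hash: str) -> bool:
    # The hash travels inside the signed OAuth header, so once the signature
    # checks out we only need to recompute the hash and compare.
    return compute_body_hash(body) == oauth_body_hash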

But then time passed…

SSL/HTTPS has become the norm. OAuth 2.0 is the norm. Having to sign your payload in your application code, not so. And OAuth 1.0 is being deprecated. Sure, there has been some well-known controversy around OAuth 2.0, but more or less, the world has moved on. Should LTI move there too? If so, how? LTI was not really doing an OAuth flow anyway, so how could OAuth 2.0 even be applicable to the simpler interaction model needed by current LTI?

No need for the user to grant access

Under the LTI model, authorization is granted to the tool provider, identified by its consumer key and authorized using the shared secret. While a consumer may restrict what a given provider may do (can it get the user's email? can it post grades? can it access the roster?), it always does so for the tool at large.

The negotiation for access and authorization can either be manual, as in LTI 1.x where the admin or instructor manually configures the permissions for a given tool, or semi-automatic, using the registration flow from LTI 2.0 that allows a tool to negotiate a deployment contract (aka tool proxy) with the consumer, establishing the key/secret to use and the enabled services. In either case, the host learning system trusts the app to make good use of its granted rights, as those are not further limited by the acting user. In fact, the API never even knows who the acting user is, if there is one.

So why not Basic Authentication?

In that context, a possible replacement for the OAuth 1.0a body hashing is to use simple Basic authentication over SSL, where the tool key and secret, just Base64 encoded, are passed on every request. Doing so greatly simplifies the work of the implementer, and lets the SSL layer do the heavy lifting:

  • It verifies the host learning system is indeed the recipient, by refusing connections to hosts without a valid certificate signed by a shared certificate authority
  • It ensures the content has not been tampered with
  • The Authorization header contains both the key and the secret, identifying the caller and allowing the learning system to apply the appropriate rights to the incoming request
    If the secret is compromised, a re-registration process may be triggered to replace it with a new one.

After all, in OAuth 2.0 the client key and secret are also passed directly (for example to acquire the access token).
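
As a purely illustrative sketch (the endpoint, payload, and credentials below are made up, not part of any spec), posting a grade with Basic authentication over SSL could look like this in Python, using the requests library:

import base64
import requests  # assumed HTTP client

TOOL_KEY = "my-tool-key"        # illustrative credentials agreed at registration
TOOL_SECRET = "my-tool-secret"

def basic_auth_header(key: str, secret: str) -> dict:
    # Basic authentication: Base64("key:secret"), sent over TLS only
    token = base64.b64encode(f"{key}:{secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Hypothetical grade-posting call, authenticated on every request
response = requests.post(
    "https://lms.example.edu/lti/outcomes",   # illustrative endpoint
    json={"score": 0.87, "user_id": "23e044:13647b54f5:-7ff4"},
    headers=basic_auth_header(TOOL_KEY, TOOL_SECRET),
    # verify=True (the default) ensures the certificate chain is validated
)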

Use Access Token instead?

Although passed exclusively over SSL, and as a request header, one might still frown at having the secret passed on the wire on every service request, encrypted or not. In that case, we could think of passing a more limited access token instead.

A token might present additional benefits over the direct use of the secret:

  • A token might only be scoped for operation under a limited context, for example a course
  • A token might only be short-lived; although short-livedness is a weak security argument, as it might give a false sense of safety (it’s ok if it is a bit vulnerable as it will not be active for long…)

Of course the use of a token raises the question of how the token is acquired in the first place. One may envision a token service, but that brings us back in a circle, as we would need to fall back on Basic Auth to acquire the Access Token.
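
To make that circularity visible, here is a minimal sketch of what such a token service could look like, borrowing OAuth 2.0's client_credentials grant; the endpoint URL and scope syntax are hypothetical, not taken from any LTI document:

import requests

def get_access_token(token_url: str, key: str, secret: str, scope: str) -> str:
    # Standard OAuth 2.0 client_credentials request, authenticated with
    # the tool key/secret via Basic auth, which is exactly why this cannot
    # fully replace Basic auth: we still need it to bootstrap the token.
    resp = requests.post(
        token_url,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(key, secret),  # requests sends this as a Basic Authorization header
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Illustrative usage: a token scoped to a single course context
token = get_access_token(
    "https://lms.example.edu/oauth2/token",   # hypothetical endpoint
    "my-tool-key", "my-tool-secret",
    scope="context:7st3d outcomes.write",     # hypothetical scope syntax
)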

An individual token

But if we acquire an access token, we might as well extend it to not only be limited by its time-to-live or context scope, but to truly carry the identity and restrictions of the user too. In short, a true OAuth 2.0 token.


With a real user access token, the Learning Platform can now know who changed a grade. Goodbye “the grade was changed to 87% by System”.

How do you acquire the token?

Remember that the launch shortcuts the authorization flow of OAuth. By virtue of launching, the tool is granted the right to do the operations it had in the contract; the user does not have to grant any extra rights (at least not those already covered by the tool deployment). So the Tool just wants to use its tool privileges to get an access token. How?

  1. A service can be added to grant an access token: the request would contain the user id, the context id, and the client id and secret. This would yield the access token the same way an Access Token Request would in an OAuth 2.0 flow
  2. The LTI Launch can contain an authorization code that can be traded for a token using the Access Token Request flow (see the sketch after this list)
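
A rough sketch of option 2, with a hypothetical token endpoint: the launch would carry the authorization code, and the tool would trade it for a user-scoped token the usual OAuth 2.0 way:

import requests

def exchange_launch_code(token_url: str, code: str, key: str, secret: str) -> dict:
    # The launch carries an authorization code that the tool trades for a
    # user-scoped access token, as in a regular OAuth 2.0 Access Token Request
    # (no user consent screen: the launch itself stands in for it).
    resp = requests.post(
        token_url,
        data={"grant_type": "authorization_code", "code": code},
        auth=(key, secret),
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"access_token": "...", "expires_in": 3600, ...}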

Should the user’s entitlements be applied on top of the tool privileges?

Now that we can know the acting user on each action, the question is then: should the token be used to enforce authorization? For example, can you use an instructor's launch token from course A to post a grade in course B where that instructor is not enrolled? The original LTI trust model is to give a tool access to all its proxied resources in the host, so a tool could change any of its grades in that example. Using the token here to enforce authorization could make the tool service call less predictable: how do I know the host will not reject my call if I use a student's token to post that student's grade (a legit call, in particular for autograding)? One would have to clarify the expected authorization rules for the various services.

A ‘Tool Root User’ token?

And is there always an acting user? For example, during batch operations? We might need to define a ‘Tool User’, the ‘root’ user of that LTI integration. But, theoretically, the only way to enter a tool UI is to go through a launch, so should we even want to have that root user? And if the way into the tool is not through a launch, then maybe there is a better way to acquire a token… it’s called the OAuth Authorization flow 🙂 We’ll get back to that in a bit…

But what about the launch?

Ok, we might possibly have a way to unify the LTI services behind an OAuth 2.0 mechanic, but what about the launch? As we’ve seen, it is still the cornerstone of an LTI integration. It packs a huge set of data into one redirect:

  • who the user is (which can be used as a way to authenticate the user)
  • what is the role in the current context
  • the current context information
  • and the intent of the link: what is launched?
  • resolved resource URLs to communicate back (an interesting twist on a hypermedia API)

Since all of that is in a client redirect, there is no other means but to sign it, and for that OAuth 1.0 is a great choice. How can we replace that? Part of the data we pass along is the user’s information, and there is already something to authenticate a user in the OAuth ecosystem: it’s called OpenID Connect. Let’s see how we could adopt a similar flow for the launch.

In OpenID Connect, once the user has granted the right to authenticate herself, there is a redirect to the client code with an id token. The interesting thing about that token is that it is an actual JSON payload encoded in Base64. In order to trust that token, so it may not be re-used, for example to authenticate inside another app using the same identity provider, we must verify that the token has been issued for us. For that, there is a service, the checkId end point. But that means an extra REST call… However the JSON data is not just any kind of JSON, it’s a JSON Web Token aka JWT, which is signed using the client secret. So, rather than call the check id endpoint, one may instead verify the signature using the shared secret. A payload signed with a shared secret, now that does sound a lot like our LTI launch, doesn’t it? With the extra luxury of being able to use an end point (over SSL) instead of doing signature verification, which is more in the spirit of OAuth 2.0. Best of both worlds…
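
For illustration, verifying such an id token locally with the shared secret can be as short as this sketch (using the PyJWT library, and assuming an HMAC-signed token):

import jwt  # PyJWT, assumed available

def verify_id_token(id_token: str, client_secret: str, client_id: str) -> dict:
    # Verifies the HMAC signature locally with the shared client secret,
    # instead of calling a check-id endpoint, and checks the token was
    # issued for us (audience = our client id).
    return jwt.decode(
        id_token,
        client_secret,
        algorithms=["HS256"],
        audience=client_id,
    )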

So let’s just see how a brute force port of that approach could work for an LTI launch:
Remember that in LTI we don’t have a flow to go to the identity provider (the LMS), since we are already logged into it and launching from it; consent is implicit in being in the course and launching (although the LMS might prompt the user on first launch for explicit consent to launch to an external tool).
So now we can replace the OAuth 1.0a POST by a JWT token containing the same information as we used before; let’s call this a launch_token.

However, how do we prevent a replay of that launch, for example being able to relaunch an instructor launch that would have been captured? OpenID Connect, being initiated from the client app, uses server state (i.e. the user’s session) to make sure the session that started the flow is the same as the one finalizing it. Since a launch starts from the Learning Platform, the session possibly does not even exist yet on the Tool side. So maybe let’s just resurrect the nonce and timestamp from OAuth 1.0a and include them in the JWT payload. Verifying the nonce and the timestamp can then be the job of the checkLaunch end point. By preventing replay, we make a launch a one-time operation, limiting cross-site request forgery attacks to launches that have not yet been consumed. It remains key that a launch token is only computed at the time of the actual launch (and a short timestamp window actually prevents stale links from being pre-rendered when a set of links is displayed on screen). Since the token can only be launched once, it might be safer to keep a POST, as a GET redirect can more easily be replayed by the browser (reload, back button).
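
Here is a minimal sketch of what that launch_token and the checkLaunch verification could look like; the freshness window, nonce store, and function names are illustrative assumptions, not spec:

import time, uuid
import jwt  # PyJWT, assumed available

SEEN_NONCES = set()          # in practice a shared store with expiry
MAX_AGE_SECONDS = 300        # illustrative freshness window

def build_launch_token(launch_data: dict, shared_secret: str) -> str:
    # Computed by the Learning Platform at the moment of the actual launch
    claims = dict(launch_data, nonce=uuid.uuid4().hex, timestamp=int(time.time()))
    return jwt.encode(claims, shared_secret, algorithm="HS256")

def check_launch(launch_token: str, shared_secret: str) -> dict:
    # Tool side: verify the signature, then reject stale or replayed launches
    claims = jwt.decode(launch_token, shared_secret, algorithms=["HS256"])
    if time.time() - claims["timestamp"] > MAX_AGE_SECONDS:
        raise ValueError("launch token expired")
    if claims["nonce"] in SEEN_NONCES:
        raise ValueError("launch token already consumed")
    SEEN_NONCES.add(claims["nonce"])
    return claims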

Being a JSON object now, the payload might be more structured and unified with the JSON-LD approach adopted in the rest of the API, but the goal should still be to keep the verbosity under control, in particular if we want to allow for GET requests. Here is an early rendition of how it could look:

{
    "@context": "http://imsglobal.org/contexts/lti/basiclaunch.jsonld",
    "lti_version": "LTI-1p0",
    "resource_link_id": "7-10e453dc-3bf9",
    "context_id": "7st3d",
    "context_label": "Medical Term",
    "context_title": "Medical Terminology for Health Professionals - Section A",
    "user_id": "23e044:13647b54f5:-7ff4"
    "roles": ["Instructor"],
    "launch_presentation_locale": "en - US",
    "launch_presentation_return_url": "about: blank",
    "lis_outcome_service_url": "http: //local.lp/nb/service/ltiOutcome/pox/",
    "lis_person_contact_email_primary": "claude.vervoort@fancylp.com",
    "lis_person_name_family": "vervoort",
    "lis_person_name_given": "claude",
    "lis_person_sourcedid": "bae854c0f5e1a6:325ee044:136f47b54f5:7ff4",
    "resource_link_title": "FlashNote subject A",
    "tool_consumer_info_product_family_code": "fancylp",
    "tool_consumer_info_version": "alpha-1",
    "tool_consumer_instance_guid": "local.lp"
    "nonce": "1444750410184860000",
    "timestamp": 1444750410,
    "custom": {
        "search": "subjectA"
    },
    "extension": {
        "somekey": "someval"
    }
}

No more a vassal of the Learning Platform!

While we may now have a modernized stack, easier to implement, giving us key information (the current user calling the API), and compatible with the platform's OAuth-protected API at large, the launch flow still forces an asymmetrical relationship between the tool and the hosting learning platform. One must enter the platform first!

What if we wanted to break out of that enclosure and initiate the relationship from the Tool?

For example, as a student, I might have an app that would like to automate my connection with a course. I would start the app, and the app would ask me: do you want to connect with your institution? I would pick my institution from a list, and click Connect! Then would follow a typical OAuth flow with that institution, where I would for example grant access to my enrollments. The client application could then automatically get and synchronize my enrollments.
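
Very roughly, and with entirely hypothetical institution endpoints and scope names, that tool-initiated flow could look like this:

from urllib.parse import urlencode
import requests

# Hypothetical endpoints published by the institution's learning platform
AUTHORIZE_URL = "https://lms.institution.example/oauth2/authorize"
TOKEN_URL = "https://lms.institution.example/oauth2/token"

def authorization_url(client_id: str, redirect_uri: str) -> str:
    # Step 1: send the student to the institution to log in and grant access
    return AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "enrollments.read",      # hypothetical standardized scope
    })

def fetch_enrollments(code: str, client_id: str, client_secret: str, redirect_uri: str):
    # Step 2: trade the returned code for a token, then call the standardized API
    token = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    }, auth=(client_id, client_secret)).json()["access_token"]
    return requests.get(
        "https://lms.institution.example/lti/enrollments",  # hypothetical service
        headers={"Authorization": f"Bearer {token}"},
    ).json()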

The key here is that LTI would offer a well-standardized set of e-learning services, so that as an application provider I can have a common integration regardless of the learning platform used by the institution: an API portal.

Of course that moves the Learning Platforms, and the institutions they support, a bit into the background. The integration is more at the data layer than the UI layer. Is that something an LMS Vendor would do? Would they accept not being the entry point? Would it be ok for someone else to develop the cool app students and instructors would actually use? Would it be possible to really build a standardized API rich enough to be useful yet universal enough to be truly worth the investment from the implementers?

With the world of apps, we’re moving from one place that does it all to an exploded universe of dedicated apps that each do a few things great. Connecting those apps together through the learning hub could possibly be the next mission of LTI. Learning App Interoperability!

6 thoughts on “LTI and OAuth 2.0, towards a more decentralized and individualized LTI?”

  1. Carina

    Hi Claude! Great, this article seems very interesting for someone who develops educational apps.

    I’ve got a Virtual Campus on Moodle 2.6, and I need to implement the use of an external tool with the LTI protocol. The external tool is an app developed by a vendor, who tested the connection with other LMSs and it worked ok.

    I’m having problems testing this LTI connection. My LMS users sometimes get an error message “bad basic lti launch request” and other times they connect ok to the app.
    We are very confused about what the problem could be. For example, there are people that connect ok using Chrome, and don’t with Firefox or IE. Other people can’t connect through Chrome either.

    Did you experience this before? Or maybe you know how to find the cause?

    A bit more context: the vendor of the app gave me parameters for setting up the connection in Moodle. Our Moodle is hosted on our own servers, behind a proxy and firewall. App and LMS both use secure connections (SSL).

    Thank you in advance!

    Carina

    1. claude Post author

      It’s hard to say; if the error is on the app side, which it seems like it is, it might be caused by encoding issues: the course or user name may contain some characters that are not escaped in the same way when Moodle builds the signature for the launch (using, I imagine, UTF-8) and when the app actually receives the data, causing a signature mismatch leading to the bad lti launch error. One thing is to make sure your LMS web pages are always encoded in UTF-8 (view page info in your browser), and also that the app on the other side assumes that encoding. A total shot in the dark, but I know it has happened to me in the past. Good luck! Claude.

  2. Lukas

    Great article, thanks a lot! I see it was written a long time ago, do you now have final answers to your questions? I like the idea of using a JWT token for the launch button as a replacement for the old OAuth 1.0 system. But is there meanwhile some standard for this? If I understand OAuth 2.0 correctly, a one-request login is against the idea of OAuth 2.0, so what would be the final solution for this button?

    1. claude Post author

      Yes! There is an updated security document in the works right now that would replace the form-based OAuth 1.0 launch by a signed token (it still needs to be passed as a form POST though, as it is still a client redirect). I don’t think it’s publicly available yet, but I think it should be soon, just check the IMS Global website. It will also propose to use an OAuth token to secure callbacks to the LMS. However it will be a while before it gets adopted, I imagine.

      As for OAuth 3-legged authentication, it is true that right now the LTI flow does not explicitly involve the student in the launch. The student implicitly acknowledges sharing some basic info with the App by accepting to be in the course, combined with the agreements between the Tool and the Institution that led to that tool being available there. However I know some platforms will prompt the user for an explicit acknowledgment prior to launch (something like “By launching this app, you agree to share your name… with this provider.”)

      The launch is still the key integration mechanic; I do hope we can eventually allow for a reverse flow where the connection originates from the Learning App. It would be great for mobile, but not only. I keep pushing the idea 🙂

