Was gaming the undercurrent theme @LearningImpact 2015?

Maybe it is just confirmation bias at play, as I have an interest in the matter and am currently following the very well done and insightful edX course Design and Development of Games for Learning. But I wonder whether gaming was the undercurrent theme this year… I don’t remember it ever being mentioned so often…

There were of course a few explicit mentions, like Collegis Education, but the GradeCraft platform is the one that first comes to mind. Developed by the University of Michigan, and used in actual courses (2,000 students) including – how à propos – a course on video games and learning, it lets students take control and experiment: each activity is worth a set number of points, you can assemble activities as you want, and you can even simulate, using the grade predictor, what your projected final grade would be. Add to that the possibility of earning badges and ranks (who wants to stay a rat?), and you can have a potentially great experience… with the caveat of the extra investment in course design. It also puts more of a burden on the student, who has to make the extra effort to choose their own path, compared to a more traditional linear course. However, it still lives within the constraints of a traditional course, which has an end date and a final grade. Even if you can experiment, I was not sure I was seeing a key element of gaming: the freedom to fail.
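
I have no insight into GradeCraft’s actual implementation, but the core idea of a grade predictor is simple enough to sketch. Here is a toy version in Python; all the activity names, point values and thresholds are made up for illustration:

```python
# Toy sketch of a GradeCraft-style grade predictor (all names and
# thresholds are invented; the real platform is far more sophisticated).

ACTIVITIES = {"quiz-1": 100, "boss-fight-essay": 300, "wiki-mission": 200}
THRESHOLDS = [(500, "A"), (400, "B"), (300, "C")]  # min points -> grade

def predict(earned: dict, planned: list) -> str:
    """Project the final grade from points earned plus planned activities."""
    total = sum(earned.values()) + sum(ACTIVITIES[a] for a in planned)
    for minimum, grade in THRESHOLDS:
        if total >= minimum:
            return grade
    return "F"

# A student who aced quiz-1 and plans to attempt the essay:
print(predict({"quiz-1": 95}, ["boss-fight-essay"]))  # -> "C" (395 points)
```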

Freedom to fail, failing forward: that was something we were hearing in another context too, that of learning analytics, competency and learning maps, and adaptive learning. A game is a system of rules you navigate towards a final outcome, progressively harder, but never too hard. Too easy, and it’s not fun: you’re bored and disengaged. Too hard, and you put the game down. There is an art to setting difficulty in game design, and arguably the same kind of art goes into the balancing act of course progression. Here is an image from the article Game Design Theory Applied: The Flow Channel which I think could really apply to course design:

Flow Channel Wave (from “The Art of Game Design” book by Jesse Schell)
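
To make the channel concrete, here is a toy difficulty adjuster – entirely my own sketch, not Schell’s model – that nudges the difficulty up or down so the learner’s recent success rate stays inside a target band, i.e. inside the channel. The band bounds and step size are arbitrary illustration values:

```python
# Toy flow-channel controller: keep the success rate inside a target band.

def adjust_difficulty(difficulty: float, recent_success_rate: float) -> float:
    LOW, HIGH, STEP = 0.6, 0.85, 0.1   # arbitrary illustration values
    if recent_success_rate > HIGH:     # too easy -> boredom: raise difficulty
        return difficulty + STEP
    if recent_success_rate < LOW:      # too hard -> anxiety: lower difficulty
        return max(0.0, difficulty - STEP)
    return difficulty                  # in the flow channel: stay the course
```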

While the right progression of difficulty can be achieved in a traditional course, as it is in a linear game (masterfully illustrated by the Uncharted series and The Last of Us), one can see the arsenal of learning maps, analytics and adaptive learning as a way to dynamically build a path forward that keeps the student in the flow zone. However, I feel a bit at odds with a truly adaptive model, as I want to be in control (or at least be given the impression of control). Take Shadow of Mordor, an open-world sandbox game. While there is an ultimate goal, at any given stage I am given a set of missions I can choose from. The more rewards a mission has, the more challenging it is. So I can set my own path: if I fail too hard at a mission, I can decide to change course and go tackle simpler ones. Simpler missions, smaller rewards, but those rewards allow me to prepare myself – more practice, I can buy more powers, learn new attacks, etc. – to retry the challenging missions. Freedom to fail… And like a game, a course could be just a series of failures until you reach the final success; a course, like a game, has only two endings: success or abandonment.

I see GradeCraft possibly has some of that in it, letting users choose their missions. The Shadow of Mordor universe is alive and reacts to the player’s past actions, successes and failures, opening the right set of options at the right time. All of that sounds a lot like how an adaptive course could function: giving the learner the right set of choices to keep the progression going, while keeping the learner in control. It’s obviously easier said than done, as the investment in course design to craft such an experience would, I imagine, be huge. And as much as I like the dynamic universe of Shadow of Mordor, I personally still have a preference for strong linear gameplay with a great narrative. Sometimes you think you have choices, but really you don’t…

Of course there is more to games and learning than designing a course using game design principles. One can use a game (a simulation) to acquire a first-hand understanding of complex systems. The mechanics of the game are then, ideally, the intent of the learning. Literal, in-your-face educational content, where the subject and the mechanics are unrelated, is probably a bad sign (as illustrated by the early edutainment game Math Blaster: shooting at the asteroid with the right multiplication result!). I often think of the example of Civilization brought forward by Kurt Squire. How much more meaningful it must be to read about the fall of the Roman Empire when your own empire has also barely survived a wave of barbarians. It sure demands a greater investment in time than reading a chapter, but the systemic understanding allows one to relate to the historical facts. But all of that is still theoretical for me: I only have 8 minutes of gameplay time on Civ!

So gaming and learning are, I think, on a collision course, and if you are interested, I encourage you to follow the edX course (which, even as a lurker, I am running behind on!). But more importantly: play :)

LTI from B to C: a 1-hour+ tutorial covering LTI from the basics to the emerging services

Phew! It took me almost 2 months to get through, when I thought it would take 2 weeks :) But here it is: a tutorial on Learning Tools Interoperability, starting with the Basic Launch and ending with Caliper. It’s long, yet short, as there is so much to cover! And there is even more cooking (as I am privileged to be part of the IMS LTI Working Group, I have an idea of the scope we want to add). But there is no need to run if everybody is walking, so I hope this gives an idea of why it is time to look beyond the basic launch and outcomes.
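
For readers who have never seen one: a Basic LTI 1.x launch is just a browser form POST of well-known parameters, signed with OAuth 1.0 (HMAC-SHA1). Here is a minimal sketch of the tool consumer side in Python using the oauthlib package; the parameter names come from the LTI 1.x spec, while the URL, key and secret are placeholders:

```python
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "link-42",   # placeholder values below
    "user_id": "student-1",
    "roles": "Learner",
}

# Sign the form body with the OAuth 1.0 consumer key/secret shared
# out of band between the tool consumer and the tool provider.
client = Client("my-consumer-key", client_secret="my-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    "https://tool.example.com/launch",
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# 'body' now carries the oauth_* parameters; in a real launch the browser
# auto-submits it as a form POST to the tool provider.
print(body)
```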

As a 1-hour-30-minute YouTube video about LTI, by a French speaker trying his best to soften his accent, can get a bit tiresome, I’ve also sliced it into a YouTube playlist: https://www.youtube.com/playlist?list=PLb5mG7w3UZkM_kx0mbojgDX4qFkGQsXO_


A new New Year video!

As I do every year, here is our family Bye Bye 2014 video – a revisited way of doing photo albums, one we can open in years to come to remember those dear past memories.

As usual, it is also a way to not completely lose my skills when it comes to 3D and editing. This year it was done using Luxology Modo, as my software of choice, Softimage, was discontinued :(

LTI at work @Learning Impact 2014

I’ve been pushing Learning Tools Interoperability internally @Cengage so that it becomes the foundation of the integration API for our courseware platform, MindTap. We recently achieved our first LTI integration, with a video assessment engine called YouSeeU (think: the students do not watch a video, they are the ones being recorded and evaluated). I gave a presentation on it together with YouSeeU’s Chief Learning Officer, Jeff Lewis, PhD. Here are the slides (made with reveal.js), and also a case study I wrote on the same matter.

It was, as usual, a great opportunity to meet old friends and colleagues, and to make new ones! New Orleans is the best city for that… so rich with history and grit. I feel it is the opposite of Las Vegas, and much more enjoyable! Here are a few pix from the event.

Oh, and that was also my 1st tweet ever – about time :)


Confoo: what I learned today (1)

I’m attending the Confoo conference this year. As with all these conferences, there is a lot of information flowing, so this is my self-assigned exercise: remembering, at the end of the day, the nice things I learned. So what did I learn today?

I started with a non-tech discussion about freelancing. I’ve never freelanced, but it seems everybody else is doing it, so that 101 presentation was great. Some random thoughts out of it: hourly rate = yearly rate / 1900 × 1.25, to account for the overhead of insurance, retirement, … Have a strong relationship with the headhunter; avoid exclusivity. A corporation can pay you via dividends, salary or a blend, and protects you. Capital sin for a staffing agency: sending your resume to a client without letting you know first. The presenter, Martin Handfield (stratweb.ca), seemed a trustworthy guy; at least I imagine he got quite a few extra resumes out of that session :)
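
In code form, with a hypothetical salary, the rule of thumb works out like this:

```python
# Rule of thumb from the session: ~1900 billable hours a year, plus a
# 25% markup for insurance, retirement and other overhead.
yearly_rate = 95_000  # hypothetical salaried equivalent, in $
hourly_rate = yearly_rate / 1900 * 1.25
print(round(hourly_rate, 2))  # -> 62.5
```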

Then, a bit of a refresher on Java EE 7. It’s been a while since I left the EJB shores, so it was fun to hear terms like container-managed transactions again; it brought back old memories. I like the direction it’s taking, using annotations like @TransactionScoped and dependency injection rather than monolithic bean structures. The JAX-RS Client might actually be something we could use in place of using HttpClient directly. It seems to have some potential to simplify client code communicating with REST endpoints, in the same way Jersey simplified the exposing of REST endpoints.

The following session on CSS by Rachel Andrew was great: multi-column layout, flexbox and grid models, CSS regions and exclusions (go here for more). I could not resist trying the multi-column layout directly on our LTI test app:

After lunch, a session going back to the basics of Linked Data and RDF. Now that I have been acquainted with JSON-LD and REST API design, going back to the core was a good refresher. I finally understood why SPARQL, the RDF query language, is positioned as a standardized alternative to one-off APIs for exposing data, allowing the cross-pollination of content that API silos otherwise make very difficult (the open data movement).
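
As an illustration of that “standardized alternative to one-off APIs”, here is a sketch that asks DBpedia’s public SPARQL endpoint a question no custom API was ever written for, using the SPARQLWrapper package:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Any SPARQL endpoint answers arbitrary queries over its RDF graph;
# nobody had to design a dedicated "capital of a country" API for this.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?capital WHERE { dbr:France dbo:capital ?capital }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["capital"]["value"])  # -> http://dbpedia.org/resource/Paris
```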

And to wrap up the day, a lively presentation on responsive design. Is it only about media queries? In a way, the web was meant to be responsive from early on (liquid flow). So nothing new under the sun in this talk, but a few snippets of good sense: the viewport approach is a coarse one (basically swapping x alternate designs based on x viewport sizes), to which one would prefer a more progressive adaptability (almost having each component be responsive on its own, with softer transitions). And also that responsive design is not the only answer: sometimes a distinct mobile experience (most likely delivered as an app) might actually be a better answer, as it can natively and fully leverage the device features (GPS/camera/offline/…). An app does not have to replicate the website features, but it needs to offer a distinctly mobile experience.

That’s it for the day! Back at it tomorrow…

Bye Bye 2013

The ‘Bye Bye’ family souvenir video is my only excuse for keeping myself from getting rusty in 3D and video stuff (as my main work is taking a fair bit of my time).

This year, no particles, no wave spectrum. I decided to ramp up a bit with Modo, as I might just stick with Softimage 2014 from now on and invest my hobby money in Modo instead (it seems a more complete toolbox for the hobbyist, and the amount of change I was getting in Softimage for the price of the subscription did not really seem worth it).

2013 sketch

So here, it’s a simple parallax trick around ‘2013’. I sketched the 2013 and noted the various elevations (0-5) and whether each block top was flat, descending or rising. I wrote a little Python script to convert those values into grid data I could feed directly into Softimage ICE using the String to Array node. That gave the growth its final destination: where to grow the points making up each cube. The grid itself was done using ICE geometry, replicating a simple cube and tagging the proper material depending on whether it is a ‘black’ square or a ‘white’ one. Finally, the whole thing was exported as MDD and loaded into Modo for the lighting, shading and camera work, and of course the final rendering. I must say, all in all, it worked quite smoothly! If you are interested, here is the Softimage scene: byebye2013.scn.
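
The actual script was a throwaway, but the gist was something like this – note that the sketch format here is invented (one row of elevation digits per line), flattened into the space-separated string the String to Array node parses:

```python
# Flatten a hand-sketched elevation grid (one digit 0-5 per block) into
# a single space-separated string for ICE's "String to Array" node.
sketch = [
    "0123210",
    "0135310",
    "0123210",
]
values = [c for row in sketch for c in row]
grid_string = " ".join(values)
print(grid_string)  # "0 1 2 3 2 1 0 0 1 3 5 3 1 0 ..."
```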

So what to say about Modo? I’ve tried to love it for so long, but it never clicked like Softimage did. I’m never quite sure what to expect, really, where Softimage almost always makes sense. I’m saddened that Softimage seems to be stalling. I’ll sure keep using it as much as possible, as I have a deep investment in it, but as a hobbyist I’d much rather fork over $400 to Modo every other year for a substantial update, rather than $900 a year for minor updates (and now it would cost me $3k, I believe!). Hopefully Modo 801 will bring me joy!

On that note, happy new year!


And back from the IMS Quarterly Meeting in San Francisco

Looks like I’m repeating myself: back again from an IMS get-together. This time it is a Quarterly meeting, a lower-key, lower-attendance gathering where we actually do work on the specs!

It was a rich set of days, hosted by Oracle on its campus. I’ve never stayed that long in the ‘valley’ before. Driving a few miles on the 101, you truly realize how this is the epicenter of the tech world, and the new Klondike, as a friend pointed out (and the Klondike eventually ran out of gold…). But back to the highlights of those few days:

Common Cartridge

It is nice to see a little bit of interest picking up again. Common Cartridge is indeed the shadowed sibling of the popular Learning Tools Interoperability specification. I see that as just a reflection of this new cloud-based world: you no longer bring your content into someone’s system; you bring their users to your system instead! However, I think Common Cartridge might still have a card to play in the Open Educational Resources (OER) movement, allowing content to flow freely between Learning Management Systems and be re-hashed. I also think that in a world where everything goes to the cloud, the Common Cartridge can be a ‘downloadable’ course, maybe competing with the edupub initiative.

Learning Tools Interoperability

Great progress there too! My current main focus (and a key need for my employer) is being addressed by 2 additional specifications in the LTI ecosystem:

  • content item launch: a dedicated interaction between the tool consumer and the tool provider around the creation of resources in the tool consumer. It defines how a resource (in my current interest, an LTI link, but it can also be an HTML fragment, an image, …) can be added without the user having to enter cumbersome parameters: the user is sent to the tool provider to select or create a resource and, on completion, the information needed to create the resource is returned to the tool consumer (see the sketch after this list).
  • outcomes service: we agreed to act rapidly to provide a first version of a richer outcomes service (‘outcome’ being so overloaded these days, we almost thought of renaming it the grade service :) ). It would essentially decouple the LTI link from the gradebook item (aka line item). This allows for heterogeneous experiences, like a courseware creating multiple line items in the host gradebook without requiring a link to have been pre-created in the tool consumer.
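
To give a feel for the content-item flow, here is a sketch of the kind of message the tool consumer sends to start the selection. The spec was still in flight at the time of writing, so treat the exact parameter names as indicative; like any LTI message, it is an OAuth-signed form POST:

```python
# Sketch of a ContentItemSelectionRequest: the consumer opens the
# provider's selection UI with these form parameters (OAuth-signed,
# like a regular launch). All values below are placeholders.
content_item_request = {
    "lti_message_type": "ContentItemSelectionRequest",
    "lti_version": "LTI-1p0",
    # what the consumer is willing to receive back:
    "accept_media_types": "application/vnd.ims.lti.v1.ltilink",
    "accept_presentation_document_targets": "frame,iframe,window",
    # where the provider posts the selected content items (JSON-LD):
    "content_item_return_url": "https://lms.example.com/item-return",
    "data": "opaque-state-echoed-back-by-the-provider",
}
```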

LTI 2.0 is also nearing completion. History was made when Sakai and John Tibbett’s LMS connected! And we also got to see Chuck’s fresh new Coursera tattoo!

CASA: the distributed home of Learning Apps

Interesting work being done by UCLA. From what I understand, it offers a distributed, trusted discovery network of learning apps and resources: a federated network of directories. It is associated with the idea of app stores, but its academic roots and its distributed model somewhat collide with the traditional idea of a ‘store’. So I am a bit curious to see how that will unfold…

Next stop: Salt Lake City

The next Quarterly will be hosted by Instructure, in Salt Lake City. Sounds exciting!

Back from Learning Impact 2013

IMS Learning Impact at the Marriott, San Diego

Don’t let the greyish photo fool you: Learning Impact 2013 was sunny, outside and inside. I usually have a great time attending the IMS Quarterlies (where the working group I am part of meets), but this was actually my first Learning Impact (the IMS annual conference), and it was a blast.

First and foremost, it was so much fun meeting old friends, and making new ones (I didn’t think I’d spend an evening chatting over beers about philosophy – en français –, thanks Jeff!). After so many face-to-face meetings, I have really grown a sense of belonging to that community. But it was not all fun, friends and great people; we were there for a purpose: technology and learning, and how we can make an impact on education. Rob Abel, CEO of IMS, mentioned that we are living through a revolution. Revolution, I know, what a washed-out word, bastardized by years of marketing. But there is no way to deny it: as in many industries, the Internet is a tsunami shattering the very core assumptions of education – the 200+ year-old paradigm of the one-size-fits-all teacher/expert and the classroom model – and it is unclear what the new reality will be when the water recedes. It is a tale of opportunity (the wannabe Googles of e-learning) and survival, and quite exciting to be a part of, actually!

So a few highlights of the conference:

LTI is the buzzword: it is funny how the smallest spec, the least ‘architected’ one – the Basic LTI launch – is the buzzword and the standard spec everybody is aware of. A simple answer to a common problem: there is a recipe for success. LTI 2.0 builds on that success and is much more architected. Will that give it the legs to build a rich, plug-and-play, interoperable ecosystem of learning resources and tools? Or will its apparent complexity mean that LTI 1.x (rebranded Basic LTI) remains the standard that is actually widely adopted, because it is mostly ‘enough’? I sure hope for the former.

Role of the teacher: we hear things like ‘teachers do not scale’, ‘building a student-centric experience’, ‘the broken lecture model’. Can technology be used to scale education up to an Internet-scale audience? Or to improve the learner experience in a more blended mentor/learner hierarchy? The answer is probably not either one, but both at the same time.

Learning Analytics: Big (and small!) Data continues to be a significant potential disruptor. Everybody knows it, but I feel it is still unclear how it will actually be used. We like to think of predictive analytics, with the success of Netflix as guidance. However, learning is not buying; the final outcome is not easy to evaluate. What constitutes good learning? Even good grades are probably not enough (even if they are the most obvious way) to measure it. I can see how learning maps coupled with learning analytics could help users understand their path through the learning landscape they are working on.

QTI Works: I like the assertive tone! Yes, QTI works :) I still wear the scars of implementing QTI 1.2 years ago, but I feel the time is ripe to take a dive into the APIP and QTI 2.1 world again: QTIWorks (the delivery engine) and Uniqurate (the authoring platform) might be all I need to reboot.

LMS CEOs Panel: leave all political correctness at the door, thanks! I was amazed. It all started smoothly, but it did not take long for Instructure’s CEO Josh Coates to give the anthill a kick. I could not believe my ears :) Not always constructive, but surely entertaining! In the end, though, consensus grew around the fact that LMSes should interoperate (what a surprise at an IMS conference). I loved that they were challenged by Zach from Measured Progress to support QTI 2.1 by next year’s conference. Blank stares, then ‘we are big fans of QTI’, ‘oh yes, we already export QTI something’… They sure agreed they would support it, while probably not having a glimpse of what it entails (and it entails a lot; you don’t write a QTI 2.1 APIP assessment engine that easily). Here is a pledge that I bet will quickly be forgotten!

And I could go on and on; it was a rich 3 days. I took a bunch of notes and will try to reflect a bit more in depth in later posts, for example on the excellent keynotes by Dr. Zhao, on education paradigms, and Steve Kingler, on the remarkable work done at Western Governors University.

A last word to thank my employer, Cengage Learning, for allowing me to carry on participating in the standardization work under way at IMS.

Game of Life in Creation Platform

Time to play a bit with Python and KL… You probably know Python. But what is KL? Kernel Language, of course :) It is a strongly typed language, compiled (at run time, though), dedicated to writing high-performance operations with easy parallelism abstractions. The idea is that you write most of your code (the user interface) in Python and let the performance-critical parts execute in KL…

Who is this for? Well, mostly TDs (the folks who do the coding in studios), as Creation Platform is meant to be a tool to build tools, in a DCC-agnostic way (Digital Content Creation: think Maya/Softimage). In addition to offering the Creation Platform SDK, the Fabric Engine team also proposes a set of rather astonishing modules built on top of it. All that to say: you should really go check those out at Fabric Engine. And those guys are from Montreal; there are some serious skills in this city :) Thanks to Softimage/Avid, Autodesk and all the game studios for creating such a fertile ecosystem.

So, back to my experiment: I gave the platform a try by taking a Game of Life cellular automaton, in 3D, for a spin. The rules are similar to the regular 2D ones, although I must say I did not get the nice emergent patterns you usually see in the 2D examples (i.e. the gliders). From an implementation standpoint, I did not achieve as much parallelism as I wanted, as KL was missing an atomic add operation at the time. But it nevertheless worked quite fine. The code is on GitHub; it might require some tweaking if you run it against a newer version of the platform: github repo.
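
The original KL code is in the repo above, but for the curious, here is an equivalent of the 3D rules in plain Python/NumPy. The birth/survival counts here are one arbitrary choice among many; 3D rule sets are a zoo of their own:

```python
import numpy as np
from itertools import product

def life3d_step(grid, birth=(5,), survive=(4, 5)):
    """One generation of a 3D Game of Life with periodic boundaries."""
    neighbors = np.zeros_like(grid)
    # Sum the 26 face-, edge- and corner-adjacent neighbors of every cell.
    for offset in product((-1, 0, 1), repeat=3):
        if offset == (0, 0, 0):
            continue
        neighbors += np.roll(grid, offset, axis=(0, 1, 2))
    born = (grid == 0) & np.isin(neighbors, birth)
    survives = (grid == 1) & np.isin(neighbors, survive)
    return (born | survives).astype(grid.dtype)

# 10% random soup in a 16^3 box, run for a few generations:
rng = np.random.default_rng(0)
grid = (rng.random((16, 16, 16)) < 0.1).astype(np.int64)
for _ in range(10):
    grid = life3d_step(grid)
print(grid.sum(), "live cells")
```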


Bye Bye 2012 Video

Finally done: the 2012 Bye Bye video. It is my main excuse for not getting too rusty with Softimage, Premiere and After Effects… This is a wrap-up of our best family and friends pictures and videos, mixed with a trance track (Dash Berlin feat. Emma Hewitt – Waiting).

Technically, since I have limited time (and resources: my PC was churning out 1 frame every 6 minutes!), I wanted to go with a simple concept, in terms of modelling (limited skills!), animation and rendering alike. And even then, I had to trim it down a bit.

So this is a simple model built and animated in Softimage. There is ICE in there! The spectrum bars, which (yes) are supposed to follow the music: I used my SpectrumToWave plugin described in prior posts. ICE is also used for the counter: it is actually 3 particles, each one assigned a value between 0 and 9 depending on the current time. That value is then used to pick the instance shape for that particle (there are 10 objects, each one representing one digit). The particles also track the positions of 3 nulls bound to the surface of the memory pod.
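
The digit-picking logic boils down to a couple of lines; here is the idea in Python (the ICE graph does the same with math nodes):

```python
# Each of the 3 counter particles picks one of the 10 digit meshes
# as its instance shape, based on the value to display.
value = 365  # e.g. driven by the current frame/time
digits = [(value // 10 ** i) % 10 for i in range(3)]
print(digits)  # [5, 6, 3] -> units, tens, hundreds
```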

The fake, somewhat depth-of-field effect comes from the Magic Bullet Looks filter applied in the Premiere sequence. I used different looks for all the videos. Magic Bullet rocks!

Any questions? Post here! Happy new year!