Confoo: what I learned today (1)

I’m attending the Confoo conference this year. As with all these conferences, there is a lot of information flowing, so this is my self-assigned exercise to remember, at the end of the day, the nice things I learned. So what did I learn today?

I started with a non-tech discussion about freelancing. I’ve never freelanced, but it seems everybody else is doing it, so that 101 presentation was great. Some random thoughts out of it: hourly rate = yearly rate / 1900 * 1.25, to account for the overhead of insurance, retirement, … Have a strong relationship with the head hunter, but avoid exclusivity. A corporation can pay you via dividends, salary or a blend, and protects you. The cardinal sin for a staffing agency: sending your resume to a client without letting you know first. The presenter, Martin Handfield (stratweb.ca), seemed like a trustworthy guy; at least I can imagine he got quite a few extra resumes out of that session :)
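That rate formula works out like this (a tiny sketch; the 1900 billable hours and the 1.25 overhead multiplier are the presenter’s figures, and the $95k target salary is just an example of mine):

```python
def hourly_rate(target_yearly, billable_hours=1900, overhead=1.25):
    """Freelance rule of thumb: spread the target yearly income over the
    billable hours, then pad for insurance, retirement, downtime, etc."""
    return target_yearly / billable_hours * overhead

print(hourly_rate(95_000))  # 62.5 -> so roughly $60-65/hour
```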

Then, a bit of a refresher on Java EE 7. It’s been a while since I left the EJB shores, so it was fun to hear terms like container-managed transactions again; it brought back old memories. I like the way it’s going, using annotations like TransactionScoped and dependency injection rather than a monolithic bean structure. JAX-RS Client might actually be something we could use in place of using HttpClient directly. It seems to have some potential to simplify client code communicating with REST endpoints, in the same way Jersey simplified exposing REST endpoints.

The following session on CSS by Rachel Andrew was great: multi-column layout, flexbox and grid models, CSS regions and exclusions (go here for more). I could not resist trying the multi-column layout directly on our LTI Test app:

TwoColLayout

After lunch, a trip back to the basics of Linked Data and RDF. Now that I have become acquainted with JSON-LD and REST API design, going back to the core was a good refresher. I finally understood why SPARQL, the RDF query language, is positioned as a standardized alternative to one-off APIs for exposing data, allowing the cross-pollination of content that API silos otherwise make very difficult (the open data movement).

And to wrap up the day, a lively presentation on Responsive Design. Is it only about media queries? In a way, the web was meant to be responsive from early on (liquid flow). So nothing new under the sun in this talk, but a few snippets of good sense: the viewport approach is coarse (basically swapping x alternate designs based on x viewport sizes), and a more progressive adaptability is preferable (almost having each component be responsive on its own, with softer transitions). Also, responsive design is not the only answer: sometimes a distinct mobile experience (most likely delivered as an app) might actually be a better answer, as it can natively and fully leverage the device features (GPS/camera/offline/…). An app does not have to replicate the website’s features, but it needs to offer a distinctly mobile experience.

That’s it for the day! Back at it tomorrow…

Bye Bye 2013

The ‘Bye Bye’ family souvenir video is the only excuse I have to keep myself from getting rusty in 3D and video (as my main work is taking a fair bit of my time).

This year, no particles, no wave spectrum. I decided to ramp up a bit with Modo, as I might just stick with Softimage 2014 from now on and invest my hobby money in Modo instead (it seems a more complete toolbox for the hobbyist, and the amount of change I was getting in Softimage for the price of the subscription did not seem to make it really worth it).

2013 sketch

So here, it’s a simple parallax trick around 2013. I sketched the 2013 and noted the various elevations (0-5) and whether a block top was flat, descending or rising. I wrote a little Python script to convert those values into grid data that I could feed directly into Softimage ICE using the String to Array node. That gave the ICE tree a final destination for where to grow the points making each cube. The grid itself was done using ICE geometry, replicating a simple cube and tagging the proper material depending on whether it is a ‘black’ square or a ‘white’ one. Finally, the whole thing was exported as MDD and loaded into Modo for the lighting, shading and camera work, and of course the final rendering. I must say, all in all, it worked quite smoothly! If you are interested, here is the Softimage scene: byebye2013.scn.
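The conversion script was along these lines (a reconstruction from memory; the per-cell encoding and the helper names are hypothetical, the point being that ICE’s String to Array node just needs one delimited string):

```python
# Hypothetical sketch: flatten the sketched grid of (elevation, slope) cells
# into the single comma-delimited string an ICE "String to Array" node parses.
SLOPES = {"flat": 0, "rising": 1, "descending": -1}

def grid_to_ice_string(rows):
    """rows: list of rows, each a list of (elevation, slope_name) tuples."""
    values = []
    for row in rows:
        for elevation, slope in row:
            values.append(str(elevation))
            values.append(str(SLOPES[slope]))
    return ",".join(values)

grid = [[(0, "flat"), (3, "rising")],
        [(5, "descending"), (1, "flat")]]
print(grid_to_ice_string(grid))  # "0,0,3,1,5,-1,1,0"
```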

So what to say about Modo? I’ve tried to love it for so long, but it never clicked like Softimage did. I’m never quite sure what to expect, really, whereas Softimage almost always makes sense. I’m saddened that Softimage seems to be stalling. I’ll sure keep using it as much as possible, as I have a deep investment in it, but as a hobbyist I’d much rather fork over $400 to Modo every other year for a substantial update than $900 a year for a minor one (and now it would cost me $3k, I believe!). Hopefully Modo 801 will bring me joy!

On this, happy new year!


And back from the IMS Quarterly Meeting in San Francisco

Looks like I’m repeating myself. So I’m back again from an IMS get-together; this time it was a Quarterly meeting, a lower-key, lower-attendance gathering where we actually do work on the specs!

It was a rich set of days, hosted by Oracle on its campus. I’ve never stayed that long in the ‘valley’ before. Driving a few miles on the 101, you truly realize how this is the epicenter of the tech world, and the new Klondike, as a friend was pointing out (and the Klondike eventually ran out of gold…). But back to the highlights of those few days:

Common Cartridge

It is nice to see a little bit of interest picking up again. Common Cartridge is indeed the overshadowed sibling of the popular Learning Tools Interoperability specification. I see that as a reflection of this new cloud-based world: you no longer bring your content into someone’s system; you bring their users to your system instead! However, I think Common Cartridge might still have a card to play in the Open Education Resource (OER) movement, allowing content to flow freely between Learning Management Systems and be re-hashed. I also think that in a world where everything goes to the cloud, the Common Cartridge can be a ‘downloadable’ course, maybe competing with the edupub initiative.

Learning Tools Interoperability

Great progress there too! My current main focus (and a key need for my employer) is being addressed by two additional specifications added to the LTI ecosystem:

  • content item launch: a dedicated interaction between the tool consumer and the tool provider around the creation of resources in the tool consumer. It defines how a resource (in my current interest an LTI link, but it can also be an HTML fragment, an image, …) can be added without the user having to enter cumbersome parameters: the user is sent to the tool provider to select or create a resource and, on completion, the proper information for creating the resource is returned to the tool consumer.
  • outcome service: we agreed to act rapidly to provide a first version of a richer outcome service (‘outcome’ being so overloaded these days, we almost decided to rename it the grade service :) ). It would essentially decouple the LTI link from the gradebook item (aka line item). This allows for heterogeneous experiences, like courseware creating multiple line items in the host gradebook, without requiring a link to have been pre-created in the tool consumer.

LTI 2.0 is also nearing completion. History was made when Sakai and John Tibbett’s LMS connected! And we also got to see Chuck’s fresh new Coursera tattoo!

CASA: the distributed home of Learning Apps

Interesting work being done by UCLA. From what I understand, it offers a distributed, trusted discovery network of learning apps and resources, a federated network of directories. It is associated with the idea of stores, but its academic roots and its distributed model kind of collide with the traditional idea of a ‘store’. So I’m a bit curious to see how that will unfold…

Next stop: Salt Lake City

The next Quarterly will be hosted by Instructure, in Salt Lake City. Sounds exciting!

Back from Learning Impact 2013

IMS Learning Impact at Marriott, San Diego

Don’t let the greyish photo fool you: Learning Impact 2013 was sunny, outside and inside. I usually have a great time attending the IMS Quarterlies (where the working group I am part of meets), but this was actually my first Learning Impact (the IMS annual conference) and it was a blast.

First and foremost, it was so much fun meeting old friends and making new ones (didn’t think I’d spend an evening chatting around beers about philosophy, in French; thanks Jeff!). After so many face-to-face meetings, I have really grown a sense of belonging to that community. But it was not all fun, friends and great people; we were there for a purpose: technology and learning, and how we can make an impact in education. Rob Abel, CEO of IMS, mentioned that we were living through a revolution. Revolution, I know, what a washed-out word, bastardized by years of marketing. But there is no way to deny it: as in many industries, the Internet is a tsunami shattering the very core assumptions of education, the 200-year-old paradigm of the one-size-fits-all teacher/expert and the classroom model, and it is unclear what the new reality will be when the water recedes. It is a tale of opportunity (the wannabe Googles of e-learning) and survival, and quite exciting to be a part of, actually!

So a few highlights of the conference:

LTI is the buzzword: it is funny how the smallest, least ‘architected’ spec, the Basic LTI Launch, is the buzzword and the standards spec everybody is aware of. A simple answer to a common problem: there is a recipe for success. LTI 2.0 builds on that success and is much more architected. Will that give it the legs to build a rich, plug-and-play, interoperable ecosystem of learning resources and tools? Or does its apparent complexity mean LTI 1.x (rebranded Basic LTI) will remain the standard that actually gets widely adopted, because it is mostly ‘enough’? I sure do hope for the former.

Role of the teacher: we hear things like ‘teachers do not scale’, ‘building a student-centric experience’, ‘the broken lecture model’. Can technology be used to scale education up to an internet-scale audience? Or to improve the learner experience in a more blended mentor/learner hierarchy? The answer is probably not in either one, but in both at the same time.

Learning Analytics: Big (and small!) Data continues to be a significant potential disruptor. Everybody knows it, but I feel it is still unclear how it will finally be used. We like to think of predictive analytics and look to the success of Netflix for guidance. However, learning is not buying; the final outcome is not easy to evaluate. What constitutes good learning? Even good grades are probably not a sufficient (if the most obvious) way to measure it. I can see how learning maps coupled with learning analytics could help a user understand her path through the learning landscape she is working on.

QTI Works: I like the assertive tone! Yes, QTI works :) I still wear the scars of implementing QTI 1.2 years ago, but I feel the time is ripe to take a dive into the APIP and QTI 2.1 world again: QTIWorks (the delivery engine) and Uniqurate (the authoring platform) might be all I need to reboot.

LMS CEOs Panel: leave all political correctness at the door, thanks! I was amazed. It all started smoothly, but it did not take long for Instructure’s CEO Josh Coates to give the anthill a kick. I could not believe my ears :) Not always constructive, but surely entertaining! In the end, consensus grew around the fact that LMSes should inter-operate (what a surprise at an IMS conference). I loved that they were challenged by Zach from Measured Progress to support QTI 2.1 by next year’s conference. Blank stares, then ‘We are big fans of QTI’, ‘Oh yes, we already export QTI something’… They sure agreed they would support it, while probably not having a glimpse of what it entails (and it entails a lot; you don’t write a QTI 2.1 APIP assessment engine that easily). Here is a pledge that I bet will quickly be forgotten!

And I could go on and on; it was a rich three days. I took a bunch of notes and will try to reflect a bit more in depth in later posts, for example on the excellent keynotes by Dr. Zhao, on education paradigms, and Steve Kingler, on the remarkable work done at Western Governors University.

A last word to thank my employer, Cengage Learning, for allowing me to carry on participating in the standardization work under way at IMS.

Game of Life in Creation Platform

Time to play a bit with Python and KL… You probably know Python. But what is KL? Kernel Language, of course :) It is a strongly typed language, compiled at run time, dedicated to writing high-performance operations and built with easy parallelism abstractions. The idea is that you write most of your code (the user interface) in Python and let the performance-critical parts execute in KL…

Who is this for? Well, mostly TDs (the folks who do the coding in studios), as Creation Platform is meant to be a tool to build tools! And in a DCC-agnostic way (Digital Content Creation: think Maya/Softimage). In addition to offering the Creation Platform SDK, the Fabric Engine team also offers a set of rather astonishing modules built on top of it. All that to say you should really go check those out: Fabric Engine. And those guys are from Montreal; there are some serious skills in this city :) Thanks Softimage/Avid, Autodesk and all the game studios for creating such a fertile ecosystem.

So back to my experiment: I gave the platform a try by taking a Game of Life cellular automaton, in 3D, for a spin. The rules are similar to the regular 2D ones, although I must say I did not get the nice emerging patterns you usually see in the 2D examples (i.e. the gliders). From an implementation standpoint, I did not achieve as much parallelism as I wanted, as KL was then missing an atomic add operation. But it nevertheless worked just fine. The code is on GitHub; it might require some tweaking if you run it against a newer version of the platform: github repo.
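The update rule can be sketched in Python like this (a sketch only: the real implementation is in KL, and the survive/birth neighbor counts here are illustrative, since the classic 2D (2,3)/(3) rules don’t translate directly to 26 neighbors):

```python
from itertools import product
from collections import Counter

def life3d_step(live, survive=(4, 5), birth=(5,)):
    """One generation of a 3D Game of Life over a sparse set of live cells."""
    # Count, for every cell adjacent to a live cell, its live neighbors.
    counts = Counter(
        (x + dx, y + dy, z + dz)
        for (x, y, z) in live
        for dx, dy, dz in product((-1, 0, 1), repeat=3)
        if (dx, dy, dz) != (0, 0, 0)
    )
    return {cell for cell, n in counts.items()
            if (cell in live and n in survive)
            or (cell not in live and n in birth)}

# A lone cell has no neighbors and dies out:
print(life3d_step({(0, 0, 0)}))  # set()
```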


Bye Bye 2012 Video

Finally done: the 2012 Bye Bye video. That is my main excuse not to get too rusty using Softimage, Premiere and After Effects… So this is a wrap-up of our best family and friends pictures and videos, mixed with a trance track (Dash Berlin feat. Emma Hewitt – Waiting).

Technically, since I have limited time (and resources: my PC was churning out 1 frame every 6 minutes!), I wanted to go with a simple concept in terms of modelling (limited skills!), animation and rendering. And even then I had to trim it down a bit.

So this is a simple model built and animated in Softimage. There is ICE in there! The spectrum bars, which yes are supposed to follow the music, use my ‘Wave To Spectrum’ plugin described in a prior post. ICE is also used for the counter: it is actually 3 particles, each one assigned a value between 0 and 9 depending on the current time. This is then used to pick the instance shape for that particle (there are 10 objects, each one representing one digit). They also track the positions of 3 nulls bound to the surface of the memory pod.
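The per-particle digit logic boils down to something like this (a Python sketch of the idea only; in the scene it is done with ICE nodes, not code, and the function name is mine):

```python
def counter_digits(value, num_particles=3):
    """Give each of the counter particles one digit (0-9) of the current
    value; the digit then selects the instance shape (one of the 10 digit
    objects) for that particle."""
    return [(value // 10 ** (num_particles - 1 - i)) % 10
            for i in range(num_particles)]

print(counter_digits(207))  # [2, 0, 7]
```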

The fake, somewhat depth-of-field effect comes from the Magic Bullet Looks filter applied in the Premiere sequence. I used different looks for the different videos. Magic Bullet rocks!

Any questions? Post here! Happy new year!

Custom ICE Node: Wave To Spectrum

It’s something I had wanted to achieve for quite a while, and I finally found the right mix to get it to work: pick a WAV file, get its spectrum decomposition (using a Fast Fourier Transform library named KISS FFT), and finally wrap the whole thing into a custom ICE node so I could use it to drive particles.
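In Python terms, the node computes something like the following per frame (a sketch using numpy’s FFT in place of KISS FFT; the real node is C++ inside an ICE custom node):

```python
import numpy as np

def spectrum(samples, window=1024):
    """Magnitude spectrum of one window of mono samples, normalized to
    0..1: roughly what the C++ node computes per frame with KISS FFT."""
    mags = np.abs(np.fft.rfft(samples[:window]))
    return mags / mags.max()

# A pure sine with 64 cycles over the window peaks exactly in bin 64:
t = np.arange(1024)
mags = spectrum(np.sin(2 * np.pi * 64 * t / 1024))
print(int(np.argmax(mags)))  # 64
```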

Here are a few trials: one using typical spectrum bars, and the other using the spectrum information to drive strands and deformation.

I’ve also made a tutorial on how to use the plug-in:

And finally, here is a screenshot of the ICE tree (there is a smaller ICE tree before this one that sets each point’s frequency, a normalized value between 0 and 1 covering the full frequency range of the audio file):

wavespectrum_commented

Scene is available for download: spectrumBars scene.

I am not an expert in C++ nor in Softimage development, so there is surely room for improvement! The code is available just for that :) You can browse and fork it on GitHub: https://github.com/claudevervoort-perso/xsi-audio-spectrum. The DLL is also available on GitHub.

My 1st MOOC! Learning Analytics 2012

Today was the kick-off of the Learning Analytics Massive Open Online Course 2012 (http://lak12.mooc.ca/). I feel it is going to be a great experience on at least two accounts: last semester I took the Stanford Machine Learning class (I highly recommend it, btw!), so now that I am acquainted with the tools, I can see how they could be put to use in the context of analytics in education (about which I know very little).

The other very interesting aspect for me is the pedagogical approach. The Stanford course was a great course, but I would say a very traditional one, with centrally hosted and authored content and a very linear flow. That’s something I’ve always been used to and can easily comprehend. Here, in a MOOC, from what I understand, it all seems reversed: there is no central repository; there are facilitators more than instructors; and there is a network of content, alive and fed by participants, ready to harvest, re-hash and re-share the richness of the web. I’m curious to see how it will unfold!

Particle Filter experiment (no ICE here!)

Sounds like another post on ICE and Softimage particles, but not at all! This semester I took 2 of the online courses offered by Stanford (Introduction to AI and Machine Learning); it was a great experience that I’m glad will continue next semester. If you have not yet, check out those courses: ai-class.com and ml-class.org.


particle filter in action

In the AI course, Sebastian Thrun introduced what is seemingly a key algorithm in robotics: the particle filter, which is used to derive the position of a robot from subsequent observations of its environment. I decided to give it a try, also a good excuse to play with HTML5 canvases. Although the core of the logic truly lies in a bit more than 10 lines of code, it ended up taking quite a bit more to build the environment around it. What I actually found the most challenging was building the correlation between a particle’s observation and the robot’s, so that a proper weight could be derived.
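The core loop really is tiny; here is a generic Python sketch (weight_fn and move_fn stand in for the environment-specific observation matching and motion model, which is where the real work was):

```python
import random

def particle_filter_step(particles, weight_fn, move_fn):
    """One update cycle: weight each particle by how well its predicted
    observation matches the robot's actual one, resample proportionally
    to those weights, then apply the motion model to the survivors."""
    weights = [weight_fn(p) for p in particles]
    resampled = random.choices(particles, weights=weights, k=len(particles))
    return [move_fn(p) for p in resampled]

# Degenerate example: only particle 1 matches the observation, so after
# resampling every particle is a copy of it.
out = particle_filter_step([0, 1], lambda p: 1.0 if p == 1 else 0.0, lambda p: p)
print(out)  # [1, 1]
```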

Anyway, enough said. You can have a look at the experiment here: particle filter.

You can also follow some discussion on it on reddit.

Glue Mesh to Jello

Here is a new little ICE video/how-to I posted on Vimeo. Lagoa is all about point cloud simulation, and while it can use a mesh to mold the emitted points, it does not directly deform the mesh.

So I made this little compound, which binds the mesh back to the point cloud simulation. Since it only takes the closest point, it is a bit crude and requires a rather high density in the point cloud simulation. I’m thinking it could be improved by taking more than one point and, why not, applying barycentric coordinates to those.
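The closest-point binding amounts to something like this (a Python sketch of the idea only; the actual compound is built from ICE nodes, and the function names are mine):

```python
def bind_closest(mesh_points, cloud_points):
    """For each mesh vertex, record the index of the closest cloud point
    (the crude one-point binding the compound uses)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(cloud_points)),
                key=lambda i: dist2(v, cloud_points[i]))
            for v in mesh_points]

def deform(mesh_points, binding, rest_cloud, sim_cloud):
    """Move each vertex by the offset of its bound cloud point."""
    return [tuple(c + (s - r)
                  for c, r, s in zip(vert, rest_cloud[i], sim_cloud[i]))
            for vert, i in zip(mesh_points, binding)]
```

Averaging over several nearby points (with barycentric weights) would replace the single index per vertex with a small weighted list, smoothing out the crudeness mentioned above.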

Anyway, you can see it here:

I’ve also attached the compound: GlueToPointCloud.compound