The JLeRN Experiment

JISC's Learning Registry Node Experiment at Mimas


Wider Potential Summary

The Learning Registry – Wider Potential – Executive Summary

The Wider Potential report (121109 JLeRN Wider Potential Report – DK) introduced here presents personal but hopefully useful observations on the Learning Registry as a work in progress.  The report considers the wider potential and affordances of the Learning Registry as an architecture or conceptual approach, looking beyond its core educational technology focus to the broader information environment. Inevitably, this report only scratches the surface and should therefore be weighed against the more detailed inputs (covering both technology and practice) to the JISC Learning Registry Node (JLeRN) project.

1 – Status

In organizational terms, the LR is only a time-limited project and therefore its potential has to be realized beyond those boundaries; however, it is already valuable that practitioners intuitively recognise both the relevance of its approach and the possibilities it may open up.

In solution terms, the LR is only a configuration of IT plumbing, of machines talking to machines (that’s all it set out to be); therefore it is down to the community (self-selecting) to build both local and larger scale services and to enhance the range of interfaces (APIs) and the scope of value-added applications. Such engagement might be more readily achieved if the potential of two key aspects were to be more clearly demonstrated – how paradata might work at scale given the nature of the underlying identifiers and how the networked node model could serve a range of practical use cases.

In technology terms, the LR made some choices at a moment in time (e.g. to use the Couch noSQL database). Whilst these are far from out-dated, it may be that it is the approach that is more significant going forward than the specific architecture, tools or code.

In terms of vision, the LR appears to have catalysed encouraging levels of interest in three constituencies – policy makers (especially in US K-12 education), learning technologists and, perhaps most refreshingly, parties responsible for delivery (such as the Liverpool and Newcastle University teams engaged in the UK JLeRN investigation).  Whilst enthusiasms and movements are dangerous, it is not insignificant when an approach captures imaginations in an embattled landscape such as Learning Resource description, discovery and reuse.

2 – Crossroads

At this point in the story, as initial project funding comes to an end, we are however faced with a familiar ‘investment’ dilemma (whether about effort or funds) concerning the tensions between ‘forever beta’ rapid innovation (technology and tools are always moving on) and the challenges of embedding in the community and of reliable productisation. Survival in this technology ‘gene pool’ is a complex proposition. The key sustainability questions are:

  • In product terms, does the LR add enough to the underlying technology stack to establish a necessary and valued role? Or is the LR simply an exemplification of what can be achieved using increasingly malleable lower level components?
  • In terms of engagement, is the educational audience too narrow and the post-project governance too uncertain to elicit the ongoing commitment needed to deliver the power of the LR approach? Or is there a wider value in the LR approach, potentially involving other domains, that would bring critical mass and a sustainable trajectory?

3 – Recommendations

It is perhaps unhelpful in the current funding climate to propose further work. It seems certain, however, that neither the benefits to the learning community (with the possible exception of specific US K-12 targets) nor any wider/generic potential of the LR approach can be achieved without further proof of concept around its potentially groundbreaking features – notably harnessing paradata and offering a node-based data aggregation model.

To market the current ‘solution’ to the wider learning community or to other information domains, such as libraries and estates in HE, without clarity in those areas would likely be fruitless. Technologists would justifiably resort to the underlying toolset (notably the power of CouchDB) and practitioners would be left, as we are, to imagine outcomes on a half-promise.

I would suggest that low-budget, rapidly executed proof-of-concept experiments could be devised around paradata and the node model that would get us to a more tangible decision point regarding value to the teaching and learning community and wider affordances. These need to take place urgently, before we lose track of achievements to date.

Ironically, talking of wider potential, the library community and particularly aggregations such as the Mimas-managed Copac service have use cases and data to support both investigations. Furthermore, the JISC Activity Data programme and at least one very large ongoing European project (the Open Discovery Service, running to 2015) may be poised to address mutually interesting requirements in this space.

Finally, as a sanity check, it may be of value to undertake an analysis of the space addressed by the LR based on the California Digital Library micro-services approach, in order to determine the necessary working parts and how they might be sourced, without the presumption of building and maintaining a single end-to-end system.

Rounding up the JLeRN Experiment

We have reached the end of the JLeRN experiment, at least the end of the current JISC funding for Mimas to set up a node on the Learning Registry and examine its potential. One part of the process of rounding up ideas and outputs generated through the experiment was a meeting of those who had engaged with it, held on 22nd October. This post provides pointers to two sets of resources associated with that meeting: the blog posts that attendees wrote after the event to summarise what had been discussed; but first, a quick round-up (mostly from Sarah Currier) of posts that describe what the attendees had been doing with the Learning Registry.

The Story So Far? by David Kay
A summary of some of the “headline ‘findings’” of a series of conversations that David has been having in an attempt to pin down the nature of the Learning Registry and its potential.

Understanding and using the Learning Registry: Pgogy Tools for searching and submitting by Pat Lockley.
Pat has been very involved in the Learning Registry from the start. This blog post gives you access to all four of his open source tools that work with the Learning Registry, set up to work with our JLeRN node. They are very easy to install (two of them plug very easily into Chrome) and try out, plus Pat has made some brief videos demonstrating how they work. The tools use the Learning Registry to enhance Google (and other) searching, and support easy submission of metadata and paradata to a node. There is also a sample search you can use with the Chrome tools that should show you how it works pretty quickly.

Taster: A soon-to-be released ENGrich Learning Registry Case Study for JLeRN by the ENGrich Project.
The ENGrich project are working on a case study on why and how they have implemented a Learning Registry node to enhance access to high-quality visual resources (images, videos, Flash animations, etc.) as OERs for engineering education at university level. Their work has involved gathering paradata from academics and students; this taster gives you an overview. A really interesting use case. Please pass this one on to anyone working with engineering resources too!

Taster: some ideas for a use case on paradata and accessibility opportunities by Terry McAndrew
Terry is an accessibility expert from JISC TechDis; he came to our first Hackday and got us thinking about how the Learning Registry model for capturing and sharing paradata might help people share information about how accessible resources are. We commissioned him to write up a use case for this; look here to see his initial thoughts, and add any of your own.

How widely useful is the Learning Registry?: A draft report on the broader context by David Kay
The JLeRN team have been keeping half an eye from the start on the potential affordances the Learning Registry might offer the information landscape outwith educational technology: what about library circulation data, activity data and activity streams, linked data, the Semantic Web, research data management? And what if we are missing a trick; maybe there are already solutions in other communities? So we commissioned a Broader Context Report from David Kay at Sero Consulting. This is his first draft; we’re looking for feedback, questions and ideas.

I reported on some information about the current status of the Learning Registry in the US and some other related initiatives (slideshare), based on information Steve Midgley had sent me in reply to an email.

Summaries/reflections from after the meeting

Registrying by Pat Lockley
Pat’s summary of his presentation on how the Learning Registry affects the interactions between developers, service managers and users.

Experimenting with the Learning Registry by Amber Thomas
A round-up of the meeting as a whole, pulling out a couple of the significant issues raised: the extent to which the Learning Registry is a network, and the way some really tricky problems have been pushed out of scope… but are still there. Some really useful comments on this post as well, notably from Steve Midgley on increasing adoption of the Learning Registry in the US.

JLeRN Experiment Final Meeting by Lorna M Campbell
Another account of the meeting, summarising the uses made of the Learning Registry by projects represented there, mentioning some subject areas where using the Learning Registry to store curriculum-mapping information may prove useful, and noting questions from Owen Stephens about alternative approaches to the Learning Registry.

At the end of the JLeRN experiment by Phil Barker
My summary of the meeting, covering the issues of whether the Learning Registry is a network or just isolated nodes (not much of a network, yet), whether it works as software (it seems to) and why use it and not some alternative (it's too early to tell).

Watch this space for more information and case studies from some of the people mentioned above, and for the official JLeRN final report.

The Story So Far?

I’ve recently had a number of interesting and informative conversations in my attempts to pin down the nature of the Learning Registry (LR) and its potential significance within and beyond its US origins. I thought it might be interesting to summarise some of the headline ‘findings’ ahead of the Mimas JLeRN workshop on 22nd October, all of which are of course subject to learning more on the day. These are best presented as a sort of historical narrative …

1) The LR project originated in response to particular needs identified in US education and training, particularly relating to surfacing, characterizing and sharing / reusing learning resources.

2) Whilst those challenges may have had particular priority in the minds of DoD and DoE stakeholders, they were symptomatic of the much-discussed difficulty of describing learning resources in ways that enable potential uses and users (from course designers to learners) elsewhere. To put it bluntly, after all these years there is no consensus that enables us to homogenize or harmonize learning resource metadata – it's like a muddy pond containing fish, plant life, shopping trolleys, industrial byproducts, children swimming, others fishing… it feels like a random 'mess', not an ecosystem.

3) Paradata (i.e. usage data with context) may be a vital part of the jigsaw, allowing resources to become increasingly 'well-described' on the basis of their utilization (who, where, when, how, etc.); however, it is only a format, and is subject to the quality of the data itself, particularly regarding the use (or not) of consistent and persistent identifiers to 'link' paradata.

4) Paradata has the potential to be more powerful at scale (introducing statistical reliability and exposing the long tail of resources and of usage) and may therefore benefit from the ability to network datasets across the community (subject, national, international).

5) The LR project developed an ‘approach’ (model, architecture…) that addresses the key issues of mess (see 2), context (see 3) and scale (see 4). In my simple terms, the LR approach proposes that the mess is addressed by a flexible approach to data attributes (anything goes), context is evidenced by paradata, and scale is enabled by orchestration between networked nodes.
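The three-part response in (5) can be pictured with a small sketch. The envelope below follows the general shape of the LR resource document model (an administrative wrapper around a free-form `resource_data` payload, keyed on a resource locator); exact field names and version strings vary between node releases, so treat this as illustrative rather than definitive.

```python
# Sketch of an LR-style document envelope: "mess" is handled by letting
# resource_data be any JSON; "context" arrives as paradata payloads;
# "scale" comes from nodes exchanging these envelopes with each other.

def make_envelope(resource_locator, resource_data, submitter="JLeRN example"):
    """Wrap arbitrary metadata or paradata in an LR-style envelope."""
    return {
        "doc_type": "resource_data",
        "doc_version": "0.23.0",            # illustrative spec version
        "resource_data_type": "paradata",
        "active": True,
        "identity": {"submitter": submitter, "submitter_type": "agent"},
        "resource_locator": resource_locator,  # the identifier everything hinges on
        "resource_data": resource_data,        # "anything goes": no fixed schema
    }

envelope = make_envelope(
    "http://example.org/resource/1",
    {"activity": {"verb": "recommends"}},
)
```

The point of the sketch is the last two fields: the envelope imposes almost nothing on the payload, which is exactly the "anything goes" response to the metadata mess.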

6) In organizational and temporal terms, LR is only a project and therefore the potential has yet to be realized; however, it is already valuable that practitioners intuitively recognise both the relevance of this response and the possibilities it may open up.

7) In solution terms, LR is only a configuration of plumbing, of machines talking to machines (that’s all it set out to be) and therefore it is emphatically down to the community (who’s that?) to build both local and larger scale services, interfaces and applications on top.

8) In technology terms, LR made some choices at a moment in time (e.g. to use the Couch noSQL database); whilst these are far from out-dated, it may be that it is the approach that is more significant going forward than the architecture or the code.

9) The potential of learning paradata raises issues about the divergence (or is it convergence, glass half full?) of approaches to usage data / activity streams; in one dimension this data represents part of the personal learning record ("I did this"), whilst 'at the other end of the triple' (thanks, Phil) it is about the resource ("It was used in this way"); that sounds exciting until we dig deeper into issues of storage / retrieval, privacy / access and more.

10) At this point in the history (Is that an end point? I think it was Simon Schama who asserted that the French Revolution is still ongoing), we are faced with a familiar dilemma concerning the investment relationship between rapid innovation (technology and tools are always moving on), embedding in the community and reliable productisation … Is the target audience too narrow and the governance too uncertain to deliver the power of the LR approach? Is there a wider value in the LR approach that would bring critical mass and a sustainable trajectory?

Taster: a soon-to-be released ENGrich Learning Registry Case Study for JLeRN

One of the tasks we’ve commissioned from our informal task group is a case study from Liverpool University on why and how they have implemented a Learning Registry node to enhance access to high-quality visual resources (images, videos, Flash animations, etc.) for use in engineering education. The ENGrich team will be presenting their case study at the JLeRN Final Workshop on Monday 22nd October and completing it based on feedback and discussion there; but we’d be interested in any questions or points you have as well prior to the final version being published here.

So, here is your taster:

Ins and Outs of the Learning Registry: An ENGrich Case Study for JLeRN – draft

By the ENGrich Team at Liverpool University. This work is licensed by The University of Liverpool under a CC BY-SA 3.0 Unported License.

A brief summary of the ENGrich project

ENGrich, a project based at the University of Liverpool, is designing and developing a customised search engine for visual media relevant to engineering education. Using Google Custom Search (with applied filters such as tags, file-types and sites/domains) as a primary search engine for images, videos, presentations and Flash™ movies, the project pulls and pushes corresponding metadata/paradata from and to the Learning Registry (LR). A user interface is also being developed to enable those engaging with the site (principally students and academics) to add further data relating to particular resources and to how they are being used. This information is published to the LR, which is then employed to help order subsequent searches.

ENGrich process flow

This section will detail the process flow of the proposed service. Effectively, the LR plays a central role in ‘engriching’ visual engineering content, above and beyond the basic data returned by Google Custom Search.

ENGrich Process Flow Diagram

Working with documentation

This section will cover how we worked through the available LR guides and documentation, from the very basic methods (publish, slice etc) to the more customized data services (extract).

The list includes:

Learning Registry in 20 Minutes or Less

Learning Registry – Quick Reference Guide

Learning Registry – Slicing

Paradata in 20 Minutes or Less

Modeling Paradata and Assertions as Activities

Paradata Cookbook

LR Data Services

Setting up a Learning Registry node at the University of Liverpool

This section will explain the rationale behind the decision to set up an institutional LR node (common node) at the University of Liverpool, and challenges and issues we faced while doing it. This node is to be utilized by the project, as well as providing a means of highlighting the wider potential of the LR to other centres / services across the University.

Summer students identifying learning resources

This section will describe how the project employed undergraduate engineering students over the summer of 2012 to classify visual media available online that are relevant to University of Liverpool engineering modules. The project relies on their experience as engineering students to provide insights into how to identify resources that will aid future students. 25,000+ records were linked to the appropriate modules and are ready to be published in the University LR node using the paradata templates described below.

The students blogged every week on their tasks and progress.

Paradata statement templates

This section will report on how we went about creating the required PHP templates to publish the students’ data into our Learning Registry node. We have constructed a set of contextualized usage paradata statements for different types of actions (e.g. recommended, aligned, not aligned) and so far have published a couple of test documents to our University LR node.
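The project's actual templates are PHP and have not been published here; purely as an illustration of the kind of template involved, here is a Python analogue. The verbs mirror the actions mentioned above; all other field names are invented for the sketch.

```python
# Hypothetical template turning a student's classification of a resource
# into a paradata statement, loosely in activity-stream shape.

ALLOWED_VERBS = {"recommended", "aligned", "not aligned"}

def paradata_statement(actor, verb, resource_url, module_code):
    """Build one contextualised usage statement for a resource/module pair."""
    if verb not in ALLOWED_VERBS:
        raise ValueError(f"unsupported verb: {verb}")
    return {
        "activity": {
            "actor": {"objectType": "student", "description": [actor]},
            "verb": {
                "action": verb,
                # the context ties the action to a specific taught module
                "context": {"objectType": "module", "id": module_code},
            },
            "object": resource_url,
        }
    }

stmt = paradata_statement(
    "eng-summer-student", "aligned",
    "http://example.org/video/42", "ENGG101",
)
```

A template like this is what lets 25,000+ largely uniform records be published mechanically: only the actor, verb, resource and module vary between statements.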

Slice, harvest, obtain methods and data services (extract)

This section will summarise our experience with the different methods the LR provides for accessing data. We report on using the slice, obtain, harvest and extract methods, explaining why we chose one over the others.
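For readers unfamiliar with these methods: slice, obtain, harvest and extract are all HTTP GET services exposed by a node. As a sketch only (the endpoint path follows the public LR documentation, but parameter names should be checked against your node's version, and the node URL below is invented), a client might build a slice query like this:

```python
from urllib.parse import urlencode

def slice_url(node, any_tags=None, identity=None, resumption_token=None):
    """Build a /slice query URL; slice filters documents by tag, identity or date."""
    params = {}
    if any_tags:
        params["any_tags"] = ",".join(any_tags)
    if identity:
        params["identity"] = identity
    if resumption_token:  # continue a paged result set
        params["resumption_token"] = resumption_token
    return node.rstrip("/") + "/slice?" + urlencode(params)

# hypothetical node URL, for illustration only
url = slice_url("http://lrnode.example.org", any_tags=["engineering"])
```

Harvest and obtain take a similar query-string form but return whole documents rather than a filtered subset, which is one reason a project might prefer one method over another.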

University of Liverpool student portal – using data directly from LR

This section will demonstrate how an iLIKE 'widget' (a portable version of ENGrich visual search) could be implemented within the University of Liverpool student portal. The iLIKE prototype gets a unique listing of identifiable University of Liverpool engineering modules (titles and codes) directly from the Learning Registry as the user types into the text field, and then fetches the latest resources relating to that module from the LR. It dynamically generates the thumbnails corresponding to the resources.
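The typeahead behaviour described above can be sketched as a simple filter over the module listing returned from the LR. The module data below is invented for illustration; the real widget would fetch the listing from the node.

```python
# Toy sketch of the iLIKE typeahead: match the user's input against
# module titles and codes as they type.

def typeahead(modules, query):
    """Return modules whose title or code contains the query, case-insensitively."""
    q = query.lower()
    return [
        m for m in modules
        if q in m["title"].lower() or q in m["code"].lower()
    ]

modules = [
    {"code": "ENGG101", "title": "Engineering Mechanics"},  # invented examples
    {"code": "AERO212", "title": "Aerodynamics"},
]
hits = typeahead(modules, "aero")
```

Once a module is selected, the widget's second step (fetching the latest resources for that module) is just another query against the node, keyed on the module code.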

Working with Mimas

This section will talk about how we collaborated with the JLeRN team at Mimas to resolve some initial bugs in the slice service; to draw on their experiences in setting up a new LR node at the University of Liverpool; to develop a set of customised extract data services; and also about the possibility of joining the LR nodes at Liverpool and Mimas using the LR distribute service.

Taster: some ideas for a use case on paradata and accessibility opportunities

Note from the JLeRN Team:

One of the tasks we've commissioned from our informal task group is a set of ideas, from JISC TechDis, around use cases for the potential of the Learning Registry and paradata in enhancing the accessibility of OERs. Terry McAndrew, the author, will be presenting at the JLeRN Final Workshop on Monday 22nd October and completing the piece based on feedback and discussion there; but we'd be interested in any questions or points you have as well, prior to the final version being published here. The taster below is Terry's initial thoughts; he'll be updating them between now and Monday.

So, here is your taster:

Paradata and Accessibility Opportunities – Draft

By Terry McAndrew, JISC TechDis.

The opportunities for the use of paradata to carry accessibility information about resources should not be missed. However, getting all the actors to produce and consume this information effectively clearly requires some thought if an implementation is to be realised. Here are some of the issues we need to consider.

Firstly, the ‘actors’ in these scenarios need to be aware of what is important to capture and share, to enable all potential participants to make the best of the learning and teaching resources available. Generally all actors need to understand that multiple formats for information are necessary to communicate information effectively – it’s just that many omit to produce multiple outputs when delivering to an audience.

These actors can be thought of as roles in the learning and teaching cycle: the Creator, Tutor, Student, Publisher, Curator, Librarian, Accessibility Specialist, Manager, and Technical Infrastructure Manager. An actor can have multiple roles.

If you are thinking all these actors will have to co-operate to use paradata in the same way, that would be a mistake. You would be right in thinking, though, that each should be aware of the others' perspectives.

The actors and their roles

Looking at how the data is modelled in the JSON representation it may be that these actors can be aggregated under ‘educator’ or ‘student’ with additional attributes for the roles they are undertaking at the time of the interaction with the resource. For now, let’s use actors in the looser scenario modelling sense.

Creator: Produces the original resource but may be unaware of accessibility needs, e.g. a diagram that lacks quality descriptive information for a visually impaired user (in effect it needs a 'radio'-quality description), or a video that needs a transcript for users with hearing difficulties – though the transcript provides more than accessibility: discovery, indexing and usually a better output if the speaker works from a script. The creator, however, may wish to declare the purpose or intention of the resource.

Tutor: Uses resources to teach but encounters many different student needs and learning styles; if only they could feed these experiences back into the network, to remind creators of the issues and of how other students with various needs resolved problems. Tutors need to understand that disabilities are by nature often discreet – students do not have their disability "tattooed on their forehead", as many systems would like, just to make it easy to assign the appropriate data. Tutors therefore need to be informed of the nature of the disability.

Student: Receives a resource in a context often defined by the tutor, but also as an independent browsing learner. Students with a disability need to be able to manipulate a resource independently, i.e. to engage without tutor/demonstrator assistance if possible (for the mobility impaired, the last thing they want is for someone to move the mouse for them). How, as the consumer, do they feed this experience back into the paradata? Is it gathered by the tutor or given directly – and do they have to declare the nature of the disability (a glossary of learning issues) to validate it? Students with a support statement will usually be more active and highlight issues they could have done without. Students without support tend to be more passive, having lower expectations. Paradata could be liberating!

Publisher: Promotes resource into a community, probably using a generic description from interpretation of a complex one. If the paradata was tightly coupled to the resource such that users can easily access the usage information – including the accessibility issues – then the value of the resource may be more quickly assessed. The publisher may be the repository cataloguer.

Curator: Someone who collects interesting information to forward to others with similar interests. They tend to highlight the 'back-story' first, as often demonstrated on curation platforms, but the same resource will appear in different collections as it has different aspects to be highlighted. How useful would it be to collect these views (and why they qualify each other) and feed them into paradata?

LAMs: Library (and museum) staff are taking a greater interest in social metadata – content contributed by users – as it can assist with discovery and description, both augmenting and recontextualizing the content and metadata created by LAMs. They need to be aware of the opportunity to collect accessibility issues and to promote the benefits of online alternatives where necessary.

Accessibility specialist: Translates information into advice and technologies. They would be ideally placed to bind or interpret common issues, and may occasionally work with staff developers.

Managers of professionals: Need to recognise that engagement with paradata is part of professional scholarship, and to allow time and resources to be made available for using it.

Infrastructure managers: Enable the network and facilities to be utilised most effectively – if paradata recommends a resource in conjunction with a technology solution, then they can make anticipatory provision based on this evidence.

So, how can we realise this interaction?

In order to better understand the work going on with paradata, I chose to inform myself, as many do, with the explanatory videos released by the LR team. Technical text needs context in order to understand it better and get an impression of where it is heading – which elements are 'concrete' and which are still under debate. From these I understood that 'slice' is a practice which could enable the sub-selection of relevant information. I took this to mean that tools which only need a smaller proportion of the data could display or capture only the necessary information and therefore reduce confusion – in effect, so that no given actor role need be scared of the complexity the system can cope with. For those of you without a nervous disposition…

On LR data services

I was struck by an example given for what was assumed to be a typical use – "state recommends a resource from the Khan Academy for teaching…" – and became concerned that the state may not have considered the accessibility issues this raises. Would they be one of the actor roles I envisaged above? Would they be the ones to utilise accessibility paradata when making these recommendations? What tools would they need?

Not too long ago in the subject network (<10 years) we had many discussions about metadata standards for cataloguing and interoperability – the dreaded 'i'-word. Commitment eventually withered away because the actors who needed to use the standards would not or could not significantly engage with them, and the supporting infrastructure to aggregate the services through each subject discipline needed consistent management buy-in, which in turn needed to be better informed technically to appreciate the benefits of networked solutions. It was clear that significant work would have to be put into evangelising good practice with interoperable resource databases to, in effect, establish their usage as a professional digital literacy. Meanwhile, the community was shifting to less complex solutions – social networks and folksonomies.

Metadata itself maps to a process already embedded in the business of cataloguing resources – it is another way to capture and manage classification information, so it is a small step to conceptualise the same processes operating in another landscape: what was done in libraries and understood by the lay person (who could physically observe books being catalogued and ordered) moved online. Those using online catalogues to search their library could visualise the translation even without technical knowledge, and retained confidence in the process. Those more aware of the power of metadata could federate their searches through other agent software, and in effect be in many libraries at once. Paradata is a little more difficult to communicate as a concept; it's the library of the conversation about all those books from the library, and more besides.

However, I don’t have to be a developer to get to grips with this – I just have to be able to understand some potential for development.

There is a problem when new technical solutions face an attention challenge: it is another practice to learn, and the community that could benefit needs convincing there is a significant gain to be had. These potential users may have shifted their attention to establishing their own online communities of interest through other free innovations which may be more satisfying to use, e.g. curation tools and social bookmarking. These may be far from perfect, but they can qualify information about resources and help one find allies with similar teaching problems. This shrinks the gap between the problem and the solution, making paradata capture and exchange less of a vital solution. Now that we have professional CPD in everything from mailing lists to Twitter, other innovative solutions need to offer similar community benefits – finding similar others, and so on. Does paradata networking promote this opportunity? If there are less efficient but far more effective alternatives, then these will win out when effectiveness is judged from an individual's perspective rather than an organisational one.

I followed the technical information to something called twitcurl – is this a potential method of harvesting the collection of tweets around a specific hashtag, or of quickly poking information into a paradata network from one's own tweets? Could a hashtag and a shortened URI, coupled with account registration (to capture the Twitter ID and map it to one's actor roles), be enough to record an interaction with a resource? This may be appealing to educators and students in the same way that using #fb posts status updates.
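twitcurl itself is a C++ library for the Twitter API, so the harvesting side is plausible. Purely to visualise the second half of the idea, here is a hypothetical mapping from a captured tweet to a paradata-style statement; the registration lookup and all field names are invented for the sketch.

```python
# Hypothetical: turn a tweet mentioning a resource URI into a paradata
# statement, using a pre-registered mapping from Twitter ID to actor role.

def tweet_to_paradata(tweet_text, twitter_id, role_lookup):
    """Extract the first URI from the tweet and wrap it as an activity."""
    resource = next(
        (w for w in tweet_text.split() if w.startswith("http")), None
    )
    return {
        "activity": {
            "actor": {"id": twitter_id,
                      "role": role_lookup.get(twitter_id, "unknown")},
            "verb": "commented",
            "object": resource,
            "content": tweet_text,   # keep the full text as the comment
        }
    }

stmt = tweet_to_paradata(
    "#jlern http://ex.org/r/7 worked well with screen reader",
    "@someone",
    {"@someone": "tutor"},   # the registration step described above
)
```

Even this toy version shows the dependency the post keeps returning to: the statement is only as linkable as the URI in the tweet and the identity behind the Twitter account.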

Another alternative may be a simple widget to gather reflection by professionals – something that could be available on all tutors' and students' devices to share comments on an interaction with a resource as it happens – though a context tag for accessibility info would probably help. At this point a controlled vocabulary for disabilities may be required, e.g. V.I., blind, deaf, hearing impaired, print impaired, mobility impaired, etc. We are drifting back to metadata standards again. Oh dear!

It appears the accessibility content could be expressed as an ‘activity stream’ when sharing specific actions on what people did with an object, but the accessibility information is not simple transactional information – ‘teacher posted a transcript’ is not ‘teacher found it necessary to post a transcript to engage hearing impaired students’.
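The distinction can be made concrete: the two statements below carry the same transaction, but only the second carries the accessibility context the paragraph above argues for. Field names are illustrative, not a proposed standard.

```python
# A bare activity-stream style statement: the transaction only.
bare = {"actor": "teacher", "verb": "posted", "object": "transcript"}

# The same event with the accessibility rationale attached.
contextual = {
    "actor": {"role": "tutor"},
    "verb": "posted",
    "object": "transcript",
    "related": "http://example.org/video/9",   # hypothetical resource
    # the fields that turn a transaction into accessibility evidence:
    "reason": "needed to engage hearing impaired students",
    "disability": "hearing impaired",
}
```

Any tool consuming such statements could filter on the extra fields; without them, the transcript posting is indistinguishable from routine housekeeping.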

A resource could have accessibility paradata information utilised in parts of the JISC curriculum lifecycle – design, (re)develop, deliver, support, evaluate. Being able to collate evaluations from students and teaching staff could feed into the next design loop – being able to share a response to the evaluations would promote developments to other potential users (you asked for ……., we did ……..). Being able to filter this by disability may help refine the flow of information between actors and show how a resource is responding to feedback, but a culture change would have to occur to make this commonplace.

Our TechDis Accessibility Passport approach has been available for designers and developers to reflect on features that may present issues, so these can be addressed before a resource is released to the outside world. It works, but it has not attracted large numbers of users: they feel no need to create a passport for their resources to be used elsewhere, when the risk is someone else's.

‘One of the key challenges… is how to engage students, peers and tutors in creative and mutually beneficial dialogue characterised by innovative and reflective critical thinking – both in face-to-face, distance and work-based flexible learning contexts.’

Professor Peter Chatterton, Critical Friend to the JISC Transforming Curriculum Delivery through Technology programme

Should we look at whether accessibility information could be captured by proxies? Colleagues, or attendees at presentations, could also see accessibility issues and opportunities – where do they pass comment? How could that be useful as paradata on the same topic? Where do you record your thoughts against this presentation? Does it have an event ID?

Video output online often suffers from poor-quality comment streams – could reflections on a video's teaching potential be recorded through the nodes and then harvested against the video by some other merged output?

Finally, I tried to think about how it might be expressed, just so I could visualise what tools might be available to create and use this paradata. Perhaps:

"activity": {
  "actor": "email address",
  "verb": "recommends",
  "disability": "dyslexia",
  "object": "http://resourceurl/resourceX/",
  "content": "individual recommends Xerte LO for dyslexic users to organise and plan"
}

Some form of identifier would help collate suitable resources and the experiences with them. Does the assertion have to be repeated for each type of disability? That seems wasteful and probably burdensome (repeated entries need repeated human input?).

Using appropriate attributes to capture roles, information could be coupled with type of disability and the purpose for the use of the resource. Here’s a modified example:


"actor": {
  "role": "teacher",
  "attributes": ["elementary", "math"],
  "context": ["site", "NSDL"]
},
"action": {"action-verb": "viewed", "count": 2200, "context": ["detail page"]},
"time-frame": {"start-date": "2011-05-01", "end-date": "2011-05-31"},
"object": {"uri": "xyz", "attributes": ["volcanoes"]}


The flexibility exists in the system; it is not my role to specify exactly how this should be defined, but I can see that it can be. I believe it is worth the attention needed to capture an 'accessibility context' for any experience.
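As a thought experiment, here is a short Python sketch of such an assertion. The field names, and the idea of a list-valued disability attribute (which would avoid repeating a whole entry per disability), are my own assumptions rather than any agreed Learning Registry schema.

```python
# Sketch only: one way to fold an 'accessibility context' into the actor/action
# paradata shape shown above. Field names and disability terms are illustrative
# assumptions, not part of an agreed schema.

def accessibility_paradata(role, verb, uri, disabilities, comment):
    """Build a paradata-style assertion carrying an accessibility context."""
    return {
        "actor": {"role": role},
        "action": {"action-verb": verb},
        "object": {"uri": uri},
        # a list lets one assertion cover several disability contexts,
        # avoiding repeated entries and repeated human input
        "accessibility": {"disabilities": disabilities, "comment": comment},
    }

p = accessibility_paradata(
    "teacher", "recommends", "http://resourceurl/resourceX/",
    ["dyslexia", "print impaired"],
    "Xerte LO helps dyslexic users organise and plan")
print(p["accessibility"]["disabilities"])  # -> ['dyslexia', 'print impaired']
```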

We have to think about how to engage all the actor "roles" with the process and visualise some scenarios. Perhaps pre-prepared widgets could be available to all class participants as an experience review (a quick feedback form) or class activity review. At primary school it could be part of the levelling processes recorded against curriculum performance, perhaps a tool for special needs support workers. In Higher Education it may work best as a reflective tool used by the student to reveal the issues each resource creates. Teaching staff (tutors and demonstrators) may add other observations against the resource, but they would need to be aware of the disabilities in the class and how they manifest themselves – input into the accessibility comment may benefit from some professional qualification.

I am less of a fan of the 'build it and they will come' approach than I used to be: there are too many competitors for attention. To show its value, paradata needs appealing tools that let every sector slice the information they want, in the way that suits them best. However, each of these interface tools should also recognise the value of capturing accessibility information at the same time, for the benefit of every potential user.

Understanding and using the Learning Registry: Pgogy tools for searching and submitting

I started with the Learning Registry as part of Plugfest 1, which was just over a year ago in Washington D.C. (see my CETIS blog post reporting on it here).

Pgogy logo

Stuff about Pat Lockley’s tools noted here, plus other thoughts and projects of his, are available on his Pgogy website

Part of what I think people don't get about the Learning Registry is that as the Internet became the Web, the Web then became a series of distinct destinations – Facebook, Google, Twitter, etc. The Learning Registry exists, but it doesn't have a front page or a "Tweet this" button; it exists the way HTTP exists – you can build with the Learning Registry, but you might never see it.

So that is poorly explained, but that is an innate part of the problem – and it’s a problem I sought to rectify at the first Plugfest. If I can’t take people to the Learning Registry, then I should take the Learning Registry to people. How? …

A Chrome tool: the Learning Registry enhancing Google searching

Google is the biggest store of links in the world (I shy away from the ‘R’ word), and so it is where people go to search. Some of the links returned via a Google search will also be links stored in a Learning Registry node – so you can, via one of my Learning Registry tools, check to see if it knows anything about a website.

So you can do a Google search, click on a button, and really quickly (the Learning Registry technology is state of the art) you can see which pages the Learning Registry knows about. Sometimes this knowledge will be just keywords, authors and descriptions, but could also be educational levels and the type of interactivity supported by the page.
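Behind the scenes, a plugin like this simply asks a node whether it holds documents for a given URL. A rough sketch in Python follows; the node address is a placeholder, and the exact service path and parameter names should be checked against your node's own API documentation.

```python
# Sketch of what such a plugin does behind the scenes: build the query a client
# would issue to ask a Learning Registry node about one resource URL. The node
# address is a placeholder; the service path and parameter name ("obtain",
# "request_ID") should be verified against the node's API documentation.
from urllib.parse import urlencode

NODE = "http://alphanode.example.org"  # placeholder node address

def obtain_url(resource_url, node=NODE):
    """Build the URL a client would GET from the node's obtain service."""
    return node + "/obtain?" + urlencode({"request_ID": resource_url})

# The plugin would then fetch this URL and inspect the returned JSON
# for documents matching the page you searched for:
print(obtain_url("http://example.org/resource"))
```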

You can download the Chrome plugin here – and watch a short demo video below showing you how it works.

Note on trying this from Sarah Currier: Once you've installed this in Chrome, you can try it out by running any search in Jorum – nearly all of Jorum's OERs have metadata in the JLeRN node. You should see lots of small crosses (+) to click on next to your search results. If you want to try it in Google search, try a search for something you know is in Jorum – here's an idea in case you want a quick win: "SEA and good governance" governmentality – look for the little cross in your Google search results against a Jorum DSpace result, and note that the same resource appears in the search results from other repositories but without the cross, as they don't publish to the JLeRN node. If they all published to the node, along with usage data (paradata) about that resource, you'd be able to use one of the other tools to look at *all* the paradata for this resource, even when it is accessed in different places.

Paradata: information about how learning resources are used

As well as this information to describe webpages (in this case, metadata about learning resources), increasingly Learning Registry nodes are storing what is called paradata, which is information on how a resource is used. Imagine the same Google search as above, but this time you can see how popular resources are. So what would normally just be a page, now becomes a page used by 500 teachers in your subject area. Once a resource becomes used (and as long as someone tells the Learning Registry about it) other people can find this data and pick out the resources most suited to their needs.

So paradata, another great mystery to explain? Not really, it’s like seeing how often a book is cited, a link linked or a tweet retweeted. All that is different is the data on reuse is in a slightly different format, and is shared outside of the silos where it lives right now.

How can you share paradata for your resources? Well a lot of people use Google Analytics data, and a page visit (one that Google Analytics tracks) is paradata, it just needs some tweaks before it can become data in a Learning Registry node. How can you do this? …

Pliny: submit your Google Analytics data to the Learning Registry

You can use Pliny, another tool developed by me, to share your Google Analytics paradata. Pliny uses OAuth to sign you into Google Analytics, then accesses your analytics data and submits it to the Learning Registry for you (currently the tool at the above link submits to the JLeRN Alpha Node). It does all the hard work; all you need to do is click your mouse a few times.
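As a hypothetical sketch of the 'tweak' involved, the snippet below turns a pageview count into a paradata-style assertion. The field names echo the NSDL-style example earlier in this post; the documents Pliny actually submits may differ.

```python
# Hypothetical sketch: wrapping a Google Analytics pageview count as a
# paradata-style activity assertion. Field names echo the NSDL-style example
# earlier in this post, not Pliny's actual output format.

def pageviews_to_paradata(page_url, views, start_date, end_date):
    """Wrap a pageview count for a date range as an activity assertion."""
    return {
        "activity": {
            "actor": {"objectType": "site visitors"},
            "verb": {"action": "viewed",
                     "measure": {"count": views},
                     "date": [start_date, end_date]},
            "object": page_url,
        }
    }

doc = pageviews_to_paradata("http://example.org/resource", 2200,
                            "2011-05-01", "2011-05-31")
print(doc["activity"]["verb"]["measure"]["count"])  # -> 2200
```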

You can watch the short demo video below to see how Pliny works:

Ramanathan: submit metadata from your RSS feed to the Learning Registry

So we’ve looked at the benefits of the Learning Registry for teachers in terms of finding appropriate resources; how can people contribute? Well, Pliny allows paradata to be submitted, and you could use another tool I developed – Ramanathan - to take an RSS feed and use it to submit the metadata in a feed into a Learning Registry node.
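The kind of transformation involved can be sketched with the standard library alone: read items from an RSS feed and wrap each one as a document ready to submit to a node. The envelope fields here are a simplified assumption, not the full Learning Registry document specification, and the feed content is invented for illustration.

```python
# Sketch only: the kind of RSS-feed-to-document transformation a tool like
# Ramanathan performs. The envelope fields are a simplified assumption, not
# the full Learning Registry spec; the feed content is invented.
import xml.etree.ElementTree as ET

RSS = """<rss><channel>
  <item><title>Volcanoes LO</title><link>http://example.org/volcanoes</link></item>
</channel></rss>"""

def feed_to_documents(rss_text, submitter="my-repository"):
    """Turn each RSS item into a simplified resource-data document."""
    docs = []
    for item in ET.fromstring(rss_text).iter("item"):
        docs.append({
            "doc_type": "resource_data",
            "resource_locator": item.findtext("link"),
            "payload_placement": "inline",
            "resource_data": {"title": item.findtext("title")},
            "identity": {"submitter": submitter},
        })
    return docs

print(feed_to_documents(RSS)[0]["resource_locator"])  # -> http://example.org/volcanoes
```

Each resulting document would then be POSTed to the node's publish service, which is the part Ramanathan automates for you.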

You can watch a short demo video below to see how Ramanathan works:

In the same way that there isn't a single place teachers go to search, how do cataloguers work with such a decentralised model? Both Pliny and Ramanathan help data to be submitted, but there isn't yet an easy tool to remotely manage the metadata on your resources in a Learning Registry node. If you use either of these tools, you will be given permanent links to your documents on the Learning Registry node – but these are only for viewing, not for deleting or revising.

Learning Registry Browser: find Learning Registry paradata for a web page

I've developed a second browser plugin for Chrome – the Learning Registry Browser – which will tell you if the page you are on has data in the Learning Registry, show you the documents that have been submitted, and when they were submitted. Remember that because other people might be using your resources, there may be documents not submitted by you; this is one of the benefits of the Learning Registry – tracking others' use of your resources outside your own silo.

You can watch this brief demo video to see how it works (and see the note above in red, which will give you a sample search to try):

I appreciate this is a lot of new things, but I hope you'll find these tools helpful, and that you are encouraged – and curious enough – to consider submitting data to the Learning Registry.

How widely useful is the Learning Registry?

Mimas has been working with a variety of UK collaborators on the JISC-funded JLeRN Experiment, an exploration of the Learning Registry (LR) as an application, as an architecture and as a software toolset. There have been a number of valuable developments, including LR nodes, and a range of use cases has been identified, and some tested, in areas relating to learning resource discovery, use and associated paradata.

The JLeRN project completes at the end of October 2012 and Mimas is authoring a Final Report looking at appetite, demand and sectoral capacity with some use cases and case studies.

To supplement this core work, I’ve been asked to produce a complementary report examining the wider potential affordances of The Learning Registry as an architecture or conceptual approach, looking beyond the core educational technology focus to the broader information environment and the associated JISC community.

This will be a short report – but there will be plenty of room to highlight suggestions and observations. So you may wish to comment here on any of my THREE questions – or to add another! Throughout, bear in mind that we’re interested in the potential of the LR and its reuse as an approach to a problem space, as an application, as an architecture or simply as a bunch of reusable Open Source software.

ONE – What functions should the LR provide to be useful? For example …

  • Storage, indexing and retrieval at scale
  • Distributed / federated data store management
  • Authentication, authorisation and other security features
  • Provision of open APIs / service interfaces for ingest, publishing and discovery
  • Range of ingest / submission / output data formats
  • Reporting and visualisation
  • Support for annotation, rating and other user contributions

TWO – In what applications or domains could the LR be useful? For example …

  • Library activity data
  • Research data
  • Repository
  • Profiles of people, places
  • General analytics data
  • Heterogeneous / specialised metadata

THREE – Are there alternative/complementary approaches to the same requirement/problem space? For example …

  • Repositories such as DSpace, Fedora, etc
  • California Digital Library Micro-services
  • Apache Jackrabbit based document stores
  • Enterprise data warehouse applications
  • Other approaches using combinations of NoSQL databases and indexing

Any comments via this blog will help shape a presentation of ideas at the JLeRN project closing event on 22 October and my public report due by 31 October. Regardless of whether you wish to comment, thanks for your interest!

PS – My report and anything else coming out of this will be published under a Creative Commons CC0 licence.
