Using Adaptive Assessments in Learnosity

This page provides an overview of how adaptive assessments can be set up in the Items API, so that the Items presented to the students are chosen dynamically based on their performance in the test.

Four types of dynamic Activities are available:

  • Item Branching — Items are dynamically selected based on pre-configured paths and branches depending on the student's score,
  • Item Adaptive — Items are dynamically selected, one by one, based on psychometrically calibrated difficulty levels,
  • Testlet Adaptive — similar to Item Adaptive, but instead of selecting each individual Item in sequence, small groups of Items ('testlets') are selected at a time and fetched together as a batch. Each testlet may be a simple Activity, with a static set of Items and a nominated difficulty rating, or a dynamic Item Adaptive Activity, or
  • Self-hosted Adaptive — this type allows you to implement a fully custom algorithm for selecting each Item, based on the student's inputs, your own Item metadata and bespoke logic.

Each type of dynamic assessment is introduced below. For demos and the full technical details on configuring and initializing dynamic assessments, see the demos and technical reference section.

Note: the Item Branching/Adaptive modes are not compatible with Dynamic Content.

Item Branching

In branching Activities, Items are presented to the student according to pre-configured paths. The decision of which Item (or branch containing multiple Items) to present next can be based on the student's performance, such as the correctness of the last answer, or the overall score so far. Each branch can rejoin the main path, or not, depending on the assessment requirements.

You can create the path configuration as part of an Items API activity definition. Any Item from your Learnosity Item bank can be used in an Item branching Activity.

The configuration consists of a sequence of steps. Each step either provides the reference of an Item to be presented, or describes a decision to be taken based on the student's performance.

Item steps provide the reference of an Item to present to the student. They also indicate which step to visit next, once the student has provided an answer. It is possible to express simple branches by providing two next steps, depending on whether the answer was correct or not.

Global score decision steps can select between two steps to visit next, depending on whether the student's score in the assessment so far is greater than or equal to, or less than, a given percentage of the currently achievable maximum score.
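For illustration, a branching configuration could take a shape like the sketch below. This is a minimal sketch only: the field names (steps, next_if_correct, score_decision and so on) are hypothetical, not the exact Items API schema, so consult the technical reference for the real option names.

    // Hypothetical branching configuration -- field names are illustrative,
    // not the exact Items API schema.
    const branchingActivity = {
      steps: [
        {
          // Item step: present an Item, then branch on answer correctness.
          id: "start",
          reference: "algebra_q1",
          next_if_correct: "harder",
          next_if_incorrect: "easier",
        },
        { id: "harder", reference: "algebra_q2_hard", next: "score_check" },
        // Both branches rejoin the main path at the decision step.
        { id: "easier", reference: "algebra_q2_easy", next: "score_check" },
        {
          // Global score decision step: compare the score so far against a
          // percentage of the currently achievable maximum score.
          id: "score_check",
          type: "score_decision",
          percentage: 60,
          next_if_greater_or_equal: "extension",
          next_if_less: "revision",
        },
      ],
    };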


Note: when designing the tree for Item branching, each Item can be used only once; Items cannot be re-used across steps or branches.

Item Adaptive

In adaptive Activities, Items (or groups of Items, known as testlets) are selected dynamically such that the difficulty of each Item is adapted to the estimated ability of the user. The first requirement for an Item-level adaptive Activity is a pool of Items in your Learnosity Item bank. The Items must be calibrated with a difficulty measure according to the Rasch model.

You can then create an Items API Activity definition for an Item or testlet adaptive Activity. The Activity definition allows you to tweak how the Item selection algorithm behaves. It also allows you to use Tags to restrict the Activity to use specific sets of Items from the Item bank.

The key to selecting Items during an Item adaptive session is the user's ability estimate. It is calculated after every response and is used to create a target difficulty for the selection of the next Item. You may specify an initial ability estimate to be used for the selection of the first Item in the session.

The primary method used for generating ability estimates is Maximum Likelihood Estimation (MLE). A limitation of this method is that it can only provide a finite estimate once the user has given a mixed response pattern, i.e. at least one correct and at least one incorrect response.

You can configure the Items API to use the Expected A Posteriori (EAP) method to estimate abilities while a mixed response pattern is not yet available at the beginning of a user's session. If EAP is not configured, the Items API will, in lieu of a finite ability estimate, increase or decrease the target difficulty for each Item in fixed increments until a mixed response pattern is available.
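As a sketch, an Item adaptive definition could carry the pool restriction, the initial ability estimate and the estimation method along the following lines. The option names here (initial_ability, estimation_method and so on) are assumptions for illustration, not the exact Items API schema.

    // Hypothetical Item adaptive configuration -- option names are
    // illustrative assumptions, not the exact Items API schema.
    const itemAdaptiveActivity = {
      adaptive: {
        type: "item",
        // Restrict the pool to Items carrying these Tags.
        tags: { include: [{ type: "subject", name: "math" }] },
        initial_ability: 0.0,       // Rasch-scale estimate for the first Item
        estimation_method: "EAP",   // used until a mixed response pattern
                                    // makes MLE viable
      },
    };

The Items API then selects each Item using the following steps: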

  1. Search for all available Items within the required Tags.
  2. Remove excluded Items, and Items from excluded Tags.
  3. Remove Items the user has already seen in the current session, as well as any Related Items defined in the Item bank.
  4. Calculate an ability estimate for the user, as above, to use as a target difficulty for the next Item.
  5. If an offset is defined, apply it to the target difficulty.
  6. From the target difficulty, use the specified difficulty tolerance to determine a range of difficulties from which to select an Item.
  7. Select an Item at random from within the desired range of difficulties.
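The sketch below restates those steps as self-contained pseudocode. The data shapes and helper logic are illustrative stand-ins for internal Items API behaviour, and Tag filtering (steps 1 and 2) is assumed to have produced the pool already.

    // Pseudocode sketch of the selection steps above; shapes and logic are
    // illustrative, not the Items API internals.
    interface PoolItem {
      reference: string;
      difficulty: number;   // Rasch-calibrated measure
      related: string[];    // Related Item references from the Item bank
    }

    function selectNextItem(
      pool: PoolItem[],           // already filtered by Tags (steps 1-2)
      seen: Set<string>,
      abilityEstimate: number,    // from MLE/EAP, as described above
      tolerance: number,
      offset = 0,
    ): PoolItem | undefined {
      // Step 3: drop Items already seen, and their Related Items.
      const candidates = pool.filter(
        (item) =>
          !seen.has(item.reference) &&
          !item.related.some((ref) => seen.has(ref)),
      );

      // Steps 4-6: derive a difficulty band around the (offset) estimate.
      const target = abilityEstimate + offset;
      const inBand = candidates.filter(
        (item) => Math.abs(item.difficulty - target) <= tolerance,
      );

      // Step 7: pick at random within the band.
      return inBand[Math.floor(Math.random() * inBand.length)];
    }

In the real Items API this selection happens server-side; the sketch only mirrors the documented order of operations.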

The session will end when one of the specified termination criteria is met.
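Termination criteria are part of the same Activity definition; a minimal sketch, again with assumed option names:

    // Hypothetical termination criteria -- names are illustrative.
    const termination = {
      min_items: 5,                // never end the session before 5 Items
      max_items: 20,               // hard ceiling on session length
      standard_error_below: 0.35,  // stop once the ability estimate is
                                   // precise enough
    };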

The Items API also allows you to embed uncalibrated seed Items during the session. These Items do not need to have difficulty measures, and the responses to these Items will have no effect on the user's ability estimate in the session.

This feature can be used to gather data on new Items so that they can be calibrated and used in future assessments.
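A seeding sketch, assuming a hypothetical seed_items option that interleaves uncalibrated references at fixed positions:

    // Hypothetical seed configuration -- option names are illustrative.
    const seeding = {
      seed_items: [
        { reference: "new_item_001", position: 4 },  // shown as the 4th Item
        { reference: "new_item_002", position: 9 },
      ],
      // Responses to seed Items are collected for later calibration and do
      // not feed the ability estimate.
    };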

Learnosity keeps track of how often Items are used in adaptive sessions, whether as a live Item or a seed Item.

Testlet Adaptive

In a testlet adaptive Activity, Items are selected in groups, called testlets. This makes it easy to combine multiple stand-alone Activities into a single, seamless session for the user.

The testlet Activities must first be created in your Learnosity Item bank, via either the Author Site or the Data API. The Activities must also be calibrated with a difficulty measure, for example the average difficulty of the Items within the testlet.

The configuration and selection algorithm are exactly the same as for an Item adaptive Activity, except that instead of individual Items, testlets are selected, based on the respective difficulties and Tags of the testlet Activities.

Item adaptive Activities can also be used as testlets inside a testlet adaptive Activity. In this scenario, the ability estimates used for selecting Items during the Item adaptive testlet take into account responses to all Items in the session, including those from previous testlets.
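A testlet adaptive definition could therefore mirror the Item adaptive sketch above, with pre-authored Activities in place of individual Items (option names still illustrative):

    // Hypothetical testlet adaptive configuration.
    const testletAdaptiveActivity = {
      adaptive: {
        type: "testlet",
        // Candidate testlets are pre-authored Activities, each carrying its
        // own difficulty measure (e.g. the mean difficulty of its Items).
        tags: { include: [{ type: "pool", name: "reading-testlets" }] },
        initial_ability: 0.0,
      },
    };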

Self-hosted Adaptive

Self-hosted adaptive provides a way to implement your own custom adaptive engine. As the student answers each Item in a self-hosted adaptive test, Learnosity passes the learner's result over to your adaptive engine, so you can dynamically select the next Item for the student to see. The Items are delivered from your Learnosity Item bank, so you still get the benefits of cross-platform reliability, scalability and accessibility that come with Learnosity.

To deliver a self-hosted adaptive test, you'll need to encapsulate your adaptive engine in a public web service that implements our self-hosted REST API (including the OAuth 2 security pattern).

As the student responds to Items, Learnosity will call your adaptive engine via the REST API to get the next Item(s) to show the student. Your self-hosted backend will be provided with information about the progress of the student's session so far, including:

  • Static information such as the user_id, activity_id and session_id of the student's test, and
  • Dynamic information such as the list of Items already presented and the student's score for each response.

In response to this request, the backend can provide one or more Item references that the Items API will fetch from the Item bank and present to the student.
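As an illustration, a minimal next-Items handler could look like the sketch below, including the custom state mechanism described next. The payload shapes, field names and toy selection logic are assumptions; the actual contract is defined by the self-hosted REST API reference, and OAuth 2 validation is omitted.

    // Sketch of a self-hosted adaptive engine endpoint. Payload shapes are
    // illustrative assumptions; see the self-hosted REST API reference for
    // the real contract. OAuth 2 validation is omitted.
    interface NextItemsRequest {
      user_id: string;
      activity_id: string;
      session_id: string;
      items: { reference: string; score: number; max_score: number }[];
      custom_state?: Record<string, unknown>;  // persisted by Learnosity
    }

    interface NextItemsResponse {
      items: string[];                         // references to present next
      custom_state?: Record<string, unknown>;  // rewritten state, if any
      done?: boolean;                          // true ends the session
    }

    function nextItems(req: NextItemsRequest): NextItemsResponse {
      const answered = req.items.length;
      if (answered >= 10) return { items: [], done: true };

      // Toy engine: step the difficulty level up after a full-score
      // response, down otherwise, carrying the level in custom state.
      const last = req.items[answered - 1];
      const level = Number(req.custom_state?.level ?? 3);
      const next =
        last && last.score === last.max_score
          ? Math.min(5, level + 1)
          : Math.max(1, level - 1);

      return {
        items: [`pool_level_${next}_item_${answered + 1}`],
        custom_state: { level: next },
      };
    }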

In the self-hosted adaptive flow, Learnosity can persist arbitrary custom state against each of your adaptive sessions. The custom state data is automatically passed to your engine during each request for new Items, and your response payload can rewrite this data on the fly. This keeps your engine lightweight and stateless, with minimal database dependencies, so that the student has a reliable, low-latency adaptive experience.

If you need to retrieve the custom state data outside of your adaptive algorithm, you can do so via the Data API's GET /sessions/reports/adaptive endpoint.

Reporting on adaptive sessions

Due to the nature of adaptive assessments, where students see content tailored to their estimated ability, sessions cannot generally be compared side by side, as they contain different sets of Items. As a result, not all of the reports from the Reports API are meaningful, or even functional.

Generating reports

It is always possible to use reports that target a single session, such as session-detail-by-item. Reports that only present aggregate-level data (overall scores), such as sessions-list or last-score-by-activity-by-user, are also safe to use.

Reports that do not cope well with adaptive sessions are those that show breakdowns of multiple sessions by Questions or Items, such as session-summary-by-question or last-score-by-item-by-user. These reports will render, but are unlikely to provide useful data; some others will fail to initialise altogether. This is the case for the response-analysis-by-item report, which requires all the sessions reported on to have exactly the same set of Items.

The most appropriate reports for comparing adaptive sessions from multiple users are those that rely on Item Tags, rather than on the Items themselves, such as last-score-by-tag-by-user. The item-scores-by-tag-by-user report (Learning Outcomes) could also prove very effective.
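For example, a Reports API request for the tag-based report could be configured along these lines. The report type is the one named above; the surrounding parameter names are indicative only, so check the Reports API reference for the exact options.

    // Sketch of a Reports API configuration comparing adaptive sessions by
    // Tag. Parameter names are indicative -- see the Reports API reference.
    const reportsRequest = {
      reports: [
        {
          id: "adaptive-tag-comparison",
          type: "last-score-by-tag-by-user",
          // Compare performance per curriculum Tag rather than per Item.
          users: [{ id: "student_1" }, { id: "student_2" }],
        },
      ],
    };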

Getting raw data

Raw data for adaptive sessions can be retrieved via Data API's GET /sessions/reports/adaptive endpoint. The endpoint returns various parameters and estimates specific to the Adaptive algorithm for the session, including the adaptive custom state data for self-hosted adaptive sessions.
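A minimal sketch of calling that endpoint follows. Data API calls are signed requests; the signing step (normally handled by a server-side Learnosity SDK) is assumed to have happened already, and the URL version segment is illustrative.

    // Sketch: fetch adaptive session data from the Data API. The security
    // packet is assumed to be signed elsewhere; the URL version segment is
    // illustrative.
    declare const signedSecurityPacket: Record<string, string>;

    const body = new URLSearchParams({
      security: JSON.stringify(signedSecurityPacket),
      request: JSON.stringify({ session_ids: ["session-uuid-1"] }),
      action: "get",
    });

    const res = await fetch(
      "https://data.learnosity.com/v1/sessions/reports/adaptive",
      { method: "POST", body },
    );
    const report = await res.json();  // adaptive parameters, estimates, state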

Caveats

  • Adaptive Assessments cannot use Dynamic Content Items or the "Try Again" feature.
  • If your app allows students to rewind to a previous Item, we advise saving the session whenever the page is resumed or refreshed (a minimal sketch follows this list): without a saved session, race conditions may occur and cause data loss.
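A minimal sketch of that advice, assuming itemsApp is the instance returned by LearnosityItems.init() and exposes the Items API save() method:

    // Sketch: persist the session whenever the page is shown again after a
    // resume or refresh, so a rewind cannot race ahead of saved state.
    declare const itemsApp: { save: () => void };

    window.addEventListener("pageshow", () => {
      itemsApp.save();
    });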

Demos

The demos take you through the available dynamic and adaptive Activities, and allow you to configure some of the parameters discussed above.

  • Go to Item Branching demo
  • Go to Item Adaptive demo
  • Go to Testlet Adaptive demo

For details on configuration options, see the technical reference documentation.
