Understanding Submission and Scoring

This article outlines how submission works in the Questions API (and, as a result, the Items API) and how responses saved in assessments are scored. For an article covering this from an authoring point of view, see the Author Guide article "Understanding Question Validation and Scoring types".

When an assessment session is saved or submitted with the Questions API, either by clicking the Save Answers or Submit Answers button or by calling the save() or submit() method, each Question's response in the assessment is stored in the Learnosity database. The getResponses() method can be used to retrieve the stored response objects.

Question example: Shorttext

getResponses() JSON example

    {
        "value": "Example",
        "characterCount": 7,
        "type": "string",
        "apiVersion": "v2.197.0",
        "revision": 1
    }

When the assessment is loaded in review mode, or when the learner clicks a Question's Check Answer button, the Question's stored response is evaluated and scored. Each Question type uses its own scoring logic.

Question example: Shorttext

In this example, the shorttext Question has four possible answers. If the learner answers the Question with "Los Angeles", the Question will be marked as correct and the learner will receive 1 point. To get the highest score of 4, the learner needs to answer "Washington, D.C.".

For the shorttext Question type, the stored response is evaluated against all the possible answers; if it matches one of them, the final score associated with that matched answer is returned. Developers can call the getScore() method to get the score object containing max_score and score values.
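The matching step above can be sketched as follows. This is a simplified illustration, not Learnosity's actual implementation, and the possibleAnswers structure and scores other than those named in the example are assumptions:

```javascript
// Simplified sketch of shorttext scoring: the stored response value is
// compared against each possible answer, and the matched answer's score
// is returned (0 when nothing matches).
function scoreShorttext(responseValue, possibleAnswers, maxScore) {
  const match = possibleAnswers.find((answer) => answer.value === responseValue);
  return {
    max_score: maxScore,
    score: match ? match.score : 0,
  };
}

// Hypothetical answer list mirroring the article's example: four possible
// answers, with "Washington, D.C." carrying the highest score.
const possibleAnswers = [
  { value: "Washington, D.C.", score: 4 },
  { value: "Washington DC", score: 3 },
  { value: "Washington", score: 2 },
  { value: "Los Angeles", score: 1 },
];
```

With this sketch, scoreShorttext("Los Angeles", possibleAnswers, 4) yields a score of 1, while "Washington, D.C." yields the full 4.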

getScore() JSON example

    {
        "max_score": 3,
        "score": 2
    }

Questions are either auto-scorable or manually scored. For those who want to manually score an otherwise auto-scorable Question, the validation object of the Question JSON provides an automarkable flag to support this.

For auto-scorable Questions, setting automarkable: false lets users supply the max_score value themselves. Setting automarkable: true reverts the Question to auto-scoring.

Question JSON Example - with automarkable flag

    {
        "options": [{
            "label": "[Choice A]",
            "value": "0"
        }],
        "stimulus": "<p>[This is the stem.]</p>",
        "type": "mcq",
        "validation": {
            "scoring_type": "exactMatch",
            "valid_response": {
                "score": 1,
                "value": []
            },
            "automarkable": false,
            "max_score": 2
        }
    }

When automarkable: false is set and max_score has a value, calling the getScore() method returns a score object with the max_score value and score: null.

getScore() JSON example

    {
        "max_score": 3,
        "score": null
    }
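The effect of the automarkable flag on the score object can be sketched like this. This is an assumption for illustration, not Learnosity's code, and getScoreSketch is a hypothetical helper:

```javascript
// Hypothetical sketch of how the automarkable flag shapes what getScore()
// returns for a Question.
function getScoreSketch(validation, autoScore) {
  if (validation.automarkable === false) {
    // Manually scored: the author-supplied max_score is reported, but the
    // score itself stays null until a scorer assigns one.
    return { max_score: validation.max_score, score: null };
  }
  // Auto-scorable: the engine's computed score is reported.
  return { max_score: validation.max_score, score: autoScore };
}
```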

Advanced usage: revision attribute

A revision value is stored in each response and is set by Questions API. Each time a learner changes their response and saves or submits their session, this number is incremented. This value is used on Learnosity servers to ensure that the latest saved response from the learner will be used to calculate their final scores. This helps prevent data loss when newer responses have already been saved for a session.

When a response is saved with a revision number lower than one saved previously, the save operation fails, since a newer response has already been provided.

When manually processing scores using the Data API session endpoints, specifically the template submission endpoint, you can choose to ignore the revision value inside a recorded response. The ignore_response_revisions flag allows you to disable the behavior described above. When set to true, all responses sent to this endpoint will be saved, regardless of the revision numbers on any existing saved responses.
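The revision guard described above can be sketched as follows. This is an assumption about the server-side logic, not Learnosity's actual code:

```javascript
// Simplified sketch of the revision check: a save is rejected when its
// revision is lower than the revision already stored for the response,
// unless ignore_response_revisions is set on the request.
function shouldAcceptSave(incomingRevision, storedRevision, ignoreResponseRevisions = false) {
  if (ignoreResponseRevisions) {
    // Revisions are disregarded: every incoming response is saved.
    return true;
  }
  return incomingRevision >= storedRevision;
}
```

For example, an incoming revision of 2 against a stored revision of 3 would be rejected, unless ignore_response_revisions is true.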
