What is Feedback Aide API?

Overview

Feedback Aide is an embeddable API for assessing a learner response against a rubric using Learnosity’s AI engine. Feedback Aide is a safe and efficient assistant for evaluating and providing quality feedback on essays and short responses.


Learnosity's AI engine generates an initial score for a learner response, given the question stem and a grading rubric, as well as actionable, written feedback. Learnosity's Feedback Aide API then helps graders easily edit the generated scores and feedback.

Screenshot 1: Feedback Aide showing the rubric / assessment criteria

 

How it works

Feedback Aide takes an essay 'stimulus', a scoring 'rubric', and a learner 'response'. It then evaluates the response to the stimulus against the rubric, identifies the appropriate rubric levels, and provides constructive feedback on the essay.

This can then be reviewed and modified by a human grader prior to being saved.

Once the feedback has been finalized it can then be presented to the learner in a simple read-only interface.

Feedback Aide can also handle short response questions, taking a short 'stimulus', a 'rubric' of correct answers, and a learner 'response'. It then evaluates the response against the rubric, scoring the learner and providing constructive feedback based on which parts of the correct answer they achieved.
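As a minimal sketch of how these three inputs fit together (every field name below is an illustrative assumption, not the actual Feedback Aide request schema; consult the API reference for the real initialization options):

```python
# Illustrative only: field names are assumptions, not the real
# Feedback Aide request schema.
essay_request = {
    "stimulus": "Discuss the main causes of the Industrial Revolution.",
    "rubric": {
        "type": "analytic",  # assess across multiple criteria ("holistic" assesses as a whole)
        "criteria": [
            {"name": "Argument", "levels": ["Emerging", "Developing", "Proficient"]},
            {"name": "Evidence", "levels": ["Emerging", "Developing", "Proficient"]},
        ],
    },
    "response": "The Industrial Revolution was driven by ...",
}
```

The engine evaluates the response against each rubric criterion and returns a suggested level plus written feedback, which the grader can then edit.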


Screenshot 2: Feedback Aide showing the automated grading decisions (in orange)

 


Screenshot 3: Feedback Aide showing the manual grading decisions (in blue)

 


Screenshot 4: Feedback Aide grading a short response question (teacher and student views)

 

Features

Essays

  • AI enhanced feedback and grading of essays between 100 and 15,000 characters.
  • Rubrics that cover different grading types:
    • analytic: assess learner work across multiple different criteria
    • holistic: assess learner work as a whole
  • Support for up to 5 sources alongside the stimulus, each with a limit of 30,000 characters.
  • Two different models depending on use case:
    • standard-essay - flexible and efficient model with good accuracy and feedback over a wide range of essay types and rubrics.
    • advanced-essay - our most advanced model with excellent accuracy and feedback over a wide range of essay types and rubrics.
  • Topic Detection - identifies essays that are not on topic and explains why they are considered off topic.
  • Moderation & Safety - optional AI-driven moderation helps ensure that sensitive, inappropriate, or crisis content is flagged and managed efficiently. See Content moderation with Feedback Aide.
  • Secure and private - responses are not used to train LLM foundation models.
  • Designed for localization and internationalization: see Internationalization - Feedback Aide API.

Short Responses

  • AI enhanced feedback and grading of short response questions up to 2,000 characters.
  • Rubrics that cover different question types:
    • sr-general: questions that require a short answer or explanation.
    • sr-categorize: evaluate the learner's ability to categorize or classify information.
    • sr-keypoints: evaluate the correctness of the key points in the response (Screenshot 4).
    • sr-xfromy: use when a student must provide 1 or more (X) correct answers from a longer list of (Y) correct answers.
  • Support for up to 5 sources alongside the stimulus, each with a limit of 30,000 characters.
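To illustrate the sr-xfromy style above, where a learner must provide X correct answers from a longer list of Y, here is a hypothetical sketch (the rubric shape and the helper function are illustrative assumptions, not the documented schema):

```python
# Hypothetical sr-xfromy rubric sketch: the learner must supply
# X (here 2) correct answers from a longer list of Y (here 3).
xfromy_rubric = {
    "type": "sr-xfromy",
    "required": 2,                          # X: answers the learner must provide
    "answers": ["red", "yellow", "blue"],   # Y: all acceptable answers
}

def count_correct(response_answers, rubric):
    """Count how many distinct accepted answers the learner gave."""
    accepted = {a.lower() for a in rubric["answers"]}
    given = {a.lower() for a in response_answers}
    return len(given & accepted)
```

A response of ["red", "blue"] would satisfy this rubric (2 of the required 2), while ["red", "green"] would score only 1.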

Full Grading and Review workflow:

  • Grading state. This is meant for the first scoring of a learner’s response. It initializes with a blank rubric and a score of –/max. If the session has previously been initialized, any previously saved feedback is restored.

  • Learner review state. This is a read-only view of the populated rubric, overall score, and written feedback. In this state, the learner will not be able to see which scores and feedback were originally generated by AI and which were human-generated.

Screenshot 5: Feedback Aide in learner review state

 

Essay Models 

The following models are currently available for Essay types:

  • standard-essay: Flexible and efficient model with good accuracy and feedback over a wide range of essay types and rubrics. Currently points to standard-essay-2025-03-14.
  • standard-essay-2025-03-14: March 2025 release, which adds model option support for `grading.learnerlevel` and `grading.markingstyle` and improves accuracy and feedback.
  • standard-essay-2024-11-22: November 2024 release, which improves accuracy for most use cases and adds model options for topic detection, flagging essays that are completely off topic.
  • standard-essay-2024-08-05: The August 2024 model is no longer recommended for use, as the newer models have improved performance.
  • advanced-essay: Our most advanced model with excellent accuracy and feedback over a wide range of essay types and rubrics. Always points to the best-performing advanced model; currently mapped to advanced-essay-2025-03-14.
  • advanced-essay-2025-03-14: March 2025 release, which adds model option support for `grading.learnerlevel` and `grading.markingstyle` and improves accuracy.
  • advanced-essay-2024-11-22: November 2024 release, which improves accuracy for most use cases and adds model options for topic detection, flagging essays that are completely off topic.
  • advanced-essay-2024-08-05: The August 2024 model is no longer recommended for use, as the newer models have improved performance.
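Evergreen aliases track the best-performing release, while pinning a dated model keeps behavior stable across future releases. In the sketch below, the model names and the `grading.learnerlevel` / `grading.markingstyle` option keys come from the table above, but the surrounding request shape and the example option values are assumptions:

```python
# Model names and option keys are taken from the model table;
# the request shape and option values are illustrative assumptions.
evergreen = {"model": "advanced-essay"}  # follows the best-performing advanced model

pinned = {
    "model": "advanced-essay-2025-03-14",  # fixed behavior across future releases
    "options": {
        "grading.learnerlevel": "middle-school",  # assumed example value
        "grading.markingstyle": "encouraging",    # assumed example value
    },
}
```

Pinning is the safer choice when consistent scoring matters across a grading window; the evergreen alias is convenient when you always want the latest improvements.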

Short Response Models

The following models are currently available for Short Response types:

  • advanced-shortresponse: Our evergreen advanced short response model, which points to the best-performing model. Currently mapped to `advanced-shortresponse-2025-02-12`.
  • advanced-shortresponse-2025-02-12: First release of the advanced short response model, with support for four rubric styles: `sr-general`, `sr-keypoints`, `sr-categorize`, and `sr-xfromy`.

 

Troubleshooting

  • Manual grading: If AI grading isn't enabled or is unavailable, a human grader can manually score the response using the rubric.
  • Unsupported essay: If the response is less than 100 characters or over 15,000 characters, Feedback Aide will not grade the essay. Graders will be shown a warning message and can manually grade the response.
  • Unsupported short response: If the response is over 2,000 characters, Feedback Aide will not grade the response. Graders will be shown a warning message and can manually grade the response.
  • Disruption: If there is an interruption in service from the AI engine, the grader will see an error message informing them of the disruption. They can regenerate (resend the information to the engine without using additional credits) or grade manually.
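The length limits above can be enforced client-side before a request ever reaches the AI engine. A minimal sketch (the function and constants are illustrative, not part of the Feedback Aide API):

```python
# Client-side pre-checks mirroring the documented limits, so an
# integration can warn graders before calling the AI engine.
ESSAY_MIN, ESSAY_MAX = 100, 15_000  # characters
SHORT_RESPONSE_MAX = 2_000          # characters

def can_auto_grade(response: str, kind: str) -> bool:
    """Return True if the response is within Feedback Aide's documented limits."""
    n = len(response)
    if kind == "essay":
        return ESSAY_MIN <= n <= ESSAY_MAX
    if kind == "short-response":
        return n <= SHORT_RESPONSE_MAX
    raise ValueError(f"unknown kind: {kind}")
```

When the check fails, the integration can surface the same warning the grader would otherwise see and fall back to manual grading.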

Release Notes

The detailed release notes are available here: Feedback Aide Release Notes
