Preview API Documentation - Subject to change
Overview
Feedback Aide is an embeddable API for assessing a learner response against a rubric using Learnosity's AI engine. It is a safe and efficient assistant for evaluating essays and providing quality feedback.
Learnosity's AI engine generates an initial score for a learner response, given the question stem and a grading rubric, along with actionable written feedback. Learnosity's Feedback Aide API then helps graders easily edit the generated scores and feedback.
Screenshot 1: Feedback Aide showing the rubric / assessment criteria
How it works
Feedback Aide takes an essay 'stimulus', a scoring 'rubric', and a learner 'response'. It evaluates the response to the stimulus against the rubric, identifies the appropriate rubric levels for the essay, and provides constructive feedback. A configuration sketch of this flow follows the screenshots below.
The suggested scores and feedback can then be reviewed and modified by a human grader before being saved.
Once the feedback has been finalized, it can be presented to the learner in a simple read-only interface.
Screenshot 2: Feedback Aide showing the automated grading decisions (in orange)
Screenshot 3: Feedback Aide showing the manual grading decisions (in blue)
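As a rough sketch of this flow, the fragment below assembles the three inputs and hands them to an initialization call. The configuration shape and the LearnosityFeedbackAide.init entry point are illustrative assumptions, not the documented Feedback Aide API; consult the initialization reference for the real signatures.

```typescript
// Illustrative sketch only: the configuration shape and the init entry point
// below are assumptions for this preview, not the documented Feedback Aide API.
interface RubricLevel {
  score: number;
  description: string;
}

interface RubricCriterion {
  id: string;
  description: string;
  levels: RubricLevel[];
}

interface FeedbackAideRequest {
  stimulus: string; // the essay prompt presented to the learner
  response: string; // the learner's essay text
  rubric: { criteria: RubricCriterion[] };
}

// Hypothetical global exposed by the embeddable API (assumption).
declare const LearnosityFeedbackAide: {
  init(request: FeedbackAideRequest, selector: string): void;
};

const request: FeedbackAideRequest = {
  stimulus: "Discuss the causes of the Industrial Revolution.",
  response: "The Industrial Revolution began in Britain because ...", // learner essay
  rubric: {
    criteria: [
      {
        id: "argument",
        description: "Strength of argument",
        levels: [
          { score: 1, description: "Limited or unclear argument" },
          { score: 2, description: "Clear, well-supported argument" },
        ],
      },
    ],
  },
};

// Evaluates the response to the stimulus against the rubric, then renders the
// suggested rubric levels and written feedback for a human grader to edit.
LearnosityFeedbackAide.init(request, "#feedback-aide");
```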
Current Features
- Feedback and grading of analytic rubrics with essays between 100 and 10,000 characters.
- Two different models depending on use case:
  - standard-essay - a flexible and efficient model with good accuracy and feedback over a wide range of essay types and rubrics.
  - advanced-essay - our most advanced model, with excellent accuracy and feedback over a wide range of essay types and rubrics.
- Secure and private - responses are not used to train LLM foundation models.
- Two modes (a configuration sketch follows the screenshot below):
  - Grading mode. This is meant for the first scoring of a learner's response. It initializes with a blank rubric and a score of –/max. If the session has previously been initialized, it restores any feedback that was previously saved.
  - Learner review mode. This is a read-only view of the populated rubric, overall score, and written feedback. In this mode, the learner cannot see which scores and feedback were originally generated by AI and which were human-generated.
Screenshot 4: Feedback Aide in learner review mode
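As a minimal sketch of switching between the two modes, the fragment below passes a hypothetical mode option at initialization. The option name and its values are assumptions for illustration, not the documented configuration.

```typescript
// Hypothetical mode option (assumption): "grading" for the grader workflow,
// "learner_review" for the read-only learner view.
type FeedbackAideMode = "grading" | "learner_review";

interface ModeConfig {
  mode: FeedbackAideMode;
  sessionId: string; // assumed identifier used to restore previously saved feedback
}

// Hypothetical global exposed by the embeddable API (assumption).
declare const LearnosityFeedbackAide: {
  init(config: ModeConfig, selector: string): void;
};

// Grading mode: first scoring pass; restores any previously saved feedback.
LearnosityFeedbackAide.init({ mode: "grading", sessionId: "session-123" }, "#grader-view");

// Learner review mode: read-only rubric, overall score, and written feedback.
LearnosityFeedbackAide.init({ mode: "learner_review", sessionId: "session-123" }, "#learner-view");
```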
Models
The following models are currently available.
| Model | Description |
|-------|-------------|
| standard-essay | Flexible and efficient model with good accuracy and feedback over a wide range of essay types and rubrics. |
| standard-essay-2024-06-30 | Specific version of standard-essay; standard-essay currently maps to this. |
| advanced-essay | Our most advanced model, with excellent accuracy and feedback over a wide range of essay types and rubrics. |
| advanced-essay-2024-06-30 | Specific version of advanced-essay; advanced-essay currently maps to this. |
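If model selection is exposed as a request option, the aliases pick up the latest version automatically while the dated names pin a specific release. The model field below is an assumption for illustration, not a documented parameter.

```typescript
// Model identifiers from the table above; the field name "model" is an assumption.
type FeedbackAideModel =
  | "standard-essay"
  | "standard-essay-2024-06-30"
  | "advanced-essay"
  | "advanced-essay-2024-06-30";

interface ModelConfig {
  model: FeedbackAideModel;
  // ...other request options (stimulus, response, rubric)
}

const config: ModelConfig = {
  // The alias currently maps to advanced-essay-2024-06-30; pin the dated name
  // instead if grading behavior must stay stable across model releases.
  model: "advanced-essay",
};
```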
Troubleshooting
- Manual grading: If AI grading isn't enabled or is unavailable, a human grader can manually score the response using the rubric.
- Unsupported essay: If the response is fewer than 100 characters or more than 10,000, Feedback Aide will not grade the essay. Graders are shown a warning message and can manually grade the response.
- Disruption: If there is an interruption in service from the AI engine, the grader will see an error message informing them of the disruption. They can regenerate (resend the information to the engine without using additional credits) or grade the response manually.
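Because responses under 100 characters or over 10,000 are not graded automatically, a host application may want to check length up front and route out-of-range essays straight to manual grading. The helper below is an illustrative sketch, not part of the Feedback Aide API.

```typescript
// Character limits from the "Unsupported essay" note above.
const MIN_ESSAY_LENGTH = 100;
const MAX_ESSAY_LENGTH = 10_000;

// Returns true when the essay length is within the supported range for AI grading.
function isEligibleForAiGrading(response: string): boolean {
  const length = response.length;
  return length >= MIN_ESSAY_LENGTH && length <= MAX_ESSAY_LENGTH;
}

// Assumed to hold the learner's submitted essay text.
declare const learnerEssayText: string;

if (!isEligibleForAiGrading(learnerEssayText)) {
  // Skip the AI request and show the rubric for manual grading instead.
  console.warn("Essay length outside the supported range; manual grading required.");
}
```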