Optimizing Student Response Capture During Unplanned Disruptions


Three areas of focus are critical to optimizing student response capture:

  1. Keeping saved sessions up to date. Persisting data as often as is practical to reduce the chance of losing student responses.
  2. Detecting any save or submission failure and responding accordingly.
  3. Saving or submitting a session later if communication between assessment and server is impacted for a prolonged period.

This article will cover the steps developers should take to minimize the impact of a temporary loss of internet connection, power cut, service provider problem, or server issue on a student assessment submission (also known as a session). The most important thing to remember when designing an assessment workflow is that technical and physical constraints prevent a student computer from reaching, or being reached by, a server when the systems connecting them have been affected.

Once initialized, assessments are loaded onto a student’s device to minimize traffic between the student machine and server. As a student adds responses to Questions, those responses also remain local until a save or submit action sends the data to a server. If a connectivity issue occurs, there's no way to get the data to the server until the connection is restored.

One way around this is to build a siloed application that (using local storage or similar means) automatically stores data locally when connectivity is compromised, detects when a connection is restored, and automatically saves or submits to the server. This would be convenient and reliable in some ways, but has ramifications when it comes to shared machines, starting and resuming an assessment on different devices, possible manual clearing of storage by the end user, and other potential risks.

To mitigate these risks, robust save/submit actions and event handling can be used to keep local student responses and the data persisted to the server as closely in sync as possible.

Default save and submit behavior

Because each customer prefers to handle server traffic differently, our assessment API presents save and submit buttons by default for the student to use at will, but assumes no other save or submit behavior. (A developer can remove these buttons but, for the sake of this discussion, let’s assume they are present and that the assessment type is “submit_practice,” which can both save and submit to the server.)

If a customer uses the default behavior when presenting an assessment, and (for the sake of argument) the student never uses a manual save or submit button, the student’s responses will not be saved or submitted and the session will never be created or updated. The student could work in the assessment for quite a while, and Learnosity would still know nothing about the session.

If all goes well, the student will submit the assessment when complete, and the session will be correctly persisted. If the submission fails, our API will automatically show the student a message that the submission failed and give them an opportunity to save the session data, encoded as a Base64 string for encapsulation, so it can be submitted later. Adding save behavior (such as autosave and event-driven saves), and possibly retrieving session data proactively for the student using the getSubmissionData() public method, can help ensure the session remains as current as possible even during an internet connection loss or server issue.

Keeping saved sessions up to date

The best way to reduce potential lost responses is to keep the session as current as is practical, synchronizing between the student machine and server.

The most important thing to remember is that a session does not exist in our records until it is saved or submitted. Likewise, it will only be as current as the most recent save.

So the first question is, how can student responses be saved?

  • Students can manually save at will if the default save button, or a custom button controlled by a developer, is available in the assessment UI.
  • Developers can programmatically save at will using the save() public method.
  • Developers can enable auto-save behavior in the assessment.

The next question is, how early should a session be saved? Because a session isn’t persisted in the database until a save or submit occurs, when a developer first chooses to save can impact customer workflows.

  • If a student should be allowed to fully abandon a session prior to saving, you don't need to take any action. Just initializing the assessment will not persist the session.
  • If you want to save a record of the assessment as soon as the student starts it, you can use the save() public method whenever you like (for example after hitting the Start button when the test:start event fires). This will save the session even before the student adds any responses.
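The early-save option above can be sketched as follows. Only the save() public method and the test:start event name come from this article; the `app` object, its `on()` surface, and the stub below are illustrative assumptions standing in for whatever object your assessment API's initialization returns, so the wiring can be exercised anywhere.

```javascript
// Sketch: persist a session as soon as the student starts the assessment.
function saveOnStart(app) {
  // When test:start fires (the student clicked the Start button), trigger
  // a save so the session exists server-side before any responses are added.
  app.on("test:start", function () {
    app.save();
  });
}

// --- Minimal stub standing in for the real app object ---
function makeStubApp() {
  const handlers = {};
  return {
    saveCount: 0,
    on(event, handler) { (handlers[event] = handlers[event] || []).push(handler); },
    emit(event) { (handlers[event] || []).forEach((h) => h()); },
    save() { this.saveCount += 1; },
  };
}

const app = makeStubApp();
saveOnStart(app);
app.emit("test:start"); // simulate the student clicking Start
console.log(app.saveCount); // 1 — the session was saved immediately
```

With the real API, you would pass the initialized app object to saveOnStart instead of the stub.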

And the last question is, how do you save the student's work as often as is practical, while still being efficient?

  • The common approaches here are to use autosave, save on every Item navigation, and save every time the student interacts with a Question.
  • It may be tempting to use autosave at a very frequent interval, or to save every time the student changes a response, but these options can quickly overwhelm your servers with unneeded traffic. We recommend an autosave interval of no less than 30 seconds, and for the same reason we strongly recommend against saving on every Question interaction.
  • Instead, we recommend the following:
    • Save every time a student navigates to a new Item. This typically means the student has entered a response and moved on to another Item.
    • Autosave every five minutes. Saving upon every navigation is the mainstay for this task, but autosave is still helpful when a student steps away mid-Item. Say a student leaves an essay partly finished and walks away: after five minutes, that work will be saved even if there was no navigation.
  • You can also tie into browser events (such as beforeunload) to attempt a save if the student tries to leave the assessment. Our APIs will attempt to do this but leave the option up to the student. However, browsers won't let you be rigid here, such as preventing the user from quitting or navigating away, so you can't always guarantee success.
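The save cadence recommended above (save on every Item navigation, plus a five-minute autosave safety net) can be sketched with plain timing logic. The event name "item:navigate" and the `app.on`/`app.save` surface in the wiring comments are illustrative assumptions, not a documented API; the scheduler itself is self-contained.

```javascript
const AUTOSAVE_INTERVAL_MS = 5 * 60 * 1000; // five minutes

// `save` is whatever triggers the real save (e.g. the API's save() method).
// `startAt` exists so the logic can be exercised with a deterministic clock.
function makeSaveScheduler(save, intervalMs = AUTOSAVE_INTERVAL_MS, startAt = Date.now()) {
  let lastSaveAt = startAt;
  const doSave = (now) => {
    save();
    lastSaveAt = now; // any save resets the autosave clock
  };
  return {
    // Call when the student navigates to a new Item: always save.
    onNavigate(now = Date.now()) { doSave(now); },
    // Call on a coarse timer (e.g. every 30 seconds): save only if the
    // interval has passed since the last save of any kind.
    tick(now = Date.now()) {
      if (now - lastSaveAt >= intervalMs) doSave(now);
    },
  };
}

// Example wiring (event name and app surface are assumptions):
// const sched = makeSaveScheduler(() => app.save());
// app.on("item:navigate", () => sched.onNavigate());
// setInterval(() => sched.tick(), 30000);

// Deterministic walkthrough using millisecond timestamps:
let saves = 0;
const sched = makeSaveScheduler(() => { saves += 1; }, AUTOSAVE_INTERVAL_MS, 0);
sched.tick(100000);       // 100 s elapsed: under five minutes, no save
sched.tick(300000);       // five minutes elapsed: autosave fires
sched.onNavigate(400000); // student moved to a new Item: save immediately
sched.tick(600000);       // only 200 s since the last save: no autosave
console.log(saves); // 2
```

Resetting the autosave clock on every save is the key design choice: a student who navigates regularly never triggers the timer, so autosave only adds traffic when it is actually needed.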

Detecting a save or submit failure

The above measures will keep the session as up to date as the last time the student navigated, manually saved, or let five minutes go by. However, without one more configuration, a student could continue working without realizing that connectivity has been lost. Where this is a concern, we recommend detecting when a save fails by using the test:save:error event.

Your developers will likely want to design a behavior here that best suits your application. If connectivity is permanently lost, this event can fire repeatedly, and if the loss is fleeting, you may not want to take drastic action because the connection may be restored quickly without intervention. You may, for example, want to act only after three consecutive failures, resetting the counter after every successful save.

Taking this small extra step lets you display an alert along the lines of, "Your connection may not be stable. Please consider returning to complete this assessment another time." This reduces the need to recover from a submission failure by reducing the amount of response data at risk.
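The three-consecutive-failures rule can be sketched as a small counter. Only the test:save:error event name comes from this article; the companion success event and the `app.on` surface in the wiring comments are assumptions, so the tracker below is kept self-contained and pure.

```javascript
// Warn the student only after `threshold` consecutive save failures, and
// reset the counter on any successful save.
function makeFailureTracker(threshold = 3, onThreshold = () => {}) {
  let consecutiveFailures = 0;
  return {
    recordFailure() {
      consecutiveFailures += 1;
      if (consecutiveFailures === threshold) onThreshold();
    },
    recordSuccess() { consecutiveFailures = 0; },
    get failures() { return consecutiveFailures; },
  };
}

// Example wiring (assumed API surface):
// app.on("test:save:error", () => tracker.recordFailure());
// app.on("test:save", () => tracker.recordSuccess());

let warned = false;
const tracker = makeFailureTracker(3, () => { warned = true; });
tracker.recordFailure();
tracker.recordFailure();
tracker.recordSuccess(); // a save succeeded: start counting again
tracker.recordFailure();
tracker.recordFailure();
console.log(warned); // false — never three failures in a row
tracker.recordFailure();
console.log(warned); // true — third consecutive failure
```

The onThreshold callback is where you would show the connection-stability alert to the student.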

A developer can further instruct students to save their work to that point in whatever way the customer prefers. Depending on the intended audience, this could range from taking screenshots or seeking help from a teacher or parent, all the way up to retrieving session data manually using the getSessionData() public method. That data could then be saved to a customer server if the connection loss is intermittent or partial, or saved by the student to the testing machine or removable media. In either case, the encoded session can be submitted later by the customer.

How to "submit" a failed submission when needed

Up to this point, we’ve done our best to keep the server side synchronized with the student data. With the options described, the risk of loss should be limited to the last response entered by the student, and only if the student or developer hasn’t saved since. All other Items are covered through navigation, autosave, and a warning not to proceed if a connection can’t be detected. Now we need to decide what to do, knowing that the session can’t be submitted.

The first approach is to make no special additional effort, because you’ve already taken precautions to keep the student data as accurate as the stability of the student’s connection allows. Our default failed-submission recovery behavior will automatically present the user with a dialog that can be used to save the responses added thus far.

While not required, you can also customize our failed submission behavior:

  • You can configure the assessment to mail the results to an administrator. If your customers are remote, this may not be helpful because a loss of connectivity will also likely prevent email from being sent. However, if the issue is not strictly connectivity (if the submission fails for a different reason), or if the issue is outside the customer’s intranet, for example, email may still be an option. Or:
  • The student can download the data manually, save it, and get it to an administrator, you, etc., in a different way.

A second option is to add custom measures in addition to the default behavior. Similar to the previous discussion about handling save errors, you can create an event handler that is invoked when the test:submit:error event is received, and take your own actions. These might include displaying your own dialog and using the aforementioned getSubmissionData() public method to grab response data. (This can be helpful if you suspect students may not heed instructions shown in the failed submission recovery dialog.)
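A custom test:submit:error handler along these lines can be sketched as follows. The test:submit:error event and getSubmissionData() method names come from this article, but how getSubmissionData() delivers its result is an assumption here (modeled as a plain return value), and the stub app stands in for the real API so the flow is runnable anywhere.

```javascript
// Sketch: on a failed submission, capture the encoded session data so it
// can be handed to customer-side storage and submitted later via the Data API.
function handleSubmitError(app, store) {
  app.on("test:submit:error", function () {
    const data = app.getSubmissionData(); // assumed to return the encoded session
    store(data);
    // ...also display a custom dialog telling the student what happened.
  });
}

// --- Stub standing in for the real app object ---
function makeStubApp(encoded) {
  const handlers = {};
  return {
    on(event, h) { (handlers[event] = handlers[event] || []).push(h); },
    emit(event) { (handlers[event] || []).forEach((h) => h()); },
    getSubmissionData() { return encoded; },
  };
}

const captured = [];
const app = makeStubApp("c2Vzc2lvbi1kYXRh"); // placeholder Base64 string
handleSubmitError(app, (d) => captured.push(d));
app.emit("test:submit:error"); // simulate a failed submission
console.log(captured[0]); // "c2Vzc2lvbi1kYXRh"
```

In production, `store` would send the encoded data to your own server (if some connectivity remains) or offer it to the student as a download, rather than push to an in-memory array.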

In both cases, you can then submit that session using the Data API. Finally, if circumstances require that you submit a session in the absence of recovery data, you can use the Data API to force a submission and update the session status to Complete.

Planning for success when there’s no dial tone

The challenge caused by a prolonged save or submit interruption is that there’s no way to get assessment-side responses to the server. As a real-world metaphor, say an assessment is given to a student and answers must be phoned in by the end of the day. In the interim, however, phone service is interrupted. The student still has the responses but can’t phone them in, and the instructor can’t call the student to get them. The only remedy is to transmit the data another way (such as via email), or to call once phone service is restored. Until then, the responses are unknown to the instructor.

This is equivalent to assessment data being client-side and something such as an intermittent connection loss, firewall, or server issue affecting transmission. There is no way for the assessment to save or submit to the SaaS server, or for the server to reach out and grab the data from the student. By implementing the best practices discussed in this article, however, you will arm your assessments with the best protection against unforeseen connection issues from the student device all the way to the server.
