
Captivate 12 - Scoring Logic

Participant, Feb 01, 2024

I am creating an ergonomics training for a client, and instead of a quiz at the end, they want a self-assessment where the user answers a few questions about their work tasks and setup. The idea is that we can identify people who need a health and safety professional to follow up with them and do a more in-depth analysis of their situation.

 

I am trying to create scoring logic for how this would work. There are three questions in the self-assessment, each configured so the end user doesn't see whether the answer is right or wrong (I overrode the right/wrong messages with a white background and a generic message). I thought I could assign point values of 30, 30, 40 to the questions; then if someone gets a 100, we know there is no need for a follow-up. If they get anything below 100, they need a follow-up, with a score of 0 being the most urgent.

 

My question is: will these scores reflect badly on the learner taking the quiz in the LMS reporting? Can you configure the training so that everyone passes, but it still generates a score that isn't tied to performance?

 

Current settings: I have the mandate set to "Required - the user must take the quiz to continue," the status representation set to "Incomplete to Complete," and the success/completion criteria set to "Quiz is attempted."

 

Any thoughts on whether these settings look right, or anything you'd change in the preferences? My client is testing next week, and I don't want any surprises about how this will work!

 

Screenshot 2024-02-01 130915.png
Screenshot 2024-02-01 130929.png

 

164 Views
Community guidelines
Be kind and respectful, give credit to the original source of content, and search for duplicates before posting. Learn more
community guidelines
Community Expert, Feb 02, 2024

Setting the Required option the way you have has often caused problems in past versions, so you need to test this out thoroughly.

Since you chose to report scores, I fear the score will indeed end up in the gradebook of the LMS. For older versions of Captivate I would not have chosen that workflow, but those versions had far more possibilities, such as shared and advanced actions. I have several blog posts with workflows for similar use cases, but they cannot be used in version 12.

Advocate, Feb 02, 2024

Hi, I can't tell you whether all your chosen quiz settings are right for your scoring model; much depends on which LMS your training will run under. But for the most important part, that practically all participants pass the self-assessment and there's no negative outcome like FAILED, you can use the pass/fail criteria to achieve that.

Those are no longer under Preferences as in older versions of Captivate; you now access them via the Result slide, which is automatically added to a quiz.

Go to the Result slide and select the Interactions panel on the right. Under Pass Criteria I would select Based on Points Scored. Let's say you assign 20 / 30 / 40 points to the answer options, and the lower the score, the more that particular participant needs a health and safety professional follow-up. With three questions, 60 points would indicate such an urgent follow-up need, while 120 points would indicate all is hunky-dory; everything in between indicates intermediate self-assessment states. However, under Pass Points you enter 60 (the lowest possible amount with three questions), while Total Points would be 120. This way nobody can fail, but the reports from an LMS on the test results will still give a picture of who needs 'support'.
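The arithmetic of this scoring model can be sketched in a few lines, outside Captivate (the point values here are just the example numbers from above, not anything Captivate-specific):

(function () {
    // Assumed model: three questions, each answered option worth 20, 30, or 40 points.
    const optionPoints = [20, 30, 40]; // possible points per question
    const numQuestions = 3;

    const minScore = Math.min(...optionPoints) * numQuestions;  // 60: lowest possible score
    const maxScore = Math.max(...optionPoints) * numQuestions;  // 120: highest possible score
    const passPoints = minScore;  // set Pass Points to the lowest achievable score

    // Since no participant can score below passPoints, nobody can fail.
    console.log(minScore, maxScore, passPoints); // 60 120 60
})();

Setting Pass Points to the minimum achievable score is the whole trick: the pass/fail flag becomes meaningless, while the raw score still carries the information about who needs follow-up.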

 

 

New Here, Feb 05, 2024

What a great suggestion. I'm adding that to my notes! Thanks!

Participant, Feb 05, 2024

This is a good idea. One problem I can see with how I've set this up: the final slide the learner sees changes depending on whether they've passed (answered all questions in the positive) or failed (answered 1-3 questions in the negative). If they get all questions "right," they are routed to a slide that says "no further action required." If they get one wrong, they are routed to a slide that says "you should file a followup with EHS." If everyone passes, I can't think of a way to route them to the right message. Any thoughts on that?

Community Expert, Feb 06, 2024

That, from my understanding of your question, is the reason you are missing the Branch aware functionality. It made the quizzing system variables dynamic: they adapted to the slides the learner actually visited and answered, the score covered only that part, and the gradebook in the LMS reported that correct score. Nothing is dynamic yet in the new version, with the exception of the values of variables. Without JS, I fear you will not be able to solve the problem.

Advocate, Feb 06, 2024

Lilibiri is correct; it is best to use a bit of JavaScript. Here I stick to my model of 20 / 30 / 40 points for three questions, and also to the intention that the scores should not reflect badly on the learners (nobody fails). Hence 60 points is the lowest, 120 points the maximum. I'll keep it simple and split the score range right in the middle, at 90 points. Those with 90 points or more will be routed to a slide with "no further action required"; those with fewer than 90 go to a slide with "you should file a followup with EHS." If you want a more differentiated handling of the score and response, you'd have to adapt the following.

After the assessment (or quiz, in Cp terms) you add a slide with the following, or very similar, JavaScript:

Open the Interactions panel, select Slide Interactions +, select Slide Enter under Timeline, and then under Action select ... MORE. This opens a dropdown list; scroll down to Run JavaScript and select it. This opens a grey JavaScript panel, where you enter (copy/paste if you like) this code:

 

(function () {
    // Read the reported quiz score from Captivate
    let assessScore = cpAPIInterface.getVariableValue("Quiz.Score");
    if (assessScore >= 90) {
        // 90 or more points: no follow-up needed
        cpAPIInterface.goToSlide("no further action");
    } else {
        // below 90 points: route to the EHS follow-up slide
        cpAPIInterface.goToSlide("action required");
    }
})();

 

The strings "no further action" and "action required" are slide names; you can adapt them to your liking, but of course they must match your actual slide names exactly. The variable "assessScore" is function-internal; you can name it differently, but be sure to adapt the condition "if (assessScore >= 90)" as well. The whole construct is a so-called IIFE (immediately invoked function expression), which self-triggers on Slide Enter and lets us use internal variables (let) without polluting the so-called global namespace. In short, we can't harmfully interfere with Captivate's own JavaScript processes.
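If you later want more than two tiers, the same pattern extends to a small routing function. This is only a sketch with placeholder slide names and thresholds you would replace with your own; the routing logic is kept pure so it can be tried outside Captivate, and inside Captivate you would pass its result to the goToSlide call shown above:

// Sketch: map a score to a slide name in tiers (placeholder names/thresholds).
function chooseSlide(score) {
    if (score >= 110) return "no further action";
    if (score >= 90) return "minor recommendations";
    return "action required";
}

console.log(chooseSlide(120)); // "no further action"
console.log(chooseSlide(95));  // "minor recommendations"
console.log(chooseSlide(60));  // "action required"

Keeping the score-to-slide mapping in one function also means there is exactly one place to adjust when the client changes the thresholds.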

 

I haven't tested this specific setup myself, because I would have to build the entire test context. Please forgive me if something goes wrong; nobody is perfect. But it should work. Take it as a practical suggestion for how your intended goal could work.

Participant, Feb 06, 2024

Thank you for the suggestion. I have built the blank slide and added the JavaScript, tailored to my slide names and scoring approach. However, I can't seem to get the scoring to work. For simplicity's sake, I assigned each question 10 points for the correct answer. When I go to the scoring slide to assign a "pass" value, the system totals the Total Points based on what I've entered for each question. It automatically sets Total Points to 30 and requires a minimum score of 1 to pass. I can't see where to change the Total Points.

 

Screenshot 2024-02-06 051704.png

 

Screenshot 2024-02-06 051636.png

Advocate, Feb 07, 2024

As I said, I kept thinking about this. If you, the AUTHOR of this thread, are still interested in further suggestions:

A) It might actually not matter too much if a learner 'fails' the assessment quiz, as long as you don't emphasize it on a result slide. It would just be a 'statistical figure' without being publicized.

B) If it still matters and the intention is that nobody can fail, how about this: directly before your first assessment slide, add a normal blank slide with an appropriate graphic/illustration, a welcoming text like "Now follows a short questionnaire ..." and one button to CONTINUE (Go to next slide). Add this button in the Visual Properties panel under Reporting as Include in quiz. Assign e.g. 10 points to the button click/tap, and also check Add to total and Report to LMS. You must make sure a learner can proceed to the first assessment question only via this CONTINUE button. Now your Total Points have increased to 40 (in your setup), of which 10 points are 'unavoidable'. Set those 10 points as Pass Points in the Result slide interactions.
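The arithmetic behind suggestion B, as a quick sketch (the point values are taken from the setup described in this thread, nothing Captivate-specific):

(function () {
    // A mandatory CONTINUE button adds 10 guaranteed points,
    // so the lowest achievable score equals the pass mark.
    const buttonPoints = 10;             // awarded for the unavoidable CONTINUE click
    const questionPoints = [10, 10, 10]; // 0 or 10 per question in the asker's setup

    const totalPoints = buttonPoints + questionPoints.reduce((a, b) => a + b, 0); // 40
    const minScore = buttonPoints;       // worst case: every question answered "wrong"
    const passPoints = buttonPoints;     // set Pass Points to 10 on the Result slide

    console.log(totalPoints, minScore >= passPoints); // 40 true
})();

Because the button's points cannot be missed, the minimum score can never drop below Pass Points, which is exactly the "nobody fails" guarantee.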

Community Expert, Feb 07, 2024

Just a comment: you can hide the score slide, but the score will still be stored in the gradebook of the LMS. To avoid this, you could switch from graded to survey questions, but the variable storing the chosen answer still needs to be fixed first.

Advocate, Feb 06, 2024

Heck, I fell victim to an elementary thinking error. I assumed one could assign different scores (20, 30, 40) to different answer options of the same multiple choice question, but this is not possible in Cp standard question slides. Cp defines it more like: an answer option is either correct or wrong, so one either scores or does not score at all. That is the default question schema (not considering Penalty points here). What you need for your purpose is a custom questionnaire with customized JavaScript evaluation, not a Cp standard quiz. I'll keep thinking about it ...

 

You are right about the Total Points. They are always the sum of the points assigned to each question.

Community Expert, Feb 06, 2024

Indeed, one of the first feature requests I entered was getting Branch aware and Question pools back. CP12 is still very limited. Support for JS has decreased as well, which is why that wonderful CPExtra widget no longer works. One of its many features was the ability to build a score in a user variable and transfer that value to the quizzing variable for the score.
