Good day, all;
I'm looking for help with a reporting issue I can't quite figure out, and I'm hoping someone can point me in the right direction on where to start looking.
I created three simulations in Captivate 2019 (11.5.4.613 for Mac) that have tasks worth quiz points (no negative points). All three simulations have the same quiz preferences for reporting, settings and pass/fail, and as you can see from the attached PDFs of the Advanced Interaction panel, all the quiz questions have points and report to the total. Further, all three have the same settings in the LMS (Blackboard): unlimited attempts and a requirement of 80% to pass, which unlocks subsequent tasks via adaptive release.
The issue is that one of the simulations does not report 100% even when everything is done correctly. I tested this myself after a participant alerted me to the issue, and it can't be a coincidence that everyone gets 100% in two simulations and the same 91.67% in the third (I also got 91.67%). For the simulation that isn't reporting 100%, there are 24 points available, so I'm guessing that two points somewhere aren't being recorded...?
I've checked that all required scorable items are included in the quiz and added to the total, and I've double-checked the quiz settings and the Blackboard settings, but I can't quite figure out the issue. Any ideas or suggestions on where to look and/or what to check would be appreciated!
Thanks,
samamara
Unfortunately you cut off the header section of the Advanced Interaction panel, which is very important: it shows which interactive objects have been selected, as well as the Total score. Is it possible to post the headers, and to indicate which panel belongs to the 'wrong' simulation?
BTW, you can include images, which saves us the time of opening each attachment and makes it easier to compare directly.
Good day, Lilybiri;
Thank you for your reply, as always 🙂
I believe I have figured out the problem, and it was a rather simple issue: two of the slides in that 'wrong' simulation have two target options (as there are two possible correct answers). Each target has a quiz point allotted, and since the learner can only click one (and indeed only needs to click one), there are two points that are never accounted for (hence the 91.67%).
So that changes my question from "What is the issue?" to "How can I solve the issue?". Currently I can only think of removing those targets/scorable items from the quiz, so if you (or anyone) can think of a better solution, I'd appreciate the input.
TIA!
samamara
PS: I'm sorry about the attachments. I know you can insert images - I did with that clip from the grade centre - but I couldn't take a clip of the entire Advanced Interaction panel and thought the attachments would add more information. I'll use images next time 🙂
You could have detected this in the Advanced Interaction panel...
Since it is a simulation, is it possible to show me a screenshot of the slide and indicate the two possible targets? I have workarounds for such a situation, but I'm not sure which one applies in this case. It is clearly a customized slide; you cannot have two targets with a default simulation in assessment/training mode. You'll probably need an advanced action and - if possible - two Next buttons, one without and one with a score. I hope it is not a responsive project?
Hi Lilybiri;
Thanks again for your note. Yes, it's a customized slide with two targets that are blank shapes (no fill, no line) used as buttons. And indeed it was a closer look at the Advanced Interaction panel that made me realize that was the issue. I mean, I put the targets there, so I knew they were there, lol, but as I was going through it just hit me when I saw them that they had to be the issue.
Below is a screenshot of the slide. On target success, the project moves to the next slide.
If you have an idea for a workaround that you would like to share, I'd love to hear it.
Thanks again!
I forgot to ask about the number of attempts for the shape buttons; if it is not Infinite, what is the action for Last Attempt? You told me that the Success action is Go to Next Slide. You probably realize that clicking one of the targets means a Failure action for the other target? I suppose you do allow failure?
This may seem like a simple question, but that is not at all the case. The workflow I proposed earlier would be, if there is only 1 attempt:
Hello again, Lilybiri;
Many thanks for your continued feedback!
OK, this is the project I asked for support on before - where I used your workflow of having a large blank button covering the slide that triggers a failure action - with one small correct target on top that triggers a success action.
On each slide there is an instruction and a target. If the target is clicked, the project advances. On the first click in the wrong place, a hint pops up. If the user gets it right the second time, the project advances; if the user gets it wrong a second time, they get a failure message and a button to continue to the next slide. They get a quiz point for clicking correctly on either the first or second attempt, and none if they get the failure message. On two of the slides in that simulation there are two targets, each with the success action Go to Next Slide and each worth a point, since success gives the user a point in the quiz. The reason each successful user was getting only 91.67% rather than 100% is that those 'extra' (i.e. not clicked) targets were each an available point that users could never earn: if they click one, they never have a need to click the other.
I'm not sure if that is clear...? Everything in the simulation "works" the way it should. It's just these two slides, with their extra phantom quiz points, that are messing up the final score and not allowing for 100% in the grade centre. I don't know if there is a solution for this. I'm not sure all the other steps you mention above are necessary, since the issue is just the score, but perhaps what you said about "adding a score for the slide" may be a possible avenue... what do you think? A sketch of the slide logic is below in case it helps.
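To make the flow concrete, here is a minimal sketch of one slide's logic in plain JavaScript. All names here are hypothetical stand-ins for the Captivate success/failure actions, not real Captivate APIs:

```javascript
// Illustrative model of one slide's interaction (hypothetical names only).
let wrongClicks = 0;
let quizPoints = 0;

const awardQuizPoint = () => { quizPoints += 1; };                    // success action: +1 point
const goToNextSlide = () => console.log('advance to next slide');
const showHint = () => console.log('show hint');                      // after the first miss
const showFailure = () => console.log('failure message + Continue');  // after the second miss

function onSlideClick(hitCorrectTarget) {
  if (hitCorrectTarget) {       // correct on the 1st or 2nd try
    awardQuizPoint();
    goToNextSlide();
    return;
  }
  wrongClicks += 1;
  wrongClicks === 1 ? showHint() : showFailure();
}

// On the two problem slides there are TWO correct targets, each carrying its
// own point. Clicking either one advances the slide, so the other target's
// point is unreachable: only 22 of the 24 reported points can ever be earned.
```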
Thanks again!
samamara
Sorry that I didn't remember the previous thread. I perfectly understand your present problem, because it is a situation that happens often.
So you have two attempts for the targets. You can still use the Hint (which is triggered by that big shape button in the background).
It is not possible to attach a score to a slide; you can only attach it to an interactive button, hence my suggestion of using a Show/Hide workflow for a scored Next button. Can you confirm whether such a Next button is possible? Would a shortcut key be acceptable? After your answer I will try to figure out a workflow and describe it here.
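For anyone following along, the Show/Hide idea reads like this: neither target carries a score; instead, each target's success action reveals one shared, scored Next button, so the slide always contributes exactly one attainable point. A rough JavaScript rendering of the idea, with the caveat that the object name Btn_Next is an assumption and that in Captivate this would normally be built with a Show action in an Advanced Action rather than script; cp.show() is a Captivate HTML5 runtime call seen in community scripts, not part of the documented cpAPIInterface:

```javascript
// Sketch only: Btn_Next is a hypothetical scored button that starts hidden.
function onAnyTargetSuccess() {
  cp.show('Btn_Next');   // reveal the single scored Next button
}
// Neither target is scored; only Btn_Next is. Whichever target the learner
// hits, the slide contributes exactly one attainable point.
```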
There are a couple of ways around the problem you describe (where only one of two scorable objects will ever be clicked, but you don't want the missing score to prevent the user from being given a perfect passing grade).
One easy way is to have your Completion and Pass/Fail awarded on the basis of scored points rather than percentage. If the user only has to gain a specified number of points to pass, then even if they only click one of the two buttons on each of those slides you mention, the missing scores will not cause issues. If you have pass/fail determined by percentage (not points), the problem is that Captivate calculates the final pass/fail based on ALL of the scored objects, and your implementation means the user can never click all of the buttons to reach 100%.
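A quick sanity check of the arithmetic, using the numbers from this thread (24 reported points, two of them sitting on second targets that can never be clicked):

```javascript
const reportedTotal = 24;                          // points Captivate reports for the quiz
const phantomPoints = 2;                           // one unreachable target on each of two slides
const attainableMax = reportedTotal - phantomPoints;            // 22
console.log((attainableMax / reportedTotal * 100).toFixed(2));  // "91.67" - the observed score
// A pass mark in points (e.g. 20, matching the course's "20 of 24 correct"
// rule mentioned later in this thread) is reachable, whereas a 100%
// percentage score never is.
```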
I've suggested the method above (using points) because it can be done with standard Captivate functionality. Another way to achieve what you want is to use the CpExtra widget and a Conditional Advanced Action to force extra scored points to be reported to the quiz. Basically, when the Conditional Action detects the user clicking one of the two buttons on the slide, it uses CpExtra's xcmndScore command variable to increase the quiz score to compensate for the user NOT clicking the other button. This then allows you to set the quiz completion and pass/fail to be determined by percentage score instead.
Command Variables | CpExtra Help (widgetking.github.io)
Please note that the widget is not free. But if the first option I suggested is not acceptable, then perhaps the cost of the widget will be worth it for your project.
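Here is only a rough sketch of the conditional-action logic described above, expressed as JavaScript for clarity; the exact xcmndScore syntax is in the CpExtra documentation linked above. Standard Captivate JavaScript cannot write to the quiz score (which is precisely the gap the widget fills), so the score change stays a comment. v_compensated is a hypothetical user variable; window.cpAPIInterface is Captivate's documented HTML5 interface:

```javascript
// Runs when either of the two targets on the slide is clicked.
function onEitherTargetClicked() {
  var api = window.cpAPIInterface;
  if (Number(api.getVariableValue('v_compensated')) === 0) {  // compensate only once per slide
    api.setVariableValue('v_compensated', 1);
    // Here the Conditional Advanced Action would assign the missing point
    // through CpExtra's xcmndScore command variable, so the reported score
    // becomes 24/24 instead of 22/24.
  }
}
```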
I completely agree with Rod's proposal about CpExtra, although I didn't mention it in my answer.
However, the solution of changing the pass criterion is fine only if you don't want to show the complete results slide to the learner: it will never show the percentage as 100%, even if the learner completed every slide correctly, and the same goes for what is transferred to the LMS. That is why I don't like that solution if you want to keep to Captivate's functionality and use neither CpExtra nor JavaScript.
@RodWard thanks so much for this! Yes, I think changing the pass/fail to points rather than percentage may be the trick I'm looking for. I don't mind paying for widgets if they are needed, but I think the solution you suggested first is the simpler, more straightforward one. I can fix it simply and quickly and still ensure that the slide task contributes to the overall quiz results.
@Lilybiri thanks for your ongoing support and feedback. I think Rod's pass/fail solution is going to be the easiest here. There is no results slide needed for the learner. In fact, users are not really concerned with their score, for the most part, as it doesn't contribute to the overall grade for the course; the simulation is merely a task for which a score of 80% or higher is needed to unlock the next item (via Blackboard's adaptive release). The question was raised, though, because learners were sure they got everything correct - and they did - yet their grades showed less than 100%, and for them the perception mattered. Going the pass/fail route, I think, will suffice.
Thanks again to both of you!
Fine by me, but I hope Blackboard works nicely with a criterion based on points instead of percentages, because it didn't work well in the past when I used it at the university college. Moreover, if the learners have access to their gradebook, they'll still see a percentage which is not 100%. You may remediate that with a workflow I explained in this blog post for another user:
https://blog.lilybiri.com/quiz-replace-score-by-100-percent-or-0-percent
Thanks @Lilybiri - I think what I've decided to do for the short term (keeping Rod's idea in place for the next iteration of the course) is to create a new grading schema in Blackboard where 0-79.9% will show 'fail' and 80-100% will show 'pass', and that's what the learners will see in their grade centre. Also, each simulation comes with explicit directions that they need to get 80% (as well as the specific number they need to get correct) in order to pass. That way I won't have to go in and change any of the quiz reporting settings in my completed Captivate projects just now - not that I'm averse to it, but I just don't have the time at the moment.
My only issue with this, though, is that if learners see just 'fail' in their grade centre, they won't know how much more they needed to pass. Hmm, that concern just came to me now. I could simply hide the simulation columns from the users, though. Again, the simulations don't contribute to their grades; they are just tools in the adaptive release path to get them to the final assessment.
Anyhow, just thinking out loud here. I'll come back to these simulations when I have more time, think about the solutions you both offered, and see what else I can do. Of course the quickest and simplest fix would be to remove those targets from the quiz altogether, but I would like them to be part of it.
Thanks again!!
If the Pass is not important for their grade, why not use a variable to store the 'correct' score, calculate the percentage yourself (subtracting the extra target score from the maximum score), and show that percentage in the simulation? You could still report the Pass/Fail to the LMS, but at least the learner will have seen a correct result for the assessment. Sorry for being so pushy, but I have been teaching for decades and never felt good about not being completely transparent with my students.
This workflow with a user variable would not take a lot of time...
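In the meantime, a minimal sketch of that user-variable workflow as an Execute JavaScript action on the last slide. Assumptions: v_score is a user variable incremented by each success action, v_percent is a user variable created for display, and 22 is the attainable maximum from this thread; window.cpAPIInterface and the $$variable$$ caption syntax are standard Captivate:

```javascript
// Compute the corrected percentage from a user variable, assuming each
// correct click's success action does: v_score = v_score + 1.
var api = window.cpAPIInterface;                      // Captivate HTML5 JS interface
var attainableMax = 22;                               // 24 reported minus 2 phantom points
var score = Number(api.getVariableValue('v_score'));
var pct = Math.round((score / attainableMax) * 100);
api.setVariableValue('v_percent', pct);               // show with $$v_percent$$ in a caption
```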
Hi @Lilybiri - thanks! No worries... I don't feel it's pushy. I've also been an educator for a very long time, and I also believe in transparency, which is why I was thinking out loud about making the score more visible. It's not just that the pass is not important for their grade; it doesn't contribute to it at all. They are told in the instructions - by text before they start the simulation, and by text and voiceover once they start it - that there are 24 clicks and that they need to get 20 of them correct to open the next item. The pass/fail solution in the grade centre is a good one for now for the active sections, but in future I'd like to change it. They have unlimited attempts to get it right, but if they get it wrong it would be nice for them to know by how much, etc.
I like your idea of having the percentage show at the end of the simulation (with the pass/fail still showing in the LMS). If you could help me understand how to do that, I'd be grateful 🙂
Thanks!
OK, I will write a blog post explaining how to use a user variable to calculate the correct percentage acquired by the learner when some of the simulation slides have multiple targets that should lead to only one score. Please be a bit patient. I wonder if it is a good idea to keep this in the same thread.
Hi @Lilybiri - thanks a bunch! No worries... please take your time; I'm not in a hurry, and surely you're busy. I think you could put the link in this same thread, and maybe also start a separate thread with an appropriate title to share the blog post...? That way anyone searching for the same or a similar issue has a better chance of stumbling onto your post 🙂
Thanks!
This is fantastic @Lilybiri - I'm really looking forward to sinking my teeth into this. Thanks, as always, for your great support 🙂
You're welcome. I hope it will solve your problems.
BTW, you may not be aware that it is possible to mark multiple answers as correct, which is especially useful for this type of long thread with mixed questions. The question here is a bit different from the one in the title, which is why it would have been better to start a new thread (as you did before).
Thanks @Lilybiri!