NAP (Sub)Problems and their effects on the LM #2
Ok, so here is the information iMMT has (because it's given in sTeX)
It also knows
The interaction with the LMP would be handled by ALeA, which makes up the majority of the metadata above. The SHTML viewer would interact with ALeA by updating it on user interactions and providing it with the feedback. The open questions are therefore: what else gets computed, how, and where? E.g.
It's supposed to be a measure of how well the learner performed with respect to the associated concept and cognitive dimensions. In the naive model we're using now, this is the value the entry in the LM will be adjusted towards (i.e. if the entry is lower than the quotient, it will be raised, and if it's higher, it will be lowered). This is done so that an answer class can specify things like "the learner has understood quicksort very well but Python syntax very poorly" and such. How exactly answer classes do/would specify this, I do not know. Probably a #NeedsDesign.
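A minimal sketch of that naive rule, assuming a flat map as the LM and an invented step width `alpha` (neither is the actual ALeA data structure):

```typescript
// Hypothetical learner model: one competency value in [0, 1] per
// (concept, cognitive dimension) pair.
type LearnerModel = Map<string, number>;

// Naive update: move the stored entry a fraction `alpha` of the way
// towards the quotient reported by the answer class, so it rises if it
// is below the quotient and falls if it is above.
function applyUpdate(
  lm: LearnerModel,
  concept: string,
  dimension: string,
  quotient: number, // target value in [0, 1]
  alpha = 0.3,      // step width; an invented tuning parameter
): void {
  const key = `${concept}:${dimension}`;
  const current = lm.get(key) ?? 0;
  lm.set(key, current + alpha * (quotient - current));
}
```

An answer class like the quicksort/Python example above would then simply emit two such updates with different quotients.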
I feel uncertain of my footing in answering this question. My intuition is that everything that touches the learner model is only ALeA and not the SHTML viewer, but I don't feel secure in that hunch. Can you speak a bit more on the border between the two or point me to a place where I can read up?
^ that's the part of the answer that is relevant to me :D But I'm guessing ALeA is already doing something like that, so for version 0 it would be good enough to know what that is.
That's my intuition as well, but "enter free text here" does not involve the learner model, nor does "forward the response to somewhere"...
That would be a question for Abhishek, but as far as I understand it, it does the following (at least for APs; for NAPs it does nothing, which was the original point):
I struggle to envision what "submitting an answer to an NAP" would even mean outside the context of ALeA. Submit to where? To be graded by whom?
That needs specifying. I'm guessing for single choice, it's straightforwardly all-or-nothing. Answer options in fillinsol I think take an optional …

Prerequisites are currently not considered, I gather?
it would mean "forward the response to somewhere"; probably an arbitrary optional callback function configurable in the SHTML viewer ;) The alternative would be a callback function for ALeA (or other systems) to insert the reponse field itself. |
IIRC we take the maximum achievable points divided by the number of choices and then multiplied by the number of boxes that are ticked (or not ticked) correctly. So in a problem with 4 choices, two of which are correct, you'd get half points if you tick all boxes and three quarters if you tick both correct answers and only one false one.
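In code, the described scheme would look roughly like this (a sketch of that rule, not the actual ALeA grader):

```typescript
// Partial credit for multiple choice: each box that is in the correct
// state (ticked iff it should be ticked) earns maxPoints / #choices.
function scoreMultipleChoice(
  correct: boolean[], // which boxes should be ticked
  ticked: boolean[],  // which boxes the learner actually ticked
  maxPoints: number,
): number {
  const perBox = maxPoints / correct.length;
  let right = 0;
  for (let i = 0; i < correct.length; i++) {
    if (correct[i] === ticked[i]) right++;
  }
  return perBox * right;
}

// The examples from above: 4 choices, the first two are correct.
const correct = [true, true, false, false];
scoreMultipleChoice(correct, [true, true, true, true], 1);  // 0.5: all boxes ticked
scoreMultipleChoice(correct, [true, true, true, false], 1); // 0.75: one false tick
```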
Not in updating the LM after someone submits an answer to a problem, no. At least not that I know of. They are considered for recommending LOs to learners, though.
In the systems meeting today, we talked about how peer grading (even instructor grading) currently does not update the LM, and how it should in the new system. Consensus was that we probably want to deal with this on the basis of subproblems (if there are any).
What's currently happening (as documented here) is that the LMP catches events from the frontend that carry all the necessary information and updates the LM from that. Here's an example of what that could look like:
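A hypothetical payload matching the description below; the concrete field names (`learnerId`, `problemUri`, `concept`, `dimension`) are assumptions, only the `updates` list and the quotient semantics come from this discussion:

```typescript
// Hypothetical LM-update event as the LMP might receive it from the
// frontend. One event can carry several updates, one per
// (concept, cognitive dimension) pair that the answer class addresses.
interface LmUpdateEvent {
  learnerId: string;
  problemUri: string;
  updates: {
    concept: string;   // URI of the concept being updated
    dimension: string; // cognitive dimension, e.g. "understand", "apply"
    quotient: number;  // target value in [0, 1]
  }[];
}

const example: LmUpdateEvent = {
  learnerId: "learner-123",
  problemUri: "https://example.org/problems/dfs-in-prolog",
  updates: [
    { concept: "https://example.org/concepts/DFS",
      dimension: "understand", quotient: 0.9 },
    { concept: "https://example.org/concepts/prolog-syntax",
      dimension: "apply", quotient: 0.2 },
  ],
};
```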
The interesting bits are in the list associated with "updates". This allows one event to update multiple concepts differently (flexibility that's required by answer classes), e.g. "This learner has understood DFS correctly but Prolog syntax poorly", and it can include any combination of cognitive dimensions as well. As of now, the LMP does not look up anything like prerequisites or objectives; it only relies on the list given here (again, because the general information about the problem isn't as precise as answer classes, and those should be what informs the update the most).

The "quotient" is a number between 0 and 1 that reflects the performance in this answer. It can, but does not have to, be just the quotient of score over max-points as a first approximation. But again: maximum flexibility for maximum usefulness of answer classes.
What I would imagine happening is something along the following lines:
I'm not exactly sure which parts of this still need to be designed, but that's my end of it.