NAP (Sub)Problems and their effects on the LM #2

Open
lambdaTotoro opened this issue Jan 21, 2025 · 6 comments

@lambdaTotoro

In the systems meeting today, we talked about how peer grading (and even instructor grading) currently does not update the LM, and how it should in the new system. The consensus was that we probably want to deal with this on the basis of subproblems (if there are any).

What's currently happening (as documented here) is that the LMP catches events from the frontend that have all the necessary information and updates the LM from that. Here's an example of what that could look like:

{
	"type" : "problem-answer",
	"uri" : "http://mathhub.info/iwgs/quizzes/creative_commons_21.tex",
	"learner" : "ab34efgh",
	"score" : 2.0,
	"max-points" : 2.0,
	"updates" : [{
		"concept" : "http://mathhub.info/smglom/ip/cc-licenses",
		"dimensions" : ["Remember", "Understand", "Evaluate"],
		"quotient" : 1.0
	}],
	"time" : "2023-12-12 16:10:06",
	"payload" : "",
	"comment" : "IWGS Tuesday Quiz 7"
}

The interesting bits are in the list associated with "updates". It allows one event to update multiple concepts differently (flexibility that is required by answer classes), e.g. "this learner has understood DFS correctly but Prolog syntax poorly", and it can include any combination of cognitive dimensions. As of now, the LMP does not look up anything like prerequisites or objectives; it relies only on the list given here (again, because the general information about the problem isn't as precise as answer classes, and those should be what informs the update the most).

The "quotient" is a number between 0 and 1 that reflects the performance in this answer. This can, but does not have to, be just the quotient of score over max-points as a first approximation. But again, maximum flexibility for maximum usefulness of answer classes.

What I would imagine to happen is something along the following lines:

  • Learner submits an answer to a given practice (sub)problem.
  • Grader (peer or instructor) selects answer classes, adjusts points if necessary and enters feedback. (We can and should debate if …)
  • The LMP gets one event per problem if there are no subproblems, otherwise one per subproblem, and updates the learner model.

I'm not exactly sure what parts of this still need to be designed for, but that's my end of it.

@Jazzpirate
Collaborator

Ok, so here is the information iMMT has (because it's given in sTeX)

  • explicitly annotated preconditions (dimension+URI)
  • explicitly annotated objectives (dimension+URI)
  • explicitly annotated number of points per (sub)problem
  • explicitly annotated gnotes, which include the answer classes/traits etc.

It also knows

  • A user-supplied response to an autogradable (sub)problem
  • The correct answer to an autogradable (sub)problem
  • It can compute a "feedback" from both (as seen in the demo yesterday)

The interaction with the LMP would be handled by ALeA, which makes up the majority of the metadata above. The SHTML viewer would interact with ALeA by updating it on user interactions and providing it with the feedback.

The open questions are therefore: what else gets computed, how, and where? E.g.:

  • What is the "quotient" exactly? Is computing it a job for the shtml-viewer or for ALeA? (The answer may or may not depend on: what data is it computed from, exactly?)
  • What about free-text answers for NAPs? Is that an SHTML-viewer thing or an ALeA thing?
  • Answer classes/traits, their descriptions and feedback are all generated by TeX, so they're a fixed part of the resulting SHTML; displaying them should therefore probably be SHTML-viewer functionality (the shtml-viewer knows exactly where they go). The SHTML-viewer could update ALeA on changes there in the same way as it (soon) will for responses to autogradable problems.

@lambdaTotoro
Author

What is the "quotient" exactly?

It's supposed to be a measure of how well the learner performed with respect to the associated concept and cognitive dimensions. In the naive model we're using now, this is the value the entry in the LM will be adjusted towards (i.e., if the current entry is lower than the quotient it will be raised, and if it's higher it will be lowered). This is done so that an answer class can specify things like "the learner has understood quicksort very well but Python syntax very poorly" and such.
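
To make the naive model concrete, here is a small sketch of what "adjusted towards the quotient" could mean, assuming a simple fixed-rate update; the function name and rate are made up, and the actual ALeA update rule may differ.

// Illustrative only: move the current LM entry towards the quotient.
function adjustTowards(current: number, quotient: number, rate = 0.3): number {
  // Raised if current < quotient, lowered if current > quotient.
  return current + rate * (quotient - current);
}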

How exactly answer classes do/would specify this, I do not know. Probably a #NeedsDesign.

[Are free-text answers for NAPs] an SHTML-viewer thing or an ALeA thing?

I feel uncertain of my footing in answering this question. My intuition is that everything that touches the learner model is ALeA-only and not the SHTML viewer, but I don't feel secure in that hunch. Can you speak a bit more about the border between the two, or point me to a place where I can read up?

@Jazzpirate
Collaborator

How exactly answer classes do/would specify this, I do not know. Probably a #NeedsDesign.

^ that's the part of the answer that is relevant to me :D But I'm guessing ALeA is already doing something like that, so for version 0, it would be good enough to know what that is

My intuition is that everything that touches the learner model is only ALeA and not SHTML viewer

That's my intuition as well, but "enter free text here" does not involve the learner model, nor does "forward the response to somewhere"...

@lambdaTotoro
Author

But I'm guessing ALeA is already doing something like that, so for version 0, it would be good enough to know what that is.

That would be a question for Abhishek, but as far as I understand it, it does the following (at least for APs; for NAPs it does nothing, which was the original point):

  • Grade problem.
  • Take the amount of points the learner reached and divide by maximum available points.
  • Find annotated objectives for the problem and take concepts and dimensions from there.
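
Putting those three steps together, a hypothetical version-0 event construction for an autogradable problem could look like the sketch below. The names (Objective, versionZeroEvent) and the assumption that objectives are available as (concept URI, dimensions) pairs are illustrative, not ALeA's actual code.

// Hypothetical version-0 behaviour for APs, following the steps above.
interface Objective { concept: string; dimensions: string[]; }

function versionZeroEvent(
  problemUri: string,
  learner: string,
  score: number,
  maxPoints: number,
  objectives: Objective[], // taken from the problem's annotated objectives
) {
  const quotient = maxPoints > 0 ? score / maxPoints : 0; // points reached / max points
  return {
    type: "problem-answer",
    uri: problemUri,
    learner,
    score,
    "max-points": maxPoints,
    // One update per annotated objective, all with the same quotient;
    // answer classes would later allow per-concept quotients.
    updates: objectives.map(o => ({
      concept: o.concept,
      dimensions: o.dimensions,
      quotient,
    })),
    time: new Date().toISOString(),
    payload: "",
    comment: "",
  };
}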

That's my intuition as well, but "enter free text here" does not involve the learner model [...]

I struggle to envision what "submitting an answer to an NAP" would even mean outside the context of ALeA. Submit to where? To be graded by whom?

@Jazzpirate
Collaborator

Take the amount of points the learner reached and divide by maximum available points.

That needs specifying. I'm guessing for single choice it's straightforwardly all-or-nothing. Answer options in fillinsol I think take an optional pts value(?), otherwise all-or-nothing. What about multiple choice?

Prerequisites are currently not considered, I gather?

I struggle to envision what "submitting an answer to an NAP" would even mean outside the context of ALeA

It would mean "forward the response to somewhere"; probably an arbitrary optional callback function configurable in the SHTML viewer ;) The alternative would be a callback function for ALeA (or other systems) to insert the response field itself.
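
As a sketch of the first option (an arbitrary optional callback configurable in the viewer): the config field name and the endpoint below are hypothetical, not an existing shtml-viewer or ALeA API.

// Hypothetical viewer configuration for forwarding free-text (NAP) responses.
interface ShtmlViewerConfig {
  onNapResponse?: (problemUri: string, response: string) => void;
}

const config: ShtmlViewerConfig = {
  // ALeA (or any other system) decides what to do with the forwarded response,
  // e.g. store it for later peer/instructor grading.
  onNapResponse: (problemUri, response) => {
    void fetch("/alea/nap-responses", { // made-up endpoint, for illustration only
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ problemUri, response }),
    });
  },
};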

@lambdaTotoro
Author

What about multiple choice?

IIRC we take the maximum achievable points, divide by the number of choices, and then multiply by the number of boxes that are ticked (or not ticked) correctly. So in a problem with 4 choices, two of which are correct, you'd get half the points if you tick all boxes and three quarters if you tick both correct answers and only one false one.
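
A short sketch of that rule as described (function name is illustrative): each of the n boxes is worth maxPoints / n, and a box counts if its ticked/unticked state matches the solution.

function multipleChoiceScore(maxPoints: number, correct: boolean[], ticked: boolean[]): number {
  const perBox = maxPoints / correct.length;
  const boxesRight = correct.filter((isCorrect, i) => isCorrect === ticked[i]).length;
  return perBox * boxesRight;
}

// Example from above: 4 choices, 2 of them correct.
// multipleChoiceScore(4, [true, true, false, false], [true, true, true, true])  === 2  (half points)
// multipleChoiceScore(4, [true, true, false, false], [true, true, true, false]) === 3  (three quarters)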

Prerequisites are currently not considered, I gather?

Not in updating the LM after someone submits an answer to a problem, no. At least not that I know of. They are considered for recommending LOs to learners, though.
