
Logging metrics and flags #59

Open
amrdraz opened this issue Sep 10, 2015 · 6 comments

amrdraz commented Sep 10, 2015

challengeTrial and arenaTrial will no longer use mongoose-version; instead, updates to user metrics should be maintained in an associated collection dedicated to logging metrics

currently this should be a mixed type

challenges and arenas should have a flags mixed attribute that has known default settings but can also be set freely in order to account for custom challenges
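
A minimal sketch of how this could look with mongoose (model names, flag keys, and metric fields are illustrative assumptions, not the actual schema):

```ts
import mongoose, { Schema } from "mongoose";

// Challenges/arenas carry a freely settable mixed flags attribute with known defaults.
const ChallengeSchema = new Schema({
  title: String,
  flags: {
    type: Schema.Types.Mixed,
    default: () => ({ timed: false, allowReset: true }), // assumed defaults
  },
});

// Instead of mongoose-version, metric updates are appended to a dedicated log collection.
const MetricLogSchema = new Schema({
  trial: { type: Schema.Types.ObjectId, index: true }, // challengeTrial/arenaTrial id
  metric: String,
  value: Schema.Types.Mixed,
  at: { type: Date, default: Date.now },
});

export const Challenge = mongoose.model("Challenge", ChallengeSchema);
export const MetricLog = mongoose.model("MetricLog", MetricLogSchema);
```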

amrdraz self-assigned this Sep 10, 2015
amrdraz added this to the flags milestone Sep 10, 2015
amrdraz commented Sep 10, 2015

Metrics used in "Discovering Patterns in Student Activity on Programming Assignments" were classified as:

UNIVERSE events describe the state of the universe in which the students are working. They are time-based events that represent fixed points relative to the due date.

• N days before the assignment due date (N = 5, 4, 3, 2, 1)
• assignment due date
• N days after the assignment due date (N = 1, 2, 3)

SUBMISSION events describe how far along a student is in his or her submission sequence.

• Student has made his/her first submission
• Student has made his/her first submission that compiled without errors
• Student has made N% of their eventual total number of submissions for this assignment (N = 25, 50, 75)
• Student has made his/her final submission

METRIC events describe changes that a student has made to his or her code, based on code metrics that are computed during grading.

• # of non-commented lines of (test, non-test) code (increased, decreased) by 5 or more lines
• # of commented lines of (test, non-test) code (increased, decreased) by 5 or more lines
• # of functions/methods in non-test code (increased, decreased) by 1 or more
• # of test cases (increased, decreased) by 1 or more
• Cyclomatic complexity number [6] of non-test code (increased, decreased) by 5 or more
• Ratio of (methods, statements, conditionals) in non-test code (increased, decreased) significantly

OUTCOME events describe a student's score on a particular submission.

• (Correctness, tools, automated) score started at (A, B, C, D, F)
• (Correctness, tools, automated) score increased to (A, B, C, D)
• (Correctness, tools, automated) score decreased to (B, C, D, F)
• (Correctness, tools, automated) score at the end of the assignment was (A, B, C, D, F)
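
One way these four categories could be encoded in our own logs (a sketch; the identifiers are mine, not the paper's):

```ts
// Sketch of the paper's event taxonomy as tagged log entries.
// All string values here are illustrative, not prescribed by the paper.
type UniverseEvent = { kind: "universe"; daysFromDue: number }; // negative = before due date
type SubmissionEvent = {
  kind: "submission";
  milestone: "first" | "first_compiling" | "25%" | "50%" | "75%" | "final";
};
type MetricEvent = {
  kind: "metric";
  measure: "ncloc" | "comment_lines" | "methods" | "test_cases" | "cyclomatic" | "ratio";
  direction: "increased" | "decreased";
};
type OutcomeEvent = {
  kind: "outcome";
  score: "correctness" | "tools" | "automated";
  grade: "A" | "B" | "C" | "D" | "F";
  change: "started" | "increased" | "decreased" | "final";
};
export type StudentEvent = UniverseEvent | SubmissionEvent | MetricEvent | OutcomeEvent;
```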

amrdraz commented Sep 10, 2015

Metrics from another learning system focus on completion time and error frequency,
drawing a learning curve over time
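
A sketch of how one learning-curve point could be derived per attempt (the Attempt shape and field names are assumptions):

```ts
// Sketch: derive one learning-curve point per attempt from completion time
// and error frequency.
interface Attempt {
  startedAt: number;  // ms since epoch
  finishedAt: number; // ms since epoch
  runs: number;       // how many times the code was run
  errors: number;     // runs that ended in an error
}

function learningCurve(attempts: Attempt[]) {
  return attempts.map((a, i) => ({
    attempt: i + 1,
    completionTimeMs: a.finishedAt - a.startedAt,
    errorFrequency: a.runs > 0 ? a.errors / a.runs : 0,
  }));
}
```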

amrdraz commented Sep 10, 2015

Another metric is flow, which can be measured by asking:

a) How challenging did you find the last activity?
b) Were your skills appropriate for understanding this last activity?

on a 5-point scale after each challenge ends.
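
A sketch of how the two answers could be stored on the trial (field names assumed):

```ts
// Sketch of the flow self-report collected after each challenge ends.
// Both answers are on a 1-5 scale.
interface FlowRating {
  challenge: number; // a) how challenging the last activity felt (1-5)
  skill: number;     // b) how appropriate the user's skills felt (1-5)
  answeredAt: Date;
}
```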

amrdraz commented Sep 10, 2015

From the MSLQ, to be administered after the first week, after the first quiz, and before the exam (maybe also around other course events)

Intrinsic Motivation Questions

  1. In a class like this, I prefer course material that really challenges me so I can learn new things.
  2. In a class like this, I prefer course material that arouses my curiosity, even if it is difficult to learn.
  3. The most satisfying thing for me in this course is trying to understand the content as thoroughly as possible.
  4. When I have the opportunity in this class, I choose course assignments that I can learn from even if they don't guarantee a good grade.

Extrinsic Motivation Questions

  1. Getting a good grade in this class is the most satisfying thing for me right now.
  2. The most important thing for me right now is improving my overall grade point average, so my main concern in this class is getting a good grade.
  3. If I can, I want to get better grades in this class than most of the other students.
  4. I want to do well in this class because it is important to show my ability to my family, friends, employer, or others.

amrdraz commented Sep 12, 2015

I will start modifying the challenges to accommodate the new code multiple-choice challenge.
This way I can run assessment at the end of arenas as well as build.

From now on the content of the challenge, other than the publishing-related fields,
should be a blueprint containing the definition of the challenge;
for code it is traditionally [tests, solution, setup, description].
The instance of the code challenge should contain [solution, reports], as sketched below.
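
A sketch of that split (the field names come from the lists above; the types are assumptions):

```ts
// Blueprint: the definition of a code challenge (publishing fields omitted).
interface CodeChallengeBlueprint {
  tests: string;       // test code run against a solution
  solution: string;    // reference solution
  setup: string;       // starter/setup code given to the user
  description: string; // the prompt
}

// Instance: one user's attempt at the challenge.
interface CodeChallengeInstance {
  solution: string;    // the user's current solution
  reports: unknown[];  // test reports collected from submissions
}
```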

In order to track a code challenge I need to log every run and submission,
but also log a set of behaviours/events that get executed, such as writing code, pausing, etc.

To save that in the trial I need to record the history:
with every run I should save the number of lines and errors, and the time
with every debugging session I should save when the debug started and when it stopped, as well as errors
with every submission I should save the test report

For that I would need to log an array of activities on the trial,
persisting the final results in the trial (as far as the game goes, in the form of a score) but retaining all events in an array of activities.

The trial activities are events that occur while debugging

I need to prompt the user for the flow rating at the end of the trial as well, and store that result in the trial itself for now under a metrics field, which is a mixed object of other things we log such as start time, end time, and averages of the activity logs (percentage of errors, etc.)

The events that can be logged can be categorized as follows (see the activity sketch below):
mode
[run, debug, test]
interfaces
[code, console, glossary, truth_table, visualisation, canvas]
events
[idle, typing, reset, run, test, debug.start, debug.step, debug.back, debug.stop, lookup]
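
A sketch of one logged activity under this taxonomy (the extra fields are assumptions):

```ts
// Sketch of a single entry in the trial's activities array.
// mode/ui/event mirror the lists above; startedAt/endedAt/data are assumed.
type Mode = "run" | "debug" | "test";
type UI =
  | "code" | "console" | "glossary" | "truth_table" | "visualisation" | "canvas";
type ActivityEvent =
  | "idle" | "typing" | "reset" | "run" | "test"
  | "debug.start" | "debug.step" | "debug.back" | "debug.stop" | "lookup";

interface Activity {
  mode: Mode;
  ui: UI;
  event: ActivityEvent;
  startedAt: Date;
  endedAt?: Date;                  // set when the event spans time
  data?: Record<string, unknown>;  // e.g. { lines: 42, errors: 1 } on a run
}
```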

Some events need to be coalesced by time proximity (stepping the debugger 10 times makes little sense as 10 separate entries; we can just lump them into one debug.step with a start and end time).
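
A sketch of that lumping, assuming the Activity shape above and an arbitrary 2-second proximity window:

```ts
// Coalesce bursts of the same event (e.g. repeated debug.step) into one entry
// with a start and end time. Assumes activities are in chronological order.
function coalesce(activities: Activity[], gapMs = 2000): Activity[] {
  const out: Activity[] = [];
  for (const a of activities) {
    const last = out[out.length - 1];
    if (
      last &&
      last.event === a.event &&
      a.startedAt.getTime() - (last.endedAt ?? last.startedAt).getTime() <= gapMs
    ) {
      last.endedAt = a.endedAt ?? a.startedAt; // extend the lumped event
    } else {
      out.push({ ...a });
    }
  }
  return out;
}
```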

amrdraz commented Sep 12, 2015

As for code assessment by multiple-choice challenges,
I think we should represent it as [code, question, choices, answer, correct_explanation, incorrect_explanation].
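
A sketch of that representation (field names are from the list above; the types are assumptions):

```ts
// Sketch of a code multiple-choice challenge; answer-as-index is an assumption.
interface CodeMCQChallenge {
  code: string;                   // snippet the question refers to
  question: string;
  choices: string[];
  answer: number;                 // index into choices
  correct_explanation: string;    // shown on a correct answer
  incorrect_explanation: string;  // shown on an incorrect answer
}
```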
