
User can report answer in peer review, teacher can view reports in manual review #1364

Open · wants to merge 10 commits into base: master

Conversation

@Maijjay Maijjay commented Jan 13, 2025

Summary by CodeRabbit

Based on the comprehensive summary, here are the updated release notes:

  • New Features

    • Added ability to flag and report peer review answers.
    • Introduced a threshold system for flagged answers in courses.
    • Implemented a reporting dialog for inappropriate content.
    • Added a new component to display flagged peer review reports.
    • Introduced new localization entries for reporting functionality.
  • Improvements

    • Enhanced peer review moderation capabilities.
    • Expanded course management with flagging options.
    • Improved filtering of submissions for peer review by excluding flagged answers.
  • User Experience

    • Users can now report answers as spam, harmful, or AI-generated.
    • Instructors can set a threshold for moving flagged answers to manual review.
    • Improved transparency in peer review processes.
    • New input field for setting the flagging threshold in course management.


coderabbitai bot commented Jan 13, 2025

Walkthrough

This pull request introduces functionality for reporting answers within the peer review process of the learning management system. Key changes include a new flagged_answers table, modifications to existing models to accommodate flagged answers, and frontend enhancements for user interaction. Users can now report inappropriate answers, and each course has a configurable threshold at which a flagged answer is moved to manual review.

Changes

File/Group | Change Summary
Database Migrations | Added flagged_answers table, flag_count column to exercise_slide_submissions, flagged_answers_threshold column to courses
Backend Models | Introduced FlaggedAnswer and NewFlaggedAnswer structs, added methods for inserting and retrieving flagged answers, updated existing models to include new fields
Frontend Components | Created MarkAsSpamDialog, enhanced PeerOrSelfReviewView with reporting functionality, added reporting button and dialog
Localization | Added translations for reporting reasons, labels, and dialog text in English and Finnish
Tests | Introduced automated tests for marking answers as spam in peer reviews

Sequence Diagram

sequenceDiagram
    participant User
    participant PeerReviewView
    participant ReportDialog
    participant Backend
    participant Database

    User->>PeerReviewView: Click "Report" button
    PeerReviewView->>ReportDialog: Open dialog
    User->>ReportDialog: Select reason and optional description
    User->>ReportDialog: Submit report
    ReportDialog->>Backend: POST flag answer request
    Backend->>Database: Insert flagged answer
    Backend->>Database: Increment flag count
    alt Flag count exceeds threshold
        Backend->>Database: Update submission review stage
    end
    Backend->>User: Confirmation
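
The "POST flag answer request" step above carries a NewFlaggedAnswer payload. A rough TypeScript sketch of its shape, with field names taken from the diffs reviewed below (the exact nullability of each field is a guess):

// Sketch of the request body sent when flagging an answer.
// Field names follow the NewFlaggedAnswer binding discussed in this review;
// the types here are illustrative, not copied from bindings.ts.
interface NewFlaggedAnswerSketch {
  submission_id: string // UUID of the exercise slide submission being reported
  reason: string // e.g. spam, harmful content, AI-generated
  description: string | null // optional free-text explanation from the reporter
  flagged_user: string | null // filled in server-side from the submission
  flagged_by: string | null // filled in server-side from the authenticated user
}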

Poem

🐰 A Rabbit's Ode to Reporting

In peer reviews, where answers flow,
A flagging feature starts to grow.
With thresholds set and reasons clear,
We keep our learning spaces dear! 🚩

Hop, hop, report with gentle care! 🌈


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cee95ee and 1d5891c.

📒 Files selected for processing (3)
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/FlaggedPeerReviewAccordion.tsx (1 hunks)
  • system-tests/src/tests/mark-as-spam.spec.ts (1 hunks)
  • system-tests/src/tests/peer-reviews/manual-review-everything.spec.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/FlaggedPeerReviewAccordion.tsx
  • system-tests/src/tests/mark-as-spam.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build
  • GitHub Check: build-and-deploy
  • GitHub Check: headless_lms
🔇 Additional comments (2)
system-tests/src/tests/peer-reviews/manual-review-everything.spec.ts (2)

93-93: LGTM! The added .first() selector improves test reliability.

The change ensures consistent selection of the first matching exercise row, preventing potential flakiness when multiple rows are present.


Line range hint 1-140: Test coverage missing for new reporting functionality.

Given that this PR introduces answer reporting functionality in peer reviews, this test file should include additional test cases to verify:

  1. Users can report answers during peer review (a rough sketch follows this list)
  2. Teachers can view reported answers in manual review
  3. The reporting threshold mechanism works as expected
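
A rough sketch of the first case, reusing the project's Playwright setup from system-tests; the URL, selectors, and label texts below are illustrative assumptions rather than the actual page content:

import { expect, test } from "@playwright/test"

test("user can report an answer during peer review", async ({ page }) => {
  // Illustrative navigation; a real test would reuse the course and exercise
  // setup from the surrounding spec files.
  await page.goto("http://project-331.local/org/uh-cs/courses/peer-review-course")

  // Open the report dialog for the answer under peer review.
  await page.getByRole("button", { name: "Report" }).click()

  // Choose a reason and submit (labels are assumptions for illustration).
  await page.getByLabel("Spam").check()
  await page.getByLabel("Description").fill("This answer looks machine generated")
  await page.getByRole("button", { name: "Submit" }).click()

  // Expect some confirmation that the report went through.
  await expect(page.getByText("Answer reported")).toBeVisible()
})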

It would also be worth checking whether other test files already cover this functionality.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

🔭 Outside diff range comments (3)
services/headless-lms/models/.sqlx/query-136d766e505e02ad02623be6ee9a3bba4f1bbc94ecfddacc7d3e2438c31278fd.json (1)

Line range hint 3-23: Specify default value for flag_count in INSERT query.

The INSERT query doesn't specify a default value for the non-nullable flag_count column. This could lead to SQL errors if the database doesn't have a DEFAULT constraint.

Apply this diff to set a default value of 0:

 INSERT INTO exercise_slide_submissions (
     exercise_slide_id,
     course_id,
     course_instance_id,
     exam_id,
     exercise_id,
     user_id,
-    user_points_update_strategy
+    user_points_update_strategy,
+    flag_count
   )
-VALUES ($1, $2, $3, $4, $5, $6, $7)
+VALUES ($1, $2, $3, $4, $5, $6, $7, 0)
services/headless-lms/models/.sqlx/query-630403a014992249ea36c52d3611b6c3617b6ca2d7aa07de995feaee3e16e22a.json (1)

Line range hint 3-24: Consider adding value constraints for threshold

The Int4 type allows negative values, which wouldn't make sense for a threshold. Consider adding a CHECK constraint in the database schema.

Example constraint to add in the migration:

ALTER TABLE courses ADD CONSTRAINT check_flagged_answers_threshold 
CHECK (flagged_answers_threshold >= 0);

Also applies to: 105-109, 135-139

services/headless-lms/models/.sqlx/query-903b6bb1bf376d99638a2bf42751aeaa42b8bcf77a79c96262060cad5d4c6cc3.json (1)

Column order inconsistency found in one query

The column order is consistent across most queries, except in query-75146edf141f538da63a12b252133490abd01b91cc1e21a5a19f1589d1d77ecf.json where:

  • slug is selected right after id
  • name is moved after description
  • can_add_chatbot is moved after base_module_completion_requires_n_submodule_completions
🔗 Analysis chain

Line range hint 3-24: Verify column order consistency

The column order in this query matches the others, maintaining consistency. However, let's verify this across all course-related queries.

Also applies to: 105-109, 135-139

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check column order consistency across all course-related queries
rg -A 5 "SELECT.*FROM courses" services/headless-lms/models/.sqlx/

Length of output: 24073

🧹 Nitpick comments (23)
services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/FlaggedPeerReviewAccordion.tsx (3)

10-13: Add JSDoc documentation for the interface.

Consider adding JSDoc documentation to describe the purpose of the component and its props.

+/**
+ * Props for the FlaggedPeerReviewAccordion component.
+ * @param reports - Array of flagged answers to display
+ * @param title - Title to display in the accordion header
+ */
 export interface FlaggedPeerReviewAccordionProps {
   reports: Array<FlaggedAnswer>
   title: string
 }

32-48: Extract badge styling to a reusable styled component.

The badge styling is currently defined inline. Consider extracting it to a reusable styled component for better maintainability and consistency.

+const ReportBadge = styled.span`
+  background: ${baseTheme.colors.green[400]};
+  border-radius: 20px;
+  line-height: 10px;
+  padding: 1px 5px;
+  text-align: center;
+  font-size: 14px;
+  color: ${baseTheme.colors.primary[100]};
+  margin-left: 3px;
+  width: 20px;
+  height: 20px;
+`

 <summary>
   {title}
-  <span
-    className={css`
-      background: ${baseTheme.colors.green[400]};
-      border-radius: 20px;
-      line-height: 10px;
-      padding: 1px 5px;
-      text-align: center;
-      font-size: 14px;
-      color: ${baseTheme.colors.primary[100]};
-      margin-left: 3px;
-      width: 20px;
-      height: 20px;
-    `}
-  >
+  <ReportBadge>
     {reports.length}
-  </span>
+  </ReportBadge>

76-78: Extract date formatting to a utility function.

Consider moving the date formatting logic to a reusable utility function for consistency across the application.

+// In shared utils/date.ts
+export const formatDateTime = (date: string | Date): string => {
+  return new Date(date).toLocaleString()
+}

-{new Date(report.created_at).toLocaleString()}
+{formatDateTime(report.created_at)}
services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx (1)

14-18: Consider renaming component and improving type safety.

  1. The component name MarkAsSpamDialog could be more generic since it handles various types of reports (spam, harmful content, AI-generated).
  2. The props interface should be explicitly defined.
+interface ReportDialogProps {
+  isOpen: boolean
+  onClose: () => void
+  onSubmit: (reason: string, description: string) => void
+}

-const MarkAsSpamDialog: React.FC<{
-  isOpen: boolean
-  onClose: () => void
-  onSubmit: (reason: string, description: string) => void
-}> = ({ isOpen, onClose, onSubmit }) => {
+const ReportDialog: React.FC<ReportDialogProps> = ({ isOpen, onClose, onSubmit }) => {
services/main-frontend/src/components/page-specific/manage/courses/id/index/UpdateCourseForm.tsx (1)

156-165: Enhance UX for the threshold input field.

Consider the following improvements:

  1. Add a maximum value constraint to prevent unreasonable thresholds
  2. Add a tooltip explaining how the threshold affects the review process
 <TextField
   type={"number"}
   min={0}
+  max={100}
   value={peerReviewFlaggingThreshold}
   label={t("label-threshold-to-move-flagged-answer-to-manual-review")}
+  tooltip={t("tooltip-flagging-threshold-explanation")}
   onChangeByValue={(value) => {
     setPeerReviewFlaggingThreshold(Number(value))
   }}
 />
services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs (1)

92-92: Consider extracting the flagged answers threshold to a constant.

The value Some(5) is repeated multiple times. To improve maintainability and ensure consistency, consider extracting it to a constant at the module level.

+const DEFAULT_FLAGGED_ANSWERS_THRESHOLD: Option<i32> = Some(5);

 pub async fn seed_organization_uh_mathstat(
     // ... existing code ...
 ) -> anyhow::Result<Uuid> {
     // ... existing code ...
     let new_course = NewCourse {
         // ... other fields ...
-        flagged_answers_threshold: Some(5),
+        flagged_answers_threshold: DEFAULT_FLAGGED_ANSWERS_THRESHOLD,
     };

Also applies to: 142-142, 174-174, 226-226

services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (2)

134-149: Consider tracking user information for better accountability.

The mutation sends null for both flagged_user and flagged_by. Consider tracking this information for audit purposes and accountability.

 return await postFlagAnswerInPeerReview(exerciseId, {
   submission_id: peerOrSelfReviewData.answer_to_review.exercise_slide_submission_id,
   reason,
   description,
-  flagged_user: null,
-  flagged_by: null,
+  flagged_user: peerOrSelfReviewData.answer_to_review.user_id,
+  flagged_by: loginStateContext.user?.id,
 })

151-153: Add input validation and success feedback.

The submit handler could benefit from:

  1. Input validation for reason and description
  2. User feedback on successful submission
 const handleReportSubmit = (reason: string, description: string) => {
+  if (!reason.trim()) {
+    return // Add error toast
+  }
   reportMutation.mutate({ reason, description }, {
+    onSuccess: () => {
+      setIsReportDialogOpen(false)
+      // Add success toast
+    }
   })
 }
services/headless-lms/models/src/chapters.rs (1)

624-624: Document the significance of the threshold value.

The test uses a threshold value of 5, but its significance is not documented. Consider adding a comment explaining why this specific value was chosen for testing.

+                    // Default threshold of 5 flags triggers manual review
                     flagged_answers_threshold: Some(5),
services/headless-lms/server/src/controllers/course_material/exercises.rs (1)

369-474: Implementation looks good, with some suggestions for improvement.

The flagging functionality is implemented correctly with proper error handling and state transitions. However, consider the following improvements:

  1. The course ID validation could be consolidated to reduce code duplication:
let course_id = flagged_submission_data.course_id.ok_or_else(|| {
    ControllerError::new(
        ControllerErrorType::BadRequest,
        "Course ID not found for the submission.".to_string(),
        None,
    )
})?;
  2. The error messages could be more descriptive, e.g., "Failed to report answer" could be "Failed to insert flagged answer record in database".
services/headless-lms/models/src/exercise_slide_submissions.rs (2)

150-158: Consider making flagged_user and flagged_by required fields.

The NewFlaggedAnswer struct makes these fields optional, but they seem like they should be required for proper tracking and accountability.

-    pub flagged_user: Option<Uuid>,
-    pub flagged_by: Option<Uuid>,
+    pub flagged_user: Uuid,
+    pub flagged_by: Uuid,

196-214: Consider adding a transaction for atomicity.

The increment_flag_count function should probably be wrapped in a transaction to ensure atomicity of the update operation.

 pub async fn increment_flag_count(
     conn: &mut PgConnection,
     submission_id: Uuid,
 ) -> ModelResult<i32> {
+    let mut tx = conn.begin().await?;
     let result = sqlx::query!(
         r#"
         UPDATE exercise_slide_submissions
         SET flag_count = flag_count + 1,
             updated_at = NOW()
         WHERE id = $1
         RETURNING flag_count
         "#,
         submission_id
     )
-    .fetch_one(conn)
+    .fetch_one(&mut *tx)
     .await?;
 
+    tx.commit().await?;
     Ok(result.flag_count)
 }
shared-module/packages/common/src/bindings.guard.ts (1)

1051-1053: Consider setting a default flagged_answers_threshold.

Making flagged_answers_threshold nullable without a default value could lead to inconsistent flagging behavior across courses. Consider either:

  1. Making the field required with a sensible default, or
  2. Implementing a system-wide default threshold when the course-specific threshold is null (see the sketch below)
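
A minimal sketch of option 2 on the TypeScript side; the constant, its value, and the helper name are assumptions for illustration, not part of the PR:

// Fall back to a system-wide default when a course has no threshold configured.
// DEFAULT_FLAGGED_ANSWERS_THRESHOLD is a hypothetical constant, not from the PR.
const DEFAULT_FLAGGED_ANSWERS_THRESHOLD = 5

export function effectiveFlaggedAnswersThreshold(courseThreshold: number | null): number {
  return courseThreshold ?? DEFAULT_FLAGGED_ANSWERS_THRESHOLD
}

// Usage: a course with no configured threshold falls back to the default.
const threshold = effectiveFlaggedAnswersThreshold(null) // 5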
services/headless-lms/models/.sqlx/query-6fd565594531579c6b7925befe5013abd5d0ef5aad4af21336dee486f5ad5d27.json (1)

1-58: Consider optimizing the flagged answers query.

  1. Add an index on (submission_id, deleted_at) to improve query performance:
CREATE INDEX idx_flagged_answers_submission_deleted 
ON flagged_answers(submission_id) 
WHERE deleted_at IS NULL;
  2. Add explicit ordering to ensure consistent results:
 WHERE submission_id = $1
   AND deleted_at IS NULL
+ORDER BY created_at DESC
services/headless-lms/models/.sqlx/query-28ceed91e05d620f85da1a7cb9c763b2e7351e74f7f6e0c6cfe65816b6de1801.json (1)

Line range hint 3-20: Consider adding indexes for performance optimization.

The query uses DISTINCT ON (user_id) with complex conditions and ordering. Consider adding a composite index to improve performance.

Add the following index to your migration file:

CREATE INDEX idx_exercise_slide_submissions_user_exercise
ON exercise_slide_submissions (user_id, exercise_id, created_at DESC)
WHERE deleted_at IS NULL;
services/headless-lms/models/.sqlx/query-42f0e5ed7fe9d9edd3ec6b40c9c1a72d176753e127aec5795e6c4bd8a14b0b8f.json (1)

Line range hint 3-19: Consider adding indexes for paginated queries.

The query uses DISTINCT ON (user_id) with pagination. Consider adding a composite index to improve performance of paginated results.

Add the following index to your migration file:

CREATE INDEX idx_exercise_slide_submissions_exercise_user
ON exercise_slide_submissions (exercise_id, user_id, created_at DESC)
WHERE deleted_at IS NULL;
services/headless-lms/server/src/programs/seed/seed_courses.rs (1)

100-100: Consider making the threshold value configurable.

The threshold value is hardcoded to Some(5) across all test courses. Consider making it configurable through a constant or parameter to allow for easier testing of different threshold values.

+ const DEFAULT_FLAG_THRESHOLD: i32 = 5;

// In each function:
-  flagged_answers_threshold: Some(5),
+  flagged_answers_threshold: Some(DEFAULT_FLAG_THRESHOLD),

Also applies to: 2036-2036, 2164-2164, 3045-3045

services/headless-lms/server/src/test_helper.rs (1)

193-193: Consider making the flagged answers threshold configurable.

While adding the threshold is good, hardcoding Some(5) might limit testing different scenarios. Consider making this value configurable through the macro parameters to enable testing various threshold cases.

-                flagged_answers_threshold: Some(5),
+                flagged_answers_threshold: $threshold.unwrap_or(Some(5)),
services/headless-lms/models/.sqlx/query-3ef4b03ab1b23a15038e84194badf23a44dffcf34e70ac859a0d97c949fbb50b.json (1)

3-3: LGTM! Consider adding an index for performance.

The query changes look good and properly support pagination. Since this query filters by exercise_id and deleted_at, consider adding a composite index to improve query performance, especially when dealing with large numbers of submissions.

Consider adding the following index:

CREATE INDEX idx_exercise_slide_submissions_exercise_deleted 
ON exercise_slide_submissions(exercise_id) 
WHERE deleted_at IS NULL;

Also applies to: 70-74, 80-80

services/headless-lms/models/.sqlx/query-9620fbb3cbb3593dd4599f8872a377fe8beb392895c8e733f914b9c566e9847e.json (1)

3-3: Add ORDER BY clause for consistent results

The query retrieves submissions without specifying an order. Consider adding ORDER BY created_at DESC to ensure consistent results across multiple calls, similar to other queries in the codebase.

 SELECT id,
   created_at,
   updated_at,
   deleted_at,
   exercise_slide_id,
   course_id,
   course_instance_id,
   exam_id,
   exercise_id,
   user_id,
   user_points_update_strategy AS "user_points_update_strategy: _",
   flag_count
 FROM exercise_slide_submissions
 WHERE user_id = $1
   AND (course_instance_id = $2 OR exam_id = $3)
-  AND deleted_at IS NULL
+  AND deleted_at IS NULL
+ORDER BY created_at DESC
services/headless-lms/models/.sqlx/query-dfcbf0ef22c9cc2528c66b7fcd0c8189f6d504f3d81f12075b387a6128fb9d7b.json (1)

3-3: Document the threshold's purpose in the query

Consider adding a comment in the SQL query to explain the purpose and impact of the flagged_answers_threshold column (e.g., "Number of flags that trigger manual review").

-  "query": "\nSELECT courses.id,\n
+  "query": "\n-- flagged_answers_threshold: Number of flags that trigger manual review\nSELECT courses.id,\n
services/headless-lms/models/.sqlx/query-9cb1870d45712289597e4b0b278bfb502a864aa907e9bbcfb0b113f0e68f75bd.json (1)

3-3: Consider index optimization for DISTINCT query performance

The query uses DISTINCT with multiple joins which could impact performance with large datasets. Consider adding a composite index on (organization_id, course_id) if query performance becomes a concern.

services/headless-lms/models/.sqlx/query-0020f8d1577d278505f750f314f45dd74fd18a15546339eb2694d0f5285d8c72.json (1)

3-3: Consider materializing common subquery results

The query uses multiple subqueries to check user access. Consider materializing the results of common subqueries into a CTE (WITH clause) for better readability and potentially better query planning.

Example refactor:

WITH user_course_roles AS (
    SELECT course_id
    FROM roles
    WHERE deleted_at IS NULL
      AND user_id = $1
      AND course_id IS NOT NULL
),
user_instance_roles AS (
    SELECT ci.course_id
    FROM course_instances ci
      JOIN roles r ON r.course_instance_id = ci.id
    WHERE r.user_id = $1
      AND r.deleted_at IS NULL
      AND ci.deleted_at IS NULL
)
SELECT /* rest of the columns */
FROM courses
WHERE courses.deleted_at IS NULL
  AND (
    id IN (SELECT course_id FROM user_course_roles)
    OR id IN (SELECT course_id FROM user_instance_roles)
  )
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5a70a2e and 81a9db8.

📒 Files selected for processing (55)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (5 hunks)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx (1 hunks)
  • services/course-material/src/services/backend.ts (2 hunks)
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.down.sql (1 hunks)
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql (1 hunks)
  • services/headless-lms/models/.sqlx/query-0020f8d1577d278505f750f314f45dd74fd18a15546339eb2694d0f5285d8c72.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-049f398260c49db909c2fb054a9beca4f06d3b2d921ce78dd9f52ce0fdf9ac06.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-136d766e505e02ad02623be6ee9a3bba4f1bbc94ecfddacc7d3e2438c31278fd.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-17bf0f2c956a3aa06506ba05c18e1b15b8423cb99f8f7b3d7ffa1e3ef81945c5.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-18130e6045b555e04c07b2a0efa2d7da271236b44fd90da911bbce79e4bcf242.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-28ceed91e05d620f85da1a7cb9c763b2e7351e74f7f6e0c6cfe65816b6de1801.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-2b169ac8547e42b1afa42278b7f1c00c50c734e7d8b6d690914526699ec7a3a6.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-32265e9d7760402be8ffbe8ac3dd909142255676cfb06e9bbbbbc753a6fff391.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-373b37e6ff5d338ba1f1693bf807a3e396753077888a809c81f137f020ba88bb.json (1 hunks)
  • services/headless-lms/models/.sqlx/query-3ef4b03ab1b23a15038e84194badf23a44dffcf34e70ac859a0d97c949fbb50b.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-42f0e5ed7fe9d9edd3ec6b40c9c1a72d176753e127aec5795e6c4bd8a14b0b8f.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-4df8e4be76b9f6575e6abcd0bb74895bd3576071d702d184e8cdd8c0e366d1e0.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-588f8c05914b1077dee418b21038791db03ae1584540a66d065d05e88556d60f.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-630403a014992249ea36c52d3611b6c3617b6ca2d7aa07de995feaee3e16e22a.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-63046830fbee7ede5ea5cae981888755dc19e2f9d50b726d224803c7a2443e0f.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-6fd565594531579c6b7925befe5013abd5d0ef5aad4af21336dee486f5ad5d27.json (1 hunks)
  • services/headless-lms/models/.sqlx/query-75146edf141f538da63a12b252133490abd01b91cc1e21a5a19f1589d1d77ecf.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-78e1728e1b7cc2e18bf53f4ad4584dbe18644a58c1adc9128be205fcd9813cc8.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-8abb8e7d9d09a78bb847d279c5bba07acf7aafd0885001735d66da49513c5811.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-8f7393748c9ef21fc9d2664ea18d2d16cb6ebe750a716912745f1a330406f29b.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-903b6bb1bf376d99638a2bf42751aeaa42b8bcf77a79c96262060cad5d4c6cc3.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-9620fbb3cbb3593dd4599f8872a377fe8beb392895c8e733f914b9c566e9847e.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-9cb1870d45712289597e4b0b278bfb502a864aa907e9bbcfb0b113f0e68f75bd.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-caeaf4c59be4dde42812fa54f4e9753a257a787b1ffe8b01497373d5f1778244.json (1 hunks)
  • services/headless-lms/models/.sqlx/query-db37586d48fd6e45fc213da9ea7d8026fff3c6f54237beed79c168a2ffcd9a2f.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-dfcbf0ef22c9cc2528c66b7fcd0c8189f6d504f3d81f12075b387a6128fb9d7b.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-fd00896e8f130c629eb04465f1767a33b9117449d44e11d3b68ec601cb62659f.json (2 hunks)
  • services/headless-lms/models/src/chapters.rs (1 hunks)
  • services/headless-lms/models/src/courses.rs (18 hunks)
  • services/headless-lms/models/src/exams.rs (1 hunks)
  • services/headless-lms/models/src/exercise_slide_submissions.rs (12 hunks)
  • services/headless-lms/models/src/library/copying.rs (2 hunks)
  • services/headless-lms/models/src/library/grading.rs (4 hunks)
  • services/headless-lms/models/src/test_helper.rs (1 hunks)
  • services/headless-lms/server/src/controllers/course_material/exercises.rs (3 hunks)
  • services/headless-lms/server/src/programs/seed/seed_courses.rs (5 hunks)
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_cs.rs (1 hunks)
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs (4 hunks)
  • services/headless-lms/server/src/test_helper.rs (1 hunks)
  • services/headless-lms/server/src/ts_binding_generator.rs (1 hunks)
  • services/headless-lms/server/tests/study_registry_test.rs (1 hunks)
  • services/main-frontend/src/components/page-specific/manage/courses/id/index/UpdateCourseForm.tsx (3 hunks)
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/AnswersRequiringAttentionItem.tsx (2 hunks)
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/FlaggedPeerReviewAccordion.tsx (1 hunks)
  • shared-module/packages/common/src/bindings.guard.ts (8 hunks)
  • shared-module/packages/common/src/bindings.ts (6 hunks)
  • shared-module/packages/common/src/locales/en/course-material.json (5 hunks)
  • shared-module/packages/common/src/locales/en/main-frontend.json (3 hunks)
  • shared-module/packages/common/src/locales/fi/course-material.json (5 hunks)
  • shared-module/packages/common/src/locales/fi/main-frontend.json (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.down.sql
👮 Files not reviewed due to content moderation or server errors (4)
  • services/headless-lms/models/src/test_helper.rs
  • services/headless-lms/models/.sqlx/query-8f7393748c9ef21fc9d2664ea18d2d16cb6ebe750a716912745f1a330406f29b.json
  • services/headless-lms/models/.sqlx/query-4df8e4be76b9f6575e6abcd0bb74895bd3576071d702d184e8cdd8c0e366d1e0.json
  • services/headless-lms/models/.sqlx/query-8abb8e7d9d09a78bb847d279c5bba07acf7aafd0885001735d66da49513c5811.json
🔇 Additional comments (42)
shared-module/packages/common/src/locales/fi/main-frontend.json (2)

341-341: LGTM! Clear and consistent translations for reporting functionality.

The Finnish translations for reporting-related labels are concise, clear, and follow the established naming conventions.

Also applies to: 373-374, 377-377


391-391: LGTM! Clear translation for the threshold configuration.

The Finnish translation for the threshold configuration label is descriptive and follows the established conventions.

services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/FlaggedPeerReviewAccordion.tsx (1)

1-9: LGTM! Clean and well-organized imports.

All imports are necessary and properly organized.

services/headless-lms/server/tests/study_registry_test.rs (1)

168-168: Add test cases for flagged answers threshold.

Consider adding test cases to verify:

  1. Course creation with different threshold values
  2. Course creation with None threshold (should use default)
  3. Course update with threshold changes

Would you like me to help create these additional test cases?

services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/AnswersRequiringAttentionItem.tsx (1)

12-12: LGTM! Component integration looks clean and well-structured.

The FlaggedPeerReviewAccordion is appropriately placed alongside other review components, maintaining a consistent UI structure.

Also applies to: 436-445
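
For reference, a usage sketch of the new accordion based on the reports/title props shown earlier in this review; the import paths, default export, and surrounding component are assumptions:

import React from "react"

// Both import paths are illustrative guesses at the project layout.
import FlaggedPeerReviewAccordion from "./FlaggedPeerReviewAccordion"
import { FlaggedAnswer } from "@/shared-module/common/bindings"

// Renders a collapsible section whose header shows the number of reports.
const ReportedAnswersSection: React.FC<{ reports: FlaggedAnswer[] }> = ({ reports }) => (
  <FlaggedPeerReviewAccordion reports={reports} title="Reported by peer reviewers" />
)

export default ReportedAnswersSection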

services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (2)

3-3: LGTM! Well-structured imports and state management.

The new imports and state variable are logically organized and follow React conventions.

Also applies to: 12-12, 18-18, 44-44


335-353: LGTM! Clean UI implementation.

The report button and dialog integration are well-structured and follow React best practices.

services/headless-lms/server/src/ts_binding_generator.rs (1)

127-128: LGTM! Proper type exports.

The new types are correctly added and maintain alphabetical ordering.

services/headless-lms/models/src/exams.rs (1)

101-102: LGTM! Clean SQL query update.

The addition of flagged_answers_threshold to the query maintains proper SQL formatting.

services/headless-lms/server/src/programs/seed/seed_organizations/uh_cs.rs (1)

501-501: LGTM!

The default threshold of 5 flags for the seed course is a reasonable value for testing purposes.

services/headless-lms/server/src/controllers/course_material/exercises.rs (1)

504-506: LGTM!

The route is properly configured with a descriptive path.

services/headless-lms/models/src/library/grading.rs (1)

577-582: LGTM!

The implementation correctly fetches and includes flagged answers in the response.

Also applies to: 597-597

services/headless-lms/models/src/courses.rs (1)

174-175: LGTM!

The SQL queries are consistently updated to include the new field across all operations.

Also applies to: 211-212, 255-256, 311-312, 352-353, 418-419, 633-634, 698-699, 731-732, 823-824, 880-881

services/course-material/src/services/backend.ts (1)

416-424: LGTM! Clean implementation of the flag answer endpoint.

The function follows the established patterns for API endpoints in this file and properly handles the request.
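
A brief usage sketch of the helper; the payload shape mirrors the diffs in this review, while the React Query wrapper, hook name, and import path are assumptions:

import { useMutation } from "@tanstack/react-query"

import { postFlagAnswerInPeerReview } from "../../services/backend" // illustrative path

// exerciseId and submissionId would come from the surrounding peer review view.
export function useReportAnswer(exerciseId: string, submissionId: string) {
  return useMutation({
    mutationFn: ({ reason, description }: { reason: string; description: string }) =>
      postFlagAnswerInPeerReview(exerciseId, {
        submission_id: submissionId,
        reason,
        description,
        flagged_user: null, // the backend derives these from the submission and the session
        flagged_by: null,
      }),
  })
}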

services/headless-lms/models/src/exercise_slide_submissions.rs (4)

62-62: LGTM! Appropriate addition of the flag count field.

The flag_count field is correctly added as an optional i32, allowing for null values in the database.


136-148: LGTM! Well-structured FlaggedAnswer model.

The struct includes all necessary fields for tracking flagged answers, including audit fields (created_at, updated_at, deleted_at).


160-194: LGTM! Well-implemented insert function with proper error handling.

The insert_flagged_answer function follows Rust best practices and properly handles database operations.


216-243: LGTM! Comprehensive query for retrieving flagged answers.

The get_flagged_answers_by_submission_id function properly filters out deleted records and includes all necessary fields.

services/headless-lms/models/src/library/copying.rs (1)

1180-1180: LGTM! Appropriate default threshold value.

The flagged_answers_threshold is initialized with a sensible default value of 5.

shared-module/packages/common/src/bindings.ts (3)

819-829: LGTM! Well-structured FlaggedAnswer interface.

The interface properly mirrors the Rust model and includes all necessary fields.


831-837: LGTM! Consistent NewFlaggedAnswer interface.

The interface maintains consistency with the Rust model.


489-489: LGTM! Consistent threshold field implementation.

The flagged_answers_threshold field is properly added to all relevant interfaces with appropriate types.

Also applies to: 520-520, 538-538

shared-module/packages/common/src/bindings.guard.ts (1)

1575-1589: LGTM! Type guard for FlaggedAnswer is well-implemented.

The type guard properly validates all required fields and handles optional fields correctly.
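
A brief usage sketch; the guard name isFlaggedAnswer follows the file's naming convention but is an assumption here:

// Validate an unknown API payload before treating it as a FlaggedAnswer.
import { isFlaggedAnswer } from "./bindings.guard" // illustrative import path

export function parseFlaggedAnswer(payload: unknown) {
  if (!isFlaggedAnswer(payload)) {
    throw new Error("Response did not match the FlaggedAnswer shape")
  }
  return payload // narrowed to FlaggedAnswer by the guard
}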

services/headless-lms/models/.sqlx/query-373b37e6ff5d338ba1f1693bf807a3e396753077888a809c81f137f020ba88bb.json (1)

1-18: LGTM! Flag count increment query is well-implemented.

The query:

  • Is atomic and handles concurrent updates correctly
  • Properly updates the timestamp
  • Returns the new count for verification
services/headless-lms/models/.sqlx/query-caeaf4c59be4dde42812fa54f4e9753a257a787b1ffe8b01497373d5f1778244.json (1)

1-58: LGTM! Flagged answer insertion query is well-implemented.

The query:

  • Properly handles all required and optional fields
  • Returns the complete record for verification
  • Uses appropriate data types for all fields
services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql (3)

1-12: Well-designed table schema with effective constraints!

The unique constraint with NULLS NOT DISTINCT effectively prevents duplicate flags from the same user while working correctly with soft deletes.


14-25: Excellent documentation with clear and comprehensive comments!

The comments effectively explain the purpose of each column and the table itself.


27-30: Good choice of constraints for the flag_count column!

The NOT NULL constraint with DEFAULT 0 ensures data integrity while maintaining backward compatibility.

services/headless-lms/models/.sqlx/query-42f0e5ed7fe9d9edd3ec6b40c9c1a72d176753e127aec5795e6c4bd8a14b0b8f.json (1)

Line range hint 1-100: Verify migration strategy for non-nullable column.

Adding a non-nullable column requires careful migration planning. Ensure the migration script:

  1. Adds the column as nullable first
  2. Updates existing rows with a default value
  3. Then alters the column to be non-nullable

Run this script to check if there's a corresponding migration file:

✅ Verification successful

Migration strategy for flag_count is appropriate

The migration strategy is correct as implemented. Since this is a new feature (January 2025) adding flagging functionality, the column can be safely added as NOT NULL DEFAULT 0 because:

  • All existing submissions would logically start with 0 flags
  • The DEFAULT 0 constraint ensures new rows will always have a value
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for migration files related to flag_count column
# Test: Look for migration files that add the flag_count column
fd -e sql . | xargs rg -l "flag_count"

Length of output: 200


Script:

#!/bin/bash
# Check the content of the up migration file
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql

Length of output: 2125

services/headless-lms/models/.sqlx/query-2b169ac8547e42b1afa42278b7f1c00c50c734e7d8b6d690914526699ec7a3a6.json (1)

Line range hint 3-24: Verify data migration strategy for non-nullable column

The flagged_answers_threshold column is added as non-nullable without a default value. This could cause issues when migrating existing data.

Run this script to check if there's a corresponding migration file with a default value:

Also applies to: 105-109, 135-139

services/headless-lms/models/.sqlx/query-049f398260c49db909c2fb054a9beca4f06d3b2d921ce78dd9f52ce0fdf9ac06.json (1)

Line range hint 3-30: LGTM! Query structure is well-organized

The query maintains proper join conditions and includes appropriate NULL checks for both courses and user_course_settings tables.

Also applies to: 105-109, 135-139

services/headless-lms/models/.sqlx/query-75146edf141f538da63a12b252133490abd01b91cc1e21a5a19f1589d1d77ecf.json (1)

Line range hint 3-27: LGTM! Join conditions are properly handled

The query correctly joins courses with course_exams and includes appropriate NULL checks for both tables.

Also applies to: 105-109, 135-139

shared-module/packages/common/src/locales/fi/course-material.json (1)

26-26: LGTM! Finnish translations are accurate and consistent.

The added translations for the reporting functionality are well-structured and use proper Finnish language. The translations align well with their English counterparts while maintaining natural Finnish expressions.

Also applies to: 94-96, 149-149, 178-178, 203-203

shared-module/packages/common/src/locales/en/course-material.json (1)

26-26: LGTM! English translations are accurate and consistent.

The added translations for the reporting functionality are well-structured and use clear English language. The translations align well with their Finnish counterparts.

Also applies to: 93-95, 149-149, 177-177, 202-202

shared-module/packages/common/src/locales/en/main-frontend.json (1)

337-337: LGTM! English translations are accurate and consistent.

The added translations for the admin/teacher view of flagged answers are well-structured and use clear English language. The labels are descriptive and follow a consistent pattern.

Also applies to: 369-370, 373-373, 387-387

services/headless-lms/models/.sqlx/query-18130e6045b555e04c07b2a0efa2d7da271236b44fd90da911bbce79e4bcf242.json (1)

3-3: LGTM! Verify database schema changes.

The addition of the non-nullable flag_count column is well-structured. The query changes look correct for tracking submission flags.

Also applies to: 70-74, 80-80

✅ Verification successful

✓ Schema changes verified and correct

The flag_count column is properly added to exercise_slide_submissions table in the January 2025 migration with matching properties (non-nullable with default 0) as used in the query.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify if the database schema has been updated with the flag_count column
rg -A 5 "CREATE TABLE.*exercise_slide_submissions" 
rg -A 5 "ALTER TABLE.*exercise_slide_submissions.*ADD.*flag_count"

Length of output: 1704


Script:

#!/bin/bash
# Search for any migration mentioning flag_count
rg -l "flag_count" services/headless-lms/migrations/

# Also search for recent migrations that might contain this column
fd -e sql -e SQL . services/headless-lms/migrations/ -x head -n 20 {}

Length of output: 65751


Script:

#!/bin/bash
# Check content of the flagging migration
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql

Length of output: 2125

services/headless-lms/models/.sqlx/query-17bf0f2c956a3aa06506ba05c18e1b15b8423cb99f8f7b3d7ffa1e3ef81945c5.json (1)

3-3: Verify default value for non-nullable flag_count column

The flag_count column is marked as non-nullable in the query definition. Ensure that the table definition includes a default value (likely 0) for this column to prevent INSERT operations from failing.

Also applies to: 80-80

✅ Verification successful

Column definition is correct with DEFAULT 0

The flag_count column is properly defined in the migration file with NOT NULL constraint and DEFAULT 0, which prevents INSERT operations from failing.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check the table definition for flag_count column default value
psql -c "\d+ exercise_slide_submissions" | grep -A 1 "flag_count"

Length of output: 160


Script:

#!/bin/bash
# Search for migration files or schema files containing exercise_slide_submissions table definition
fd -e sql -e sqlx . | xargs rg -l "exercise_slide_submissions"

# Search for flag_count column definition
rg -A 5 -B 5 "flag_count.*exercise_slide_submissions|exercise_slide_submissions.*flag_count"

Length of output: 4610

services/headless-lms/models/.sqlx/query-fd00896e8f130c629eb04465f1767a33b9117449d44e11d3b68ec601cb62659f.json (1)

3-3: LGTM! Well-structured query with proper ordering and pagination.

services/headless-lms/models/.sqlx/query-63046830fbee7ede5ea5cae981888755dc19e2f9d50b726d224803c7a2443e0f.json (1)

3-3: LGTM! Efficient query for retrieving the latest submission.

services/headless-lms/models/.sqlx/query-dfcbf0ef22c9cc2528c66b7fcd0c8189f6d504f3d81f12075b387a6128fb9d7b.json (1)

3-3: Verify database migration for the new column

The query includes a new non-nullable column flagged_answers_threshold of type Int4. Please ensure that:

  1. A database migration exists to add this column
  2. The migration specifies a default value since it's non-nullable

Run this script to check for the migration:

Also applies to: 105-109

✅ Verification successful

Migration verified successfully

The column flagged_answers_threshold is properly implemented with a migration that:

  • Adds the column as INTEGER NOT NULL with DEFAULT 5
  • Includes clear documentation of its purpose
  • Provides proper rollback support
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for migration adding flagged_answers_threshold column
rg -l "flagged_answers_threshold" services/headless-lms/migrations/

Length of output: 229


Script:

#!/bin/bash
# Description: View content of migration files
echo "=== UP Migration ==="
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql
echo -e "\n=== DOWN Migration ==="
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.down.sql

Length of output: 2487

services/headless-lms/models/.sqlx/query-9cb1870d45712289597e4b0b278bfb502a864aa907e9bbcfb0b113f0e68f75bd.json (1)

105-109: LGTM! Column definition is well-structured

The addition of flagged_answers_threshold as a non-nullable Int4 is appropriate for storing the threshold value for flagged answers.

services/headless-lms/models/.sqlx/query-0020f8d1577d278505f750f314f45dd74fd18a15546339eb2694d0f5285d8c72.json (1)

105-109: LGTM! Column definition maintains consistency

The flagged_answers_threshold column definition is consistent with the previous query, maintaining the same type (Int4) and non-nullable constraint.

Comment on lines +20 to +21
const [selectedReason, setSelectedReason] = useState<string>("")
const [description, setDescription] = useState<string>("")

🛠️ Refactor suggestion

Add form validation and improve type safety for reasons.

Consider using an enum for report reasons to ensure type safety and validation.

+export enum ReportReason {
+  SPAM = 'flagging-reason-spam',
+  HARMFUL_CONTENT = 'flagging-reason-harmful-content',
+  AI_GENERATED = 'flagging-reason-ai-generated',
+}

-const [selectedReason, setSelectedReason] = useState<string>("")
+const [selectedReason, setSelectedReason] = useState<ReportReason | "">("")

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 369 to 474
    payload: web::Json<NewFlaggedAnswer>,
    user: AuthUser,
) -> ControllerResult<web::Json<bool>> {
    let mut conn = pool.acquire().await?;

    let flagged_submission_data =
        exercise_slide_submissions::get_by_id(&mut conn, payload.submission_id).await?;

    if flagged_submission_data.course_id.is_none() {
        return Err(ControllerError::new(
            ControllerErrorType::NotFound,
            "Submission not found.".to_string(),
            None,
        ));
    }

    let flagged_user = flagged_submission_data.user_id;

    let course_id = match flagged_submission_data.course_id {
        Some(id) => id,
        None => {
            return Err(ControllerError::new(
                ControllerErrorType::BadRequest,
                "Course ID not found for the submission.".to_string(),
                None,
            ));
        }
    };

    let new_flagged_answer = NewFlaggedAnswer {
        submission_id: payload.submission_id,
        flagged_user: Some(flagged_user),
        flagged_by: Some(user.id),
        reason: payload.reason.clone(),
        description: payload.description.clone(),
    };

    let insert_result =
        exercise_slide_submissions::insert_flagged_answer(&mut conn, new_flagged_answer).await;

    // If the flagging was successful, increment the answers flag count and check if it needs to be moved to manual grading
    if insert_result.is_err() {
        return Err(ControllerError::new(
            ControllerErrorType::InternalServerError,
            "Failed to report answer".to_string(),
            None,
        ));
    } else {
        let increment_flag_count =
            exercise_slide_submissions::increment_flag_count(&mut conn, payload.submission_id)
                .await;

        if let Ok(updated_flag_count) = increment_flag_count {
            let course = courses::get_course(&mut conn, course_id).await?;

            if let Some(flagged_answers_threshold) = course.flagged_answers_threshold {
                if updated_flag_count >= flagged_answers_threshold {
                    let course_instance_or_exam_id = if let Some(course_instance_id) =
                        flagged_submission_data.course_instance_id
                    {
                        CourseInstanceOrExamId::Instance(course_instance_id)
                    } else if let Some(exam_id) = flagged_submission_data.exam_id {
                        CourseInstanceOrExamId::Exam(exam_id)
                    } else {
                        return Err(ControllerError::new(
                            ControllerErrorType::InternalServerError,
                            "No course instance or exam ID found for the submission.".to_string(),
                            None,
                        ));
                    };

                    let update_result = user_exercise_states::update_reviewing_stage(
                        &mut conn,
                        flagged_user,
                        course_instance_or_exam_id,
                        flagged_submission_data.exercise_id,
                        ReviewingStage::WaitingForManualGrading,
                    )
                    .await;

                    if update_result.is_err() {
                        return Err(ControllerError::new(
                            ControllerErrorType::InternalServerError,
                            "Failed to update the reviewing stage.".to_string(),
                            None,
                        ));
                    }
                }
            }
        } else {
            return Err(ControllerError::new(
                ControllerErrorType::InternalServerError,
                "Failed to increment the flag count.".to_string(),
                None,
            ));
        }
    }
    let token = skip_authorize();
    token.authorized_ok(web::Json(true))
}

🛠️ Refactor suggestion

Consider breaking down the function into smaller, focused functions.

The function is quite long and handles multiple responsibilities. Consider extracting the following into separate functions:

  1. Flag count management:
async fn handle_flag_count_increment(
    conn: &mut PgConnection,
    submission_id: Uuid,
    course: &Course,
    flagged_user: Uuid,
    flagged_submission_data: &ExerciseSlideSubmission,
) -> ControllerResult<()> {
    let updated_flag_count = exercise_slide_submissions::increment_flag_count(conn, submission_id).await?;
    
    if let Some(threshold) = course.flagged_answers_threshold {
        if updated_flag_count >= threshold {
            update_reviewing_stage_if_threshold_reached(
                conn,
                flagged_user,
                flagged_submission_data,
            ).await?;
        }
    }
    Ok(())
}
  2. Reviewing stage update:
async fn update_reviewing_stage_if_threshold_reached(
    conn: &mut PgConnection,
    flagged_user: Uuid,
    flagged_submission_data: &ExerciseSlideSubmission,
) -> ControllerResult<()> {
    let course_instance_or_exam_id = get_course_instance_or_exam_id(flagged_submission_data)?;
    
    user_exercise_states::update_reviewing_stage(
        conn,
        flagged_user,
        course_instance_or_exam_id,
        flagged_submission_data.exercise_id,
        ReviewingStage::WaitingForManualGrading,
    )
    .await
    .map_err(|_| {
        ControllerError::new(
            ControllerErrorType::InternalServerError,
            "Failed to update the reviewing stage.".to_string(),
            None,
        )
    })
}

services/headless-lms/models/src/library/grading.rs (review thread, outdated, resolved)
services/headless-lms/models/src/courses.rs (review thread, resolved)
@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nINSERT INTO exercise_slide_submissions (\n id,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy\n )\nVALUES ($1, $2, $3, $4, $5, $6, $7, $8)\nRETURNING id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\"\n ",
"query": "\nINSERT INTO exercise_slide_submissions (\n id,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy\n )\nVALUES ($1, $2, $3, $4, $5, $6, $7, $8)\nRETURNING id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\",\n flag_count\n ",

💡 Codebase verification

Re-add DEFAULT 0 constraint for flag_count column

The flag_count column was previously added with DEFAULT 0 constraint, but this constraint is missing in the current implementation. This could cause issues with existing rows.

  • Add DEFAULT 0 constraint to maintain consistency with the previous implementation and ensure safe handling of existing rows.
🔗 Analysis chain

Verify migration script for flag_count column

The flag_count column is added as non-nullable (false in nullable array), but no default value is specified in the RETURNING clause. This could cause issues with existing rows.

Run this script to check the migration that adds this column:

Also applies to: 70-74, 99-99

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check migration script for flag_count column
# Expected: Find migration adding flag_count with default value

fd -e sql -x cat {} \; | rg -A 5 -B 5 'ALTER TABLE.*exercise_slide_submissions.*ADD.*flag_count'

Length of output: 101


Script:

#!/bin/bash
# First, find the initial table creation
echo "=== Searching for table creation ==="
fd -e sql -x cat {} \; | rg -A 10 -B 2 'CREATE TABLE.*exercise_slide_submissions'

echo -e "\n=== Searching for any mention of flag_count in SQL files ==="
fd -e sql -x cat {} \; | rg -A 3 -B 3 'flag_count'

Length of output: 2742

@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8\nWHERE id = $9\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\n ",
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8,\n flagged_answers_threshold = $9\nWHERE id = $10\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",

⚠️ Potential issue

Fix parameter index mismatch in UPDATE query.

The query uses $9 for flagged_answers_threshold in the SET clause but $10 for the id in the WHERE clause. This parameter index mismatch could cause runtime errors.

Apply this diff to fix the parameter indices:

-  flagged_answers_threshold = $9
-WHERE id = $10
+  flagged_answers_threshold = $10
+WHERE id = $11
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8,\n flagged_answers_threshold = $9\nWHERE id = $10\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8,\n flagged_answers_threshold = $10\nWHERE id = $11\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",

@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nSELECT id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\nFROM courses\nWHERE id = $1;\n ",
"query": "\nSELECT id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\nFROM courses\nWHERE id = $1;\n ",

💡 Codebase verification

Add CHECK constraint for flagged_answers_threshold

The column is properly defined as NOT NULL with DEFAULT 5, but consider adding a CHECK constraint to ensure the threshold remains non-negative:

ALTER TABLE courses ADD CONSTRAINT check_flagged_answers_threshold 
  CHECK (flagged_answers_threshold >= 0);
🔗 Analysis chain

Verify constraints for flagged_answers_threshold

The flagged_answers_threshold column is marked as non-nullable. Please ensure:

  1. The table definition includes a default value
  2. A CHECK constraint exists to ensure the threshold is non-negative

Also applies to: 135-135

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check the table definition for flagged_answers_threshold constraints
psql -c "\d+ courses" | grep -A 2 "flagged_answers_threshold"
psql -c "\d+ courses" | grep -A 5 "Indexes:" | grep "flagged_answers_threshold"

Length of output: 329


Script:

#!/bin/bash
# Search for migration files containing courses table definition or alterations
rg -l "CREATE TABLE.*courses|ALTER TABLE.*courses" --type sql

# Search specifically for flagged_answers_threshold in SQL files
rg "flagged_answers_threshold" --type sql -C 5

Length of output: 5037

@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nINSERT INTO courses (\n name,\n organization_id,\n slug,\n content_search_language,\n language_code,\n copied_from,\n course_language_group_id,\n is_draft,\n base_module_completion_requires_n_submodule_completions,\n can_add_chatbot,\n is_unlisted,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\n )\nVALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n base_module_completion_requires_n_submodule_completions,\n can_add_chatbot,\n is_unlisted,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\n ",
"query": "\nINSERT INTO courses (\n name,\n organization_id,\n slug,\n content_search_language,\n language_code,\n copied_from,\n course_language_group_id,\n is_draft,\n base_module_completion_requires_n_submodule_completions,\n can_add_chatbot,\n is_unlisted,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\n )\nVALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n base_module_completion_requires_n_submodule_completions,\n can_add_chatbot,\n is_unlisted,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",

⚠️ Potential issue

Add missing column to INSERT values

The flagged_answers_threshold column is missing from the INSERT statement but is marked as non-nullable. This could cause insertion failures if no database default exists.

Add the column to both the column list and VALUES clause:

  "query": "\nINSERT INTO courses (
    name,
    organization_id,
    slug,
    content_search_language,
    language_code,
    copied_from,
    course_language_group_id,
    is_draft,
    base_module_completion_requires_n_submodule_completions,
    can_add_chatbot,
    is_unlisted,
    is_joinable_by_code_only,
    join_code,
-    ask_marketing_consent
+    ask_marketing_consent,
+    flagged_answers_threshold
  )
- VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)
+ VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"query": "\nINSERT INTO courses (\n name,\n organization_id,\n slug,\n content_search_language,\n language_code,\n copied_from,\n course_language_group_id,\n is_draft,\n base_module_completion_requires_n_submodule_completions,\n can_add_chatbot,\n is_unlisted,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\n )\nVALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n base_module_completion_requires_n_submodule_completions,\n can_add_chatbot,\n is_unlisted,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",
"query": "\nINSERT INTO courses (
name,
organization_id,
slug,
content_search_language,
language_code,
copied_from,
course_language_group_id,
is_draft,
base_module_completion_requires_n_submodule_completions,
can_add_chatbot,
is_unlisted,
is_joinable_by_code_only,
join_code,
ask_marketing_consent,
flagged_answers_threshold
)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
RETURNING id,
name,
created_at,
updated_at,
organization_id,
deleted_at,
slug,
content_search_language::text,
language_code,
copied_from,
course_language_group_id,
description,
is_draft,
is_test_mode,
base_module_completion_requires_n_submodule_completions,
can_add_chatbot,
is_unlisted,
is_joinable_by_code_only,
join_code,
ask_marketing_consent,
flagged_answers_threshold
",

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

🔭 Outside diff range comments (2)
services/headless-lms/models/.sqlx/query-136d766e505e02ad02623be6ee9a3bba4f1bbc94ecfddacc7d3e2438c31278fd.json (1)

Line range hint 3-23: Add default value for flag_count in INSERT query.

The INSERT query doesn't specify a value for flag_count, yet it's marked as non-nullable in the schema. This could lead to SQL errors.

Add flag_count with a default value of 0 to the INSERT statement:

 INSERT INTO exercise_slide_submissions (
     exercise_slide_id,
     course_id,
     course_instance_id,
     exam_id,
     exercise_id,
     user_id,
-    user_points_update_strategy
+    user_points_update_strategy,
+    flag_count
   )
-VALUES ($1, $2, $3, $4, $5, $6, $7)
+VALUES ($1, $2, $3, $4, $5, $6, $7, 0)
services/headless-lms/models/.sqlx/query-db37586d48fd6e45fc213da9ea7d8026fff3c6f54237beed79c168a2ffcd9a2f.json (1)

Line range hint 3-42: Add missing column in INSERT statement

The flagged_answers_threshold column is included in the RETURNING clause but not in the INSERT statement. Since it's defined as non-nullable in the schema, this could lead to SQL errors.

Apply this diff to include the column in the INSERT statement:

 INSERT INTO courses (
     name,
     organization_id,
     slug,
     content_search_language,
     language_code,
     copied_from,
     course_language_group_id,
     is_draft,
     base_module_completion_requires_n_submodule_completions,
     can_add_chatbot,
     is_unlisted,
     is_joinable_by_code_only,
     join_code,
-    ask_marketing_consent
+    ask_marketing_consent,
+    flagged_answers_threshold
   )
-VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)
+VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
🧹 Nitpick comments (18)
services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/FlaggedPeerReviewAccordion.tsx (2)

22-86: Add error handling and improve accessibility.

The component implementation looks good overall, but consider these improvements:

  1. Add error handling for empty reports array
  2. Use localized date formatting
  3. Add ARIA attributes for better accessibility
 const FlaggedPeerReviewAccordion: React.FC<FlaggedPeerReviewAccordionProps> = ({
   reports,
   title,
 }) => {
   const { t } = useTranslation()
+  if (!reports?.length) {
+    return null
+  }
   return (
-    <Accordion variant="detail">
+    <Accordion variant="detail" aria-label={t("flagged-peer-reviews")}>
       <details>
-        <summary>
+        <summary aria-expanded="false">
           {title}
           <span
             className={css`
               background: ${baseTheme.colors.green[400]};
               border-radius: 20px;
               line-height: 10px;
               padding: 1px 5px;
               text-align: center;
               font-size: 14px;
               color: ${baseTheme.colors.primary[100]};
               margin-left: 3px;
               width: 20px;
               height: 20px;
             `}
+            role="status"
+            aria-label={t("number-of-reports", { count: reports.length })}
           >
             {reports.length}
           </span>
         </summary>
         <div
           className={css`
             background: ${baseTheme.colors.clear[100]};
             margin: 0.5rem 0;
           `}
         >
           {reports.map((report, index) => (
             <div key={report.submission_id}>
               <Title>
                 {t("label-report")} {index + 1}
               </Title>
               <ul>
                 <li>
                   <strong>{t("label-reason")}: </strong> {report.reason}
                 </li>
                 <li>
                   <strong>{t("text-field-label-description")}: </strong>
                   {report.description || t("no-description-available")}
                 </li>
                 <li>
                   <strong>{t("label-flagged-by")}:</strong> {report.flagged_by}
                 </li>
                 <li>
                   <strong>{t("title-submission-id", report.submission_id)}</strong>
                 </li>
                 <li>
                   <strong>{t("label-created-at")}</strong>{" "}
-                  {new Date(report.created_at).toLocaleString()}
+                  {new Date(report.created_at).toLocaleDateString(undefined, {
+                    year: 'numeric',
+                    month: 'long',
+                    day: 'numeric',
+                    hour: '2-digit',
+                    minute: '2-digit'
+                  })}
                 </li>
               </ul>
             </div>
           ))}
         </div>
       </details>
     </Accordion>
   )
 }

15-20: Consider extracting styles to theme.

The Title styled component uses hardcoded colors from the theme. Consider extracting these styles to the theme for better maintainability.

-const Title = styled.h5`
-  border-top: 1px solid ${baseTheme.colors.clear[600]};
-  border-bottom: 1px solid ${baseTheme.colors.clear[600]};
-  padding: 1rem 1.5rem 1rem;
-`
+const Title = styled.h5(({ theme }) => ({
+  borderTop: `1px solid ${theme.colors.clear[600]}`,
+  borderBottom: `1px solid ${theme.colors.clear[600]}`,
+  padding: '1rem 1.5rem'
+}))
services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (2)

134-149: Add error handling for the report mutation.

While the mutation is well-structured, consider adding:

  1. Error handling for the API response
  2. Verification of the null values for flagged_user and flagged_by
 const reportMutation = useToastMutation(
   async ({ reason, description }: { reason: string; description: string }) => {
     if (!peerOrSelfReviewData || !peerOrSelfReviewData.answer_to_review) {
       return
     }
     return await postFlagAnswerInPeerReview(exerciseId, {
       submission_id: peerOrSelfReviewData.answer_to_review.exercise_slide_submission_id,
       reason,
       description,
       flagged_user: null,
       flagged_by: null,
     })
   },
   { notify: true, method: "POST" },
-  {},
+  {
+    onError: (error) => {
+      console.error('Failed to report answer:', error)
+    },
+    onSuccess: () => {
+      setIsReportDialogOpen(false)
+    },
+  },
 )

151-153: Add input validation to handleReportSubmit.

The function should validate the reason and description before calling the mutation.

-const handleReportSubmit = (reason: string, description: string) => {
+const handleReportSubmit = (reason: string, description: string) => {
+  if (!reason.trim()) {
+    return
+  }
+  if (!description.trim()) {
+    return
+  }
   reportMutation.mutate({ reason, description })
 }
services/headless-lms/models/src/chapters.rs (1)

624-624: Document the default threshold value.

The magic number 5 used for flagged_answers_threshold should be documented to explain its significance.

-                    flagged_answers_threshold: Some(5),
+                    // Default threshold of 5 flagged answers before moving to manual review
+                    flagged_answers_threshold: Some(5),
services/headless-lms/server/src/programs/seed/seed_organizations/uh_cs.rs (1)

501-501: Consider making the threshold configurable.

The hardcoded threshold value of 5 in the seed data might not be suitable for all courses. Consider making it configurable through environment variables or configuration files.
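
One possible shape for this, as a rough sketch only (the helper name and the environment variable below are illustrative, not part of this PR):

use std::env;

// Hypothetical helper: read the default threshold from the environment and
// fall back to the current hardcoded value of 5 when it is unset or invalid.
fn default_flagged_answers_threshold() -> i32 {
    env::var("DEFAULT_FLAGGED_ANSWERS_THRESHOLD")
        .ok()
        .and_then(|value| value.parse::<i32>().ok())
        .filter(|value| *value >= 0)
        .unwrap_or(5)
}

// The seed call site could then use:
// flagged_answers_threshold: Some(default_flagged_answers_threshold()),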

services/headless-lms/server/src/controllers/course_material/exercises.rs (1)

384-389: Improve error messages for better debugging.

The error messages could be more descriptive and include more context. Consider adding relevant IDs and error details to help with debugging.

Apply this diff to improve error messages:

-            "Submission not found.".to_string(),
+            format!("Submission with ID {} not found.", payload.submission_id),

-            "Course ID not found for the submission.".to_string(),
+            format!("Course ID not found for submission with ID {}.", payload.submission_id),

-            "No course instance or exam ID found for the submission.".to_string(),
+            format!("No course instance or exam ID found for submission with ID {} and user ID {}.", payload.submission_id, flagged_user),

-            "Failed to update the reviewing stage.".to_string(),
+            format!("Failed to update reviewing stage to WaitingForManualGrading for user {} and submission {}.", flagged_user, payload.submission_id),

-            "Failed to increment the flag count.".to_string(),
+            format!("Failed to increment flag count for submission with ID {}.", payload.submission_id),

Also applies to: 396-401, 439-444, 456-461, 465-470

services/headless-lms/models/src/library/grading.rs (1)

577-582: Add documentation for the new field.

The new field reveived_peer_flagging_reports lacks documentation explaining its purpose and usage.

Add documentation above the struct field:

+    /// Reports submitted by peers flagging this answer as inappropriate or problematic.
+    /// These reports are used to determine if the answer should be moved to manual grading.
     pub reveived_peer_flagging_reports: Vec<FlaggedAnswer>,

Also applies to: 597-597

services/headless-lms/models/src/library/copying.rs (1)

1180-1180: Consider making the default threshold configurable

The default threshold of 5 is hardcoded. Consider:

  1. Making it configurable through environment variables
  2. Adding documentation explaining the choice of default value
  3. Moving it to a constants file for better maintainability
-                flagged_answers_threshold: Some(5),
+                // Default threshold from environment or fallback to constant
+                flagged_answers_threshold: Some(get_default_flag_threshold()),
services/headless-lms/server/src/programs/seed/seed_courses.rs (1)

100-100: LGTM! Consider documenting the threshold value.

The consistent use of flagged_answers_threshold: Some(5) across all test courses looks good. However, it would be helpful to add a comment explaining why 5 was chosen as the default threshold for testing purposes.

 flagged_answers_threshold: Some(5), // Default threshold for testing - triggers manual review after 5 flags

Also applies to: 2036-2036, 2164-2164, 3045-3045, 4454-4454

services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql (1)

32-33: Consider adding a maximum threshold limit.

The flagged_answers_threshold should have an upper bound to prevent abuse or performance issues from an extremely high threshold.

Add a check constraint:

-ADD COLUMN flagged_answers_threshold INTEGER NOT NULL DEFAULT 5;
+ADD COLUMN flagged_answers_threshold INTEGER NOT NULL DEFAULT 5 CHECK (flagged_answers_threshold BETWEEN 1 AND 100);
services/headless-lms/models/.sqlx/query-fd00896e8f130c629eb04465f1767a33b9117449d44e11d3b68ec601cb62659f.json (1)

Line range hint 3-19: Consider adding an index for flag_count.

Since this query is used to fetch submissions and will likely be used to filter flagged submissions, consider adding an index on flag_count to improve query performance, especially when implementing threshold-based filtering.

Run this verification script to check if an index already exists:

#!/bin/bash
# Check if an index exists on flag_count column
echo "Checking for existing indexes on flag_count..."
fd -e sql . | xargs rg -l "CREATE.*INDEX.*flag_count"

# Check if there are any queries that filter by flag_count
echo "Checking for queries that filter by flag_count..."
rg -l "WHERE.*flag_count|HAVING.*flag_count" .
services/headless-lms/models/.sqlx/query-32265e9d7760402be8ffbe8ac3dd909142255676cfb06e9bbbbbc753a6fff391.json (1)

3-3: Consider race conditions in flag count updates

The flag count implementation might be susceptible to race conditions when multiple users flag an answer simultaneously. Consider using atomic updates in the database.

Example implementation using PostgreSQL:

UPDATE exercise_slide_submissions 
SET flag_count = flag_count + 1 
WHERE id = $1 
RETURNING flag_count;
shared-module/packages/common/src/locales/fi/main-frontend.json (1)

341-341: Consider using consistent terminology for reporting actions

In course-material.json, the reporting action uses "Ilmoita", while here "Raportti" is used. Consider using "Ilmoitus" instead of "Raportti" to maintain consistency in terminology across the application.

-  "label-report": "Raportti",
+  "label-report": "Ilmoitus",

Also applies to: 373-374, 377-377, 391-391

services/headless-lms/models/.sqlx/query-9cb1870d45712289597e4b0b278bfb502a864aa907e9bbcfb0b113f0e68f75bd.json (1)

3-3: Consider adding an index for performance optimization.

Since the flagged_answers_threshold column will be used to determine when to move submissions to manual review, consider adding an index to optimize queries that filter based on this threshold.

CREATE INDEX idx_courses_flagged_answers_threshold ON courses(flagged_answers_threshold);
services/headless-lms/models/.sqlx/query-42f0e5ed7fe9d9edd3ec6b40c9c1a72d176753e127aec5795e6c4bd8a14b0b8f.json (2)

3-3: Consider additional validation and constraints for flagged_answers_threshold

A few architectural considerations for the new threshold field:

  1. Consider adding a CHECK constraint to ensure the threshold is non-negative
  2. Document the expected range of values
  3. Consider adding application-level validation (see the sketch below)

Example CHECK constraint:

ALTER TABLE courses
ADD CONSTRAINT check_flagged_answers_threshold 
CHECK (flagged_answers_threshold >= 0);

Also applies to: 70-74, 105-109
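
A minimal sketch of the application-level validation from point 3, assuming a 0–100 range and a plain string error (both are assumptions, not values taken from this PR):

// Hypothetical check run before persisting a course's threshold; the 0..=100
// bounds are placeholders and should follow the product requirements.
fn validate_flagged_answers_threshold(threshold: i32) -> Result<i32, String> {
    if (0..=100).contains(&threshold) {
        Ok(threshold)
    } else {
        Err(format!(
            "flagged_answers_threshold must be between 0 and 100, got {}",
            threshold
        ))
    }
}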


3-3: Add documentation for the flagging system

Consider adding comments in the SQL queries to document:

  • The purpose of the flagging system
  • How the threshold is used
  • The relationship between flag_count and flagged_answers_threshold

Example documentation:

-- flagged_answers_threshold: Number of flags required to move an answer to manual review
-- When flag_count >= flagged_answers_threshold, the answer requires teacher review
services/headless-lms/models/.sqlx/query-2b169ac8547e42b1afa42278b7f1c00c50c734e7d8b6d690914526699ec7a3a6.json (1)

107-109: Consider adding value constraints for the threshold

The Int4 type allows any integer value, but the threshold should likely have a minimum value (e.g., >= 1) and possibly a maximum value to prevent abuse.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5a70a2e and 81a9db8.

📒 Files selected for processing (55)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (5 hunks)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx (1 hunks)
  • services/course-material/src/services/backend.ts (2 hunks)
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.down.sql (1 hunks)
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql (1 hunks)
  • services/headless-lms/models/.sqlx/query-0020f8d1577d278505f750f314f45dd74fd18a15546339eb2694d0f5285d8c72.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-049f398260c49db909c2fb054a9beca4f06d3b2d921ce78dd9f52ce0fdf9ac06.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-136d766e505e02ad02623be6ee9a3bba4f1bbc94ecfddacc7d3e2438c31278fd.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-17bf0f2c956a3aa06506ba05c18e1b15b8423cb99f8f7b3d7ffa1e3ef81945c5.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-18130e6045b555e04c07b2a0efa2d7da271236b44fd90da911bbce79e4bcf242.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-28ceed91e05d620f85da1a7cb9c763b2e7351e74f7f6e0c6cfe65816b6de1801.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-2b169ac8547e42b1afa42278b7f1c00c50c734e7d8b6d690914526699ec7a3a6.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-32265e9d7760402be8ffbe8ac3dd909142255676cfb06e9bbbbbc753a6fff391.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-373b37e6ff5d338ba1f1693bf807a3e396753077888a809c81f137f020ba88bb.json (1 hunks)
  • services/headless-lms/models/.sqlx/query-3ef4b03ab1b23a15038e84194badf23a44dffcf34e70ac859a0d97c949fbb50b.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-42f0e5ed7fe9d9edd3ec6b40c9c1a72d176753e127aec5795e6c4bd8a14b0b8f.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-4df8e4be76b9f6575e6abcd0bb74895bd3576071d702d184e8cdd8c0e366d1e0.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-588f8c05914b1077dee418b21038791db03ae1584540a66d065d05e88556d60f.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-630403a014992249ea36c52d3611b6c3617b6ca2d7aa07de995feaee3e16e22a.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-63046830fbee7ede5ea5cae981888755dc19e2f9d50b726d224803c7a2443e0f.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-6fd565594531579c6b7925befe5013abd5d0ef5aad4af21336dee486f5ad5d27.json (1 hunks)
  • services/headless-lms/models/.sqlx/query-75146edf141f538da63a12b252133490abd01b91cc1e21a5a19f1589d1d77ecf.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-78e1728e1b7cc2e18bf53f4ad4584dbe18644a58c1adc9128be205fcd9813cc8.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-8abb8e7d9d09a78bb847d279c5bba07acf7aafd0885001735d66da49513c5811.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-8f7393748c9ef21fc9d2664ea18d2d16cb6ebe750a716912745f1a330406f29b.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-903b6bb1bf376d99638a2bf42751aeaa42b8bcf77a79c96262060cad5d4c6cc3.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-9620fbb3cbb3593dd4599f8872a377fe8beb392895c8e733f914b9c566e9847e.json (2 hunks)
  • services/headless-lms/models/.sqlx/query-9cb1870d45712289597e4b0b278bfb502a864aa907e9bbcfb0b113f0e68f75bd.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-caeaf4c59be4dde42812fa54f4e9753a257a787b1ffe8b01497373d5f1778244.json (1 hunks)
  • services/headless-lms/models/.sqlx/query-db37586d48fd6e45fc213da9ea7d8026fff3c6f54237beed79c168a2ffcd9a2f.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-dfcbf0ef22c9cc2528c66b7fcd0c8189f6d504f3d81f12075b387a6128fb9d7b.json (3 hunks)
  • services/headless-lms/models/.sqlx/query-fd00896e8f130c629eb04465f1767a33b9117449d44e11d3b68ec601cb62659f.json (2 hunks)
  • services/headless-lms/models/src/chapters.rs (1 hunks)
  • services/headless-lms/models/src/courses.rs (18 hunks)
  • services/headless-lms/models/src/exams.rs (1 hunks)
  • services/headless-lms/models/src/exercise_slide_submissions.rs (12 hunks)
  • services/headless-lms/models/src/library/copying.rs (2 hunks)
  • services/headless-lms/models/src/library/grading.rs (4 hunks)
  • services/headless-lms/models/src/test_helper.rs (1 hunks)
  • services/headless-lms/server/src/controllers/course_material/exercises.rs (3 hunks)
  • services/headless-lms/server/src/programs/seed/seed_courses.rs (5 hunks)
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_cs.rs (1 hunks)
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs (4 hunks)
  • services/headless-lms/server/src/test_helper.rs (1 hunks)
  • services/headless-lms/server/src/ts_binding_generator.rs (1 hunks)
  • services/headless-lms/server/tests/study_registry_test.rs (1 hunks)
  • services/main-frontend/src/components/page-specific/manage/courses/id/index/UpdateCourseForm.tsx (3 hunks)
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/AnswersRequiringAttentionItem.tsx (2 hunks)
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/FlaggedPeerReviewAccordion.tsx (1 hunks)
  • shared-module/packages/common/src/bindings.guard.ts (8 hunks)
  • shared-module/packages/common/src/bindings.ts (6 hunks)
  • shared-module/packages/common/src/locales/en/course-material.json (5 hunks)
  • shared-module/packages/common/src/locales/en/main-frontend.json (3 hunks)
  • shared-module/packages/common/src/locales/fi/course-material.json (5 hunks)
  • shared-module/packages/common/src/locales/fi/main-frontend.json (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.down.sql
👮 Files not reviewed due to content moderation or server errors (8)
  • services/headless-lms/models/src/test_helper.rs
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs
  • services/headless-lms/server/src/test_helper.rs
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/AnswersRequiringAttentionItem.tsx
  • services/headless-lms/models/.sqlx/query-78e1728e1b7cc2e18bf53f4ad4584dbe18644a58c1adc9128be205fcd9813cc8.json
  • services/headless-lms/models/.sqlx/query-8f7393748c9ef21fc9d2664ea18d2d16cb6ebe750a716912745f1a330406f29b.json
  • services/headless-lms/models/.sqlx/query-4df8e4be76b9f6575e6abcd0bb74895bd3576071d702d184e8cdd8c0e366d1e0.json
  • services/headless-lms/models/.sqlx/query-8abb8e7d9d09a78bb847d279c5bba07acf7aafd0885001735d66da49513c5811.json
🔇 Additional comments (34)
services/headless-lms/server/tests/study_registry_test.rs (1)

168-168: LGTM!

The addition of flagged_answers_threshold with a default value of 5 is consistent with the schema changes and frontend implementation.

services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (2)

3-3: Clean imports and state management!

The new imports and state management for the report dialog are well-organized and follow React best practices.

Also applies to: 12-12, 18-18, 23-23, 44-44


335-353: Well-structured UI implementation!

The report button and dialog implementation follow good practices:

  • Clean styling with emotion css
  • Proper event handling
  • Icon usage for better UX
services/headless-lms/server/src/ts_binding_generator.rs (1)

127-128: Clean addition of TypeScript bindings!

The new types FlaggedAnswer and NewFlaggedAnswer are properly integrated into the export list.

services/headless-lms/models/src/exams.rs (1)

101-102: Clean SQL field addition!

The flagged_answers_threshold field is properly added to the course data retrieval query.

shared-module/packages/common/src/bindings.ts (2)

819-837: LGTM! Well-structured interfaces for flagged answers

The FlaggedAnswer and NewFlaggedAnswer interfaces are well-designed with proper types and optional fields where needed.


489-489: LGTM! Consistent course interface updates

The flagged answers threshold field is consistently added across all course-related interfaces with proper typing.

Also applies to: 520-520, 538-538

services/headless-lms/models/src/exercise_slide_submissions.rs (1)

62-62: LGTM! Well-implemented database models and functions

The implementation is thorough and follows best practices:

  • Proper error handling in database functions
  • Consistent use of parameterized queries
  • Proper soft delete pattern with deleted_at checks
  • Consistent flag_count field updates across all queries

Also applies to: 136-243

services/course-material/src/services/backend.ts (1)

416-424: LGTM! Well-implemented API function

The implementation follows the established patterns and best practices:

  • Proper typing from bindings
  • RESTful endpoint path
  • Consistent error handling through courseMaterialClient
shared-module/packages/common/src/bindings.guard.ts (3)

1051-1053: LGTM!

The addition of flagged_answers_threshold to the Course type is well-implemented and properly typed.


1991-1993: LGTM! (after fixing the typo)

The addition of the flagged answers array to AnswerRequiringAttentionWithTasks is well-implemented. Just be sure to fix the typo mentioned in the earlier comment.


1575-1601: ⚠️ Potential issue

Fix typo in property name "reveived_peer_flagging_reports"

The property name has a typo ("reveived" instead of "received") which will cause runtime issues when validating objects. This typo appears in multiple places in the codebase.

Apply this diff to fix the typo:

-  reveived_peer_flagging_reports: FlaggedAnswer[];
+  received_peer_flagging_reports: FlaggedAnswer[];

Likely invalid or redundant comment.

services/headless-lms/models/.sqlx/query-373b37e6ff5d338ba1f1693bf807a3e396753077888a809c81f137f020ba88bb.json (1)

1-18: LGTM! Well-structured atomic update query.

The query correctly handles atomic increment of flag_count while updating the timestamp. Good use of parameterization and minimal return value.

services/headless-lms/models/.sqlx/query-caeaf4c59be4dde42812fa54f4e9753a257a787b1ffe8b01497373d5f1778244.json (1)

1-58: LGTM! Well-structured INSERT query with proper RETURNING clause.

The query follows best practices with:

  • Complete column specification
  • Proper parameterization
  • Comprehensive RETURNING clause
  • Correct handling of nullable columns
services/headless-lms/models/.sqlx/query-6fd565594531579c6b7925befe5013abd5d0ef5aad4af21336dee486f5ad5d27.json (1)

1-58: LGTM! Well-structured SELECT query with proper filtering.

The query follows best practices with:

  • Explicit column selection instead of SELECT *
  • Proper soft delete handling
  • Clear parameter usage
  • Logical filtering conditions
services/headless-lms/models/.sqlx/query-18130e6045b555e04c07b2a0efa2d7da271236b44fd90da911bbce79e4bcf242.json (1)

3-3: LGTM!

The query has been properly updated to include the new flag_count column with correct typing and nullability constraints.

Also applies to: 70-74, 80-80

services/headless-lms/models/.sqlx/query-17bf0f2c956a3aa06506ba05c18e1b15b8423cb99f8f7b3d7ffa1e3ef81945c5.json (1)

3-3: LGTM!

The query has been properly updated to include the new flag_count column with correct typing and nullability constraints.

Also applies to: 70-74, 80-80

services/headless-lms/models/.sqlx/query-9620fbb3cbb3593dd4599f8872a377fe8beb392895c8e733f914b9c566e9847e.json (1)

3-3: LGTM!

The query has been properly updated to include the new flag_count column with correct typing and nullability constraints.

Also applies to: 70-74, 80-80

services/headless-lms/models/.sqlx/query-28ceed91e05d620f85da1a7cb9c763b2e7351e74f7f6e0c6cfe65816b6de1801.json (1)

Line range hint 3-20: Verify the DISTINCT ON optimization.

The query uses DISTINCT ON (user_id) with ORDER BY user_id, created_at DESC. This is good for getting the latest submission per user, but ensure the ordering matches the requirements when flag_count becomes a factor.

Run this verification script to check other similar queries:

✅ Verification successful

DISTINCT ON optimization is correctly implemented.

The current query implementation is optimal because:

  • DISTINCT ON (user_id) with ORDER BY user_id, created_at DESC correctly retrieves the latest submission per user
  • flag_count is not used for ordering in any similar queries in the codebase, suggesting it's not a factor in submission ordering requirements
  • The ordering matches the standard pattern for getting most recent records per group in PostgreSQL
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Find similar queries using DISTINCT ON to ensure consistent ordering
echo "Checking for similar DISTINCT ON queries..."
rg -l "DISTINCT ON.*ORDER BY" .

# Check if any queries order by flag_count
echo "Checking for queries that order by flag_count..."
rg -l "ORDER BY.*flag_count" .

Length of output: 270


Script:

#!/bin/bash
# Find all .sqlx files first
echo "=== Finding all .sqlx files ==="
fd -e json .sqlx/

echo "=== Checking DISTINCT ON usage with context ==="
rg -C 5 "DISTINCT ON" --type json

echo "=== Checking ORDER BY patterns with context ==="
rg -C 5 "ORDER BY" --type json

echo "=== Checking flag_count usage in queries ==="
rg -C 5 "flag_count" --type json

Length of output: 503


Script:

#!/bin/bash
# Check the specific directory
echo "=== Checking directory structure ==="
ls -la services/headless-lms/models/.sqlx/

echo "=== Searching for queries in the specific file and similar ones ==="
rg -C 5 "\"query\":" services/headless-lms/models/.sqlx/

echo "=== Looking for DISTINCT ON in escaped JSON strings ==="
rg -C 5 "DISTINCT ON.*user_id" services/headless-lms/models/.sqlx/

echo "=== Checking flag_count in the specific directory ==="
rg -C 5 "flag_count" services/headless-lms/models/.sqlx/

Length of output: 70454


Script:

#!/bin/bash
# Examine the specific query file
echo "=== Content of the specific query file ==="
cat services/headless-lms/models/.sqlx/query-28ceed91e05d620f85da1a7cb9c763b2e7351e74f7f6e0c6cfe65816b6de1801.json

echo -e "\n=== Looking for similar DISTINCT ON queries with flag_count ==="
rg -l "DISTINCT ON.*user_id.*flag_count" services/headless-lms/models/.sqlx/

echo -e "\n=== Checking queries that order by flag_count ==="
rg -l "ORDER BY.*flag_count" services/headless-lms/models/.sqlx/

Length of output: 3095

services/headless-lms/models/.sqlx/query-32265e9d7760402be8ffbe8ac3dd909142255676cfb06e9bbbbbc753a6fff391.json (1)

3-3: Verify database migration for flagged_answers_threshold

The flagged_answers_threshold column is marked as non-nullable in all course queries. Please ensure that:

  1. A database migration file exists
  2. A default value is specified for existing rows
  3. A constraint is added to ensure non-negative values

Let's verify the migration file:

Also applies to: 135-135

✅ Verification successful

Migration exists but could be strengthened

The migration file properly adds the flagged_answers_threshold column with a default value of 5, but consider adding a CHECK constraint to ensure non-negative values:

ALTER TABLE courses
ADD CONSTRAINT flagged_answers_threshold_non_negative 
CHECK (flagged_answers_threshold >= 0);
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for migration file and its content
# Expected: Find migration file that adds flagged_answers_threshold column

# Look for migration files
fd --type f . services/headless-lms/models/migrations | xargs rg -l "flagged_answers_threshold"

# Check content of found files
fd --type f . services/headless-lms/models/migrations | xargs rg "ALTER TABLE.*courses.*flagged_answers_threshold|ADD COLUMN.*flagged_answers_threshold"

Length of output: 1777


Script:

#!/bin/bash
# Description: Check complete content of migration files
# Expected: Find any CHECK constraints for non-negative values

cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql

Length of output: 2125

services/headless-lms/models/.sqlx/query-db37586d48fd6e45fc213da9ea7d8026fff3c6f54237beed79c168a2ffcd9a2f.json (2)

3-3: LGTM! Consistent implementation across query files

The flagged_answers_threshold column is consistently implemented across all SQL query files, with matching column definitions and proper hash updates.

Also applies to: 139-139


105-109: Consider schema design for non-nullable column

The flagged_answers_threshold is defined as a non-nullable column (Int4) without a default value. This design choice requires:

  1. A migration strategy for existing rows
  2. Explicit values for all new inserts

Consider either:

  • Adding a default value in the database schema
  • Making the column nullable

Run this script to check the current schema definition:

Also applies to: 135-135

shared-module/packages/common/src/locales/fi/course-material.json (1)

26-26: Finnish translations for reporting functionality look good!

The translations are clear, concise, and maintain consistency with Finnish UI conventions.

Also applies to: 94-96, 149-149, 178-178, 203-203

shared-module/packages/common/src/locales/en/course-material.json (1)

26-26: English translations for reporting functionality look good!

The translations are clear, professional, and maintain consistency with English UI conventions.

Also applies to: 93-95, 149-149, 177-177, 202-202

shared-module/packages/common/src/locales/en/main-frontend.json (1)

337-337: English translations for reporting labels look good!

The translations are clear, professional, and follow consistent labeling patterns.

Also applies to: 369-370, 373-373, 387-387

services/headless-lms/models/.sqlx/query-63046830fbee7ede5ea5cae981888755dc19e2f9d50b726d224803c7a2443e0f.json (1)

3-3: 🛠️ Refactor suggestion

Consider migration impact of non-nullable flag_count without default.

The flag_count column is marked as non-nullable but doesn't specify a default value. This could cause issues during migration for existing rows and when creating new submissions.

Run this script to check for existing rows that would be affected:

Consider either:

  1. Making the column nullable
  2. Adding a default value of 0
  3. Ensuring the migration script handles existing rows

Also applies to: 80-80

services/headless-lms/models/.sqlx/query-42f0e5ed7fe9d9edd3ec6b40c9c1a72d176753e127aec5795e6c4bd8a14b0b8f.json (1)

3-3: Verify database migration for non-nullable flag_count

The flag_count column is defined as non-nullable (false in nullable array). Ensure that the corresponding database migration includes a default value for existing rows.

Also applies to: 80-80

✅ Verification successful

Migration correctly handles non-nullable flag_count

The migration safely adds the flag_count column with NOT NULL DEFAULT 0, ensuring both existing and new rows will have a valid value.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for migration files that add the flag_count column
fd -e sql -e SQL . | xargs rg -l "flag_count"

Length of output: 207


Script:

#!/bin/bash
# Check the content of the up migration
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql

Length of output: 2125

services/headless-lms/models/.sqlx/query-2b169ac8547e42b1afa42278b7f1c00c50c734e7d8b6d690914526699ec7a3a6.json (1)

Line range hint 3-24: Verify data migration strategy for existing courses

The new flagged_answers_threshold column is marked as non-nullable but no default value is specified. This requires attention to:

  1. How existing course records will be migrated
  2. What default threshold value should be set for existing courses

Also applies to: 105-109, 135-136

✅ Verification successful

Migration strategy for flagged_answers_threshold is properly handled

The migration adds the column as NOT NULL DEFAULT 5, ensuring all existing courses get a reasonable default threshold value. The migration is well-documented and includes both up and down paths.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for migration files that handle the new column
# Expected: Find migration files that add the column and set default values

fd -e sql -e psql . migrations/ | xargs rg -l "flagged_answers_threshold"

Length of output: 1159


Script:

#!/bin/bash
# Check the content of migration files
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql
echo "---DOWN MIGRATION---"
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.down.sql

Length of output: 2426

services/headless-lms/models/.sqlx/query-630403a014992249ea36c52d3611b6c3617b6ca2d7aa07de995feaee3e16e22a.json (1)

Line range hint 3-24: Verify consistent implementation across related queries

The implementation of flagged_answers_threshold appears consistent with other queries. However, let's verify that all course-related queries include this field to prevent inconsistencies.

Also applies to: 105-109, 135-136

✅ Verification successful

All course-related queries consistently include flagged_answers_threshold

The field is properly included in all queries that select the full course record, maintaining consistency across the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistency in course-related queries
# Expected: All course SELECT queries should include flagged_answers_threshold

rg -l "SELECT.*FROM courses" . | xargs rg -L "flagged_answers_threshold"

Length of output: 690


Script:

#!/bin/bash
# Find all .sqlx files with SELECT queries from courses table
# and check if they include flagged_answers_threshold
fd '.sqlx$' . -x sh -c '
  echo "=== Checking file: {}" && 
  cat {} | 
  jq -r "select(.query | test(\"SELECT.*FROM courses\"; \"i\")) | 
    if (.query | test(\"flagged_answers_threshold\"; \"i\")) then
      \"✓ Has flagged_answers_threshold\"
    else 
      \"❌ Missing flagged_answers_threshold:\n\" + .query
    end"
'

Length of output: 327


Script:

#!/bin/bash
# First, find all .sqlx files
echo "=== Finding .sqlx files ==="
fd '.sqlx$'

# Then, examine the content of one file to understand structure
echo -e "\n=== Examining content structure ==="
fd '.sqlx$' | head -n 1 | xargs cat

Length of output: 203


Script:

#!/bin/bash
echo "=== Listing .sqlx files ==="
ls services/headless-lms/models/.sqlx/

echo -e "\n=== Checking course queries ==="
grep -l "SELECT.*FROM courses" services/headless-lms/models/.sqlx/* | \
while read file; do
  echo "=== File: $file ==="
  grep -A 1 "SELECT.*FROM courses" "$file"
done

Length of output: 131070

services/headless-lms/models/.sqlx/query-903b6bb1bf376d99638a2bf42751aeaa42b8bcf77a79c96262060cad5d4c6cc3.json (1)

Line range hint 3-25: LGTM: Query maintains data consistency

The query correctly includes flagged_answers_threshold while maintaining the existing filtering logic for non-deleted courses within a language group.

Also applies to: 105-109, 135-136

services/headless-lms/models/.sqlx/query-049f398260c49db909c2fb054a9beca4f06d3b2d921ce78dd9f52ce0fdf9ac06.json (1)

Line range hint 3-30: LGTM: User course settings query is complete

The query correctly includes flagged_answers_threshold while maintaining proper joins with user course settings.

Also applies to: 105-109, 135-136

services/headless-lms/models/.sqlx/query-75146edf141f538da63a12b252133490abd01b91cc1e21a5a19f1589d1d77ecf.json (1)

Line range hint 3-27: LGTM: Exam-related course query is complete

The query correctly includes flagged_answers_threshold while maintaining proper joins with course exams.

Also applies to: 105-109, 135-136

services/headless-lms/models/.sqlx/query-dfcbf0ef22c9cc2528c66b7fcd0c8189f6d504f3d81f12075b387a6128fb9d7b.json (2)

105-109: LGTM! Column metadata is properly defined.

The metadata for flagged_answers_threshold is correctly specified with:

  • Appropriate ordinal position (20)
  • Correct type (Int4)
  • Clear naming convention

3-3: Verify database migration for non-nullable column.

The addition of flagged_answers_threshold as a non-nullable column requires a corresponding migration that handles existing rows.

Let's verify the migration exists:

✅ Verification successful

Migration properly handles non-nullable column addition

The migration safely adds the flagged_answers_threshold column with a default value of 5, ensuring existing rows are handled correctly.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for migrations related to flagged_answers_threshold
rg -l "flagged_answers_threshold" --type sql

Length of output: 206


Script:

#!/bin/bash
# Display content of migration files
echo "=== UP Migration ==="
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql
echo -e "\n=== DOWN Migration ==="
cat services/headless-lms/migrations/20250107103628_add_flagging_for_answers.down.sql

Length of output: 2487

Comment on lines 1 to 86
import { css } from "@emotion/css"
import styled from "@emotion/styled"
import React, { useState } from "react"
import { useTranslation } from "react-i18next"

import Button from "@/shared-module/common/components/Button"
import Dialog from "@/shared-module/common/components/Dialog"
import RadioButton from "@/shared-module/common/components/InputFields/RadioButton"

const FieldContainer = styled.div`
  margin-bottom: 1rem;
`

const MarkAsSpamDialog: React.FC<{
  isOpen: boolean
  onClose: () => void
  onSubmit: (reason: string, description: string) => void
}> = ({ isOpen, onClose, onSubmit }) => {
  const { t } = useTranslation()
  const [selectedReason, setSelectedReason] = useState<string>("")
  const [description, setDescription] = useState<string>("")

  const handleSubmit = () => {
    if (selectedReason) {
      onSubmit(selectedReason, description)
      setSelectedReason("")
      setDescription("")
      onClose()
    }
  }

  return (
    <Dialog open={isOpen} onClose={onClose}>
      <h1> {t("title-report-dialog")}</h1>

      <div
        className={css`
          margin-bottom: 1rem;
        `}
      >
        {t("select-reason")}
        <FieldContainer>
          <RadioButton
            label={t("flagging-reason-spam")}
            value={t("flagging-reason-spam")}
            // eslint-disable-next-line i18next/no-literal-string
            name={"reason"}
            onChange={(e) => setSelectedReason(e.target.value)}
          ></RadioButton>
        </FieldContainer>
        <FieldContainer>
          <RadioButton
            label={t("flagging-reason-harmful-content")}
            value={t("flagging-reason-harmful-content")}
            // eslint-disable-next-line i18next/no-literal-string
            name={"reason"}
            onChange={(e) => setSelectedReason(e.target.value)}
          ></RadioButton>
        </FieldContainer>
        <FieldContainer>
          <RadioButton
            label={t("flagging-reason-ai-generated")}
            value={t("flagging-reason-ai-generated")} // eslint-disable-next-line i18next/no-literal-string
            name={"reason"}
            onChange={(e) => setSelectedReason(e.target.value)}
          ></RadioButton>
        </FieldContainer>
      </div>
      <textarea
        placeholder={t("optional-description")}
        value={description}
        onChange={(e) => setDescription(e.target.value)}
        className={css`
          width: 100%;
          height: 5rem;
          margin-bottom: 1rem;
        `}
      />
      <Button variant="primary" onClick={handleSubmit} disabled={!selectedReason} size={"small"}>
        {t("submit-button")}
      </Button>
    </Dialog>
  )
}

export default MarkAsSpamDialog

🛠️ Refactor suggestion

Fix typo in folder name and improve form validation.

  1. The folder name contains a typo: "PeerRevireMarkingSpam" should be "PeerReviewMarkingSpam"
  2. Add form validation for the description field
  3. Improve accessibility with ARIA labels
 const MarkAsSpamDialog: React.FC<{
   isOpen: boolean
   onClose: () => void
   onSubmit: (reason: string, description: string) => void
 }> = ({ isOpen, onClose, onSubmit }) => {
   const { t } = useTranslation()
   const [selectedReason, setSelectedReason] = useState<string>("")
   const [description, setDescription] = useState<string>("")
+  const [error, setError] = useState<string>("")
+  const MAX_DESCRIPTION_LENGTH = 1000
 
   const handleSubmit = () => {
     if (selectedReason) {
+      if (description && description.length > MAX_DESCRIPTION_LENGTH) {
+        setError(t("error-description-too-long"))
+        return
+      }
       onSubmit(selectedReason, description)
       setSelectedReason("")
       setDescription("")
+      setError("")
       onClose()
     }
   }
 
   return (
-    <Dialog open={isOpen} onClose={onClose}>
+    <Dialog 
+      open={isOpen} 
+      onClose={onClose}
+      aria-labelledby="report-dialog-title"
+      role="dialog"
+    >
-      <h1> {t("title-report-dialog")}</h1>
+      <h1 id="report-dialog-title">{t("title-report-dialog")}</h1>
 
       <div
         className={css`
           margin-bottom: 1rem;
         `}
+        role="radiogroup"
+        aria-label={t("select-reason")}
       >
         {t("select-reason")}
         <FieldContainer>
           <RadioButton
             label={t("flagging-reason-spam")}
             value={t("flagging-reason-spam")}
-            // eslint-disable-next-line i18next/no-literal-string
-            name={"reason"}
+            name="report-reason"
+            aria-describedby="report-reason-description"
             onChange={(e) => setSelectedReason(e.target.value)}
           />
         </FieldContainer>
         {/* Similar changes for other RadioButton components */}
       </div>
       <textarea
         placeholder={t("optional-description")}
         value={description}
         onChange={(e) => setDescription(e.target.value)}
+        maxLength={MAX_DESCRIPTION_LENGTH}
+        aria-label={t("report-description")}
         className={css`
           width: 100%;
           height: 5rem;
           margin-bottom: 1rem;
         `}
       />
+      {error && (
+        <div role="alert" className={css`color: ${baseTheme.colors.error};`}>
+          {error}
+        </div>
+      )}
       <Button
         variant="primary"
         onClick={handleSubmit}
         disabled={!selectedReason}
         size={"small"}
+        aria-label={t("submit-report")}
       >
         {t("submit-button")}
       </Button>
     </Dialog>
   )
 }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 35 to 37
const [peerReviewFlaggingThreshold, setPeerReviewFlaggingThreshold] = useState(
  course.flagged_answers_threshold ?? 5,
)

🛠️ Refactor suggestion

Add validation for the flagging threshold.

The threshold input needs additional validation:

  1. Add a reasonable maximum limit
  2. Prevent decimal numbers
  3. Add step attribute
 const [peerReviewFlaggingThreshold, setPeerReviewFlaggingThreshold] = useState(
   course.flagged_answers_threshold ?? 5,
 )

 // In the TextField component
 <TextField
   type={"number"}
   min={0}
+  max={100}
+  step={1}
   value={peerReviewFlaggingThreshold}
   label={t("label-threshold-to-move-flagged-answer-to-manual-review")}
   onChangeByValue={(value) => {
-    setPeerReviewFlaggingThreshold(Number(value))
+    const numValue = Math.floor(Number(value))
+    if (numValue >= 0 && numValue <= 100) {
+      setPeerReviewFlaggingThreshold(numValue)
+    }
   }}
 />

Also applies to: 155-165

Comment on lines 423 to 463
let increment_flag_count =
    exercise_slide_submissions::increment_flag_count(&mut conn, payload.submission_id)
        .await;

if let Ok(updated_flag_count) = increment_flag_count {
    let course = courses::get_course(&mut conn, course_id).await?;

    if let Some(flagged_answers_threshold) = course.flagged_answers_threshold {
        if updated_flag_count >= flagged_answers_threshold {
            let course_instance_or_exam_id = if let Some(course_instance_id) =
                flagged_submission_data.course_instance_id
            {
                CourseInstanceOrExamId::Instance(course_instance_id)
            } else if let Some(exam_id) = flagged_submission_data.exam_id {
                CourseInstanceOrExamId::Exam(exam_id)
            } else {
                return Err(ControllerError::new(
                    ControllerErrorType::InternalServerError,
                    "No course instance or exam ID found for the submission.".to_string(),
                    None,
                ));
            };

            let update_result = user_exercise_states::update_reviewing_stage(
                &mut conn,
                flagged_user,
                course_instance_or_exam_id,
                flagged_submission_data.exercise_id,
                ReviewingStage::WaitingForManualGrading,
            )
            .await;

            if update_result.is_err() {
                return Err(ControllerError::new(
                    ControllerErrorType::InternalServerError,
                    "Failed to update the reviewing stage.".to_string(),
                    None,
                ));
            }
        }
    }

⚠️ Potential issue

Fix potential race condition in flag count update.

The increment of flag count and the threshold check are not atomic, which could lead to race conditions if multiple users flag the answer simultaneously. Consider wrapping these operations in a transaction.

Apply this diff to fix the issue:

-        let increment_flag_count =
-            exercise_slide_submissions::increment_flag_count(&mut conn, payload.submission_id)
-                .await;
-
-        if let Ok(updated_flag_count) = increment_flag_count {
-            let course = courses::get_course(&mut conn, course_id).await?;
-
-            if let Some(flagged_answers_threshold) = course.flagged_answers_threshold {
-                if updated_flag_count >= flagged_answers_threshold {
-                    let course_instance_or_exam_id = if let Some(course_instance_id) =
-                        flagged_submission_data.course_instance_id
-                    {
-                        CourseInstanceOrExamId::Instance(course_instance_id)
-                    } else if let Some(exam_id) = flagged_submission_data.exam_id {
-                        CourseInstanceOrExamId::Exam(exam_id)
-                    } else {
-                        return Err(ControllerError::new(
-                            ControllerErrorType::InternalServerError,
-                            "No course instance or exam ID found for the submission.".to_string(),
-                            None,
-                        ));
-                    };
-
-                    let update_result = user_exercise_states::update_reviewing_stage(
-                        &mut conn,
-                        flagged_user,
-                        course_instance_or_exam_id,
-                        flagged_submission_data.exercise_id,
-                        ReviewingStage::WaitingForManualGrading,
-                    )
-                    .await;
-
-                    if update_result.is_err() {
-                        return Err(ControllerError::new(
-                            ControllerErrorType::InternalServerError,
-                            "Failed to update the reviewing stage.".to_string(),
-                            None,
-                        ));
-                    }
-                }
-            }
-        } else {
-            return Err(ControllerError::new(
-                ControllerErrorType::InternalServerError,
-                "Failed to increment the flag count.".to_string(),
-                None,
-            ));
-        }
+        let mut tx = conn.begin().await?;
+
+        let updated_flag_count =
+            exercise_slide_submissions::increment_flag_count(&mut *tx, payload.submission_id)
+                .await?;
+
+        let course = courses::get_course(&mut *tx, course_id).await?;
+
+        if let Some(flagged_answers_threshold) = course.flagged_answers_threshold {
+            if updated_flag_count >= flagged_answers_threshold {
+                let course_instance_or_exam_id = if let Some(course_instance_id) =
+                    flagged_submission_data.course_instance_id
+                {
+                    CourseInstanceOrExamId::Instance(course_instance_id)
+                } else if let Some(exam_id) = flagged_submission_data.exam_id {
+                    CourseInstanceOrExamId::Exam(exam_id)
+                } else {
+                    return Err(ControllerError::new(
+                        ControllerErrorType::InternalServerError,
+                        "No course instance or exam ID found for the submission.".to_string(),
+                        None,
+                    ));
+                };
+
+                user_exercise_states::update_reviewing_stage(
+                    &mut *tx,
+                    flagged_user,
+                    course_instance_or_exam_id,
+                    flagged_submission_data.exercise_id,
+                    ReviewingStage::WaitingForManualGrading,
+                )
+                .await?;
+            }
+        }
+
+        tx.commit().await?;

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +404 to +410
let new_flagged_answer = NewFlaggedAnswer {
    submission_id: payload.submission_id,
    flagged_user: Some(flagged_user),
    flagged_by: Some(user.id),
    reason: payload.reason.clone(),
    description: payload.description.clone(),
};

💡 Codebase verification

Add validation for flagged answer fields to prevent abuse

The codebase lacks validation for the reason and description fields in flagged answers:

  • No length constraints (using unconstrained TEXT type)
  • No content validation to prevent malicious input
  • No checks for empty strings or whitespace-only content

Consider adding validation (a database-level sketch follows this list) to:

  • Enforce reasonable length limits
  • Sanitize or escape potentially harmful content
  • Ensure meaningful content by rejecting empty or whitespace-only strings
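
A minimal database-level sketch of such guards, assuming the flagged_answers table quoted later in this review; the length limits are illustrative, not taken from the PR:

ALTER TABLE flagged_answers
  ADD CONSTRAINT flagged_answers_reason_not_blank
    CHECK (length(btrim(reason)) > 0),
  ADD CONSTRAINT flagged_answers_reason_length
    CHECK (char_length(reason) <= 100),
  ADD CONSTRAINT flagged_answers_description_length
    CHECK (description IS NULL OR char_length(description) <= 1000);

Constraints like these are only a backstop; request-level validation (as suggested later in this review) still belongs in the handler so users get friendly errors.
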
🔗 Analysis chain

Consider validating the reason and description fields.

The function doesn't validate the length or content of the reason and description fields. Consider adding validation to prevent abuse.

Run this script to check if there are any existing validations in the codebase:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for existing validations of flagged answer fields
# Test: Search for validation patterns. Expect: Validation logic for flagged answers.

# Check for validation patterns in Rust files
rg -A 5 'reason|description.*validate|check|verify' --type rust

# Check for SQL constraints
rg -A 5 'flagged_answers.*CHECK|CONSTRAINT' --type sql

Length of output: 68368


Script:

#!/bin/bash
# Search for flagged answers schema and models
rg -A 10 'CREATE TABLE.*flagged_answers|struct.*FlaggedAnswer' --type rust --type sql

# Search for validation patterns in similar features
rg -A 5 'validate.*length|validate.*content|validate_field' --type rust

Length of output: 3562

@@ -526,6 +528,7 @@ pub struct AnswerRequiringAttentionWithTasks {
    pub tasks: Vec<CourseMaterialExerciseTask>,
    pub given_peer_reviews: Vec<PeerReviewWithQuestionsAndAnswers>,
    pub received_peer_or_self_reviews: Vec<PeerReviewWithQuestionsAndAnswers>,
    pub reveived_peer_flagging_reports: Vec<FlaggedAnswer>,

⚠️ Potential issue

Fix typo in field name.

The field name "reveived_peer_flagging_reports" contains a typo. It should be "received_peer_flagging_reports".

Apply this diff to fix the typo:

-    pub reveived_peer_flagging_reports: Vec<FlaggedAnswer>,
+    pub received_peer_flagging_reports: Vec<FlaggedAnswer>,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub reveived_peer_flagging_reports: Vec<FlaggedAnswer>,
pub received_peer_flagging_reports: Vec<FlaggedAnswer>,

@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nSELECT id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\"\nFROM exercise_slide_submissions\nWHERE exercise_id = $1\n AND deleted_at IS NULL\nORDER BY created_at DESC\nLIMIT $2 OFFSET $3\n ",
"query": "\nSELECT id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\",\n flag_count\nFROM exercise_slide_submissions\nWHERE exercise_id = $1\n AND deleted_at IS NULL\nLIMIT $2 OFFSET $3\n ",

🛠️ Refactor suggestion

Add ORDER BY clause for consistent pagination.

When using LIMIT and OFFSET for pagination, it's crucial to have a consistent sort order to avoid skipping or duplicating rows across pages.

Add an ORDER BY clause:

 WHERE exercise_id = $1
   AND deleted_at IS NULL
+ORDER BY created_at DESC, id
 LIMIT $2 OFFSET $3
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"query": "\nSELECT id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\",\n flag_count\nFROM exercise_slide_submissions\nWHERE exercise_id = $1\n AND deleted_at IS NULL\nLIMIT $2 OFFSET $3\n ",
"query": "\nSELECT id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\",\n flag_count\nFROM exercise_slide_submissions\nWHERE exercise_id = $1\n AND deleted_at IS NULL\nORDER BY created_at DESC, id\nLIMIT $2 OFFSET $3\n ",

@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nINSERT INTO exercise_slide_submissions (\n id,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy\n )\nVALUES ($1, $2, $3, $4, $5, $6, $7, $8)\nRETURNING id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\"\n ",
"query": "\nINSERT INTO exercise_slide_submissions (\n id,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy\n )\nVALUES ($1, $2, $3, $4, $5, $6, $7, $8)\nRETURNING id,\n created_at,\n updated_at,\n deleted_at,\n exercise_slide_id,\n course_id,\n course_instance_id,\n exam_id,\n exercise_id,\n user_id,\n user_points_update_strategy AS \"user_points_update_strategy: _\",\n flag_count\n ",

⚠️ Potential issue

Add flag_count to INSERT VALUES clause or specify a default value

The flag_count column is marked as non-nullable but is missing from the INSERT VALUES clause. This could cause INSERT operations to fail.

Consider one of these solutions:

  1. Add flag_count to the INSERT VALUES clause:
 INSERT INTO exercise_slide_submissions (
     id,
     exercise_slide_id,
     course_id,
     course_instance_id,
     exam_id,
     exercise_id,
     user_id,
-    user_points_update_strategy
+    user_points_update_strategy,
+    flag_count
   )
-VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
+VALUES ($1, $2, $3, $4, $5, $6, $7, $8, 0)
  2. Or specify a default value in the database migration:
ALTER TABLE exercise_slide_submissions 
ADD COLUMN flag_count INT NOT NULL DEFAULT 0;

Also applies to: 99-99

@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8\nWHERE id = $9\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\n ",
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8,\n flagged_answers_threshold = $9\nWHERE id = $10\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",

🛠️ Refactor suggestion

Consider adding a default value for flagged_answers_threshold.

Since flagged_answers_threshold is non-nullable, consider adding a default value in the UPDATE query to handle cases where the parameter might not be provided.

-  "query": "\nUPDATE courses\nSET name = $1,\n  description = $2,\n  is_draft = $3,\n  is_test_mode = $4,\n  can_add_chatbot = $5,\n  is_unlisted = $6,\n  is_joinable_by_code_only = $7,\n  ask_marketing_consent = $8,\n  flagged_answers_threshold = $9\nWHERE id = $10\n
+  "query": "\nUPDATE courses\nSET name = $1,\n  description = $2,\n  is_draft = $3,\n  is_test_mode = $4,\n  can_add_chatbot = $5,\n  is_unlisted = $6,\n  is_joinable_by_code_only = $7,\n  ask_marketing_consent = $8,\n  flagged_answers_threshold = COALESCE($9, 3)\nWHERE id = $10\n
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8,\n flagged_answers_threshold = $9\nWHERE id = $10\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",
"query": "\nUPDATE courses\nSET name = $1,\n description = $2,\n is_draft = $3,\n is_test_mode = $4,\n can_add_chatbot = $5,\n is_unlisted = $6,\n is_joinable_by_code_only = $7,\n ask_marketing_consent = $8,\n flagged_answers_threshold = COALESCE($9, 3)\nWHERE id = $10\nRETURNING id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\n ",

@@ -1,6 +1,6 @@
{
"db_name": "PostgreSQL",
"query": "\nSELECT id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent\nFROM courses\nWHERE courses.deleted_at IS NULL\n AND (\n id IN (\n SELECT course_id\n FROM roles\n WHERE deleted_at IS NULL\n AND user_id = $1\n AND course_id IS NOT NULL\n )\n OR (\n id IN (\n SELECT ci.course_id\n FROM course_instances ci\n JOIN ROLES r ON r.course_instance_id = ci.id\n WHERE r.user_id = $1\n AND r.deleted_at IS NULL\n AND ci.deleted_at IS NULL\n )\n )\n ) ",
"query": "\nSELECT id,\n name,\n created_at,\n updated_at,\n organization_id,\n deleted_at,\n slug,\n content_search_language::text,\n language_code,\n copied_from,\n course_language_group_id,\n description,\n is_draft,\n is_test_mode,\n can_add_chatbot,\n is_unlisted,\n base_module_completion_requires_n_submodule_completions,\n is_joinable_by_code_only,\n join_code,\n ask_marketing_consent,\n flagged_answers_threshold\nFROM courses\nWHERE courses.deleted_at IS NULL\n AND (\n id IN (\n SELECT course_id\n FROM roles\n WHERE deleted_at IS NULL\n AND user_id = $1\n AND course_id IS NOT NULL\n )\n OR (\n id IN (\n SELECT ci.course_id\n FROM course_instances ci\n JOIN ROLES r ON r.course_instance_id = ci.id\n WHERE r.user_id = $1\n AND r.deleted_at IS NULL\n AND ci.deleted_at IS NULL\n )\n )\n ) ",

🛠️ Refactor suggestion

Add constraint to validate the threshold value.

Consider adding a CHECK constraint to ensure the threshold value is within a reasonable range (e.g., greater than 0).

ALTER TABLE courses ADD CONSTRAINT chk_flagged_answers_threshold 
  CHECK (flagged_answers_threshold > 0);

Comment on lines +135 to 136
false,
false

💡 Codebase verification

Code handling doesn't match database constraints for flagged_answers_threshold

The database schema defines flagged_answers_threshold as non-null with a default value of 5, which aligns with the business requirement that every course needs a threshold for triggering teacher reviews. However, the application code incorrectly treats this field as optional. Specific instances:

  • controllers/course_material/exercises.rs: Uses if let Some(flagged_answers_threshold) when the value is guaranteed to exist
  • Test files use Some(5) unnecessarily
🔗 Analysis chain

Verify the non-null constraint aligns with business requirements.

Setting flagged_answers_threshold as non-nullable (false in nullable array) enforces that every course must have a threshold value. This seems appropriate for the peer review system but should be confirmed.

Let's check if this is consistent across the codebase:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other usages of flagged_answers_threshold to verify consistency
ast-grep --pattern 'flagged_answers_threshold'

Length of output: 663


Script:

#!/bin/bash
# Search for SQL schema definitions and migrations related to flagged_answers_threshold
rg -A 5 -B 5 "flagged_answers_threshold" --type sql
fd -e sql -x cat {} \; | rg -A 5 -B 5 "CREATE TABLE.*courses"

Length of output: 4092

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (4)
services/headless-lms/models/src/exercise_slide_submissions.rs (2)

160-194: Consider adding database constraints for data integrity.

While the insert_flagged_answer function is well-implemented, consider adding database constraints (sketched in SQL after the list) to ensure:

  • A user cannot flag the same submission multiple times
  • A user cannot flag their own submissions
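
A sketch of those constraints in SQL, assuming the flagged_answers table and the deleted_at soft-delete column introduced by this PR; constraint and index names are illustrative:

-- one active flag per reviewer per submission (soft-deleted flags excluded)
CREATE UNIQUE INDEX flagged_answers_one_flag_per_user
  ON flagged_answers (submission_id, flagged_by)
  WHERE deleted_at IS NULL;

-- a reviewer cannot flag their own submission
ALTER TABLE flagged_answers
  ADD CONSTRAINT flagged_answers_no_self_flagging
    CHECK (flagged_user IS DISTINCT FROM flagged_by);

The partial unique index is used instead of a plain UNIQUE constraint so that a user can flag again after an earlier flag has been soft-deleted.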

245-275: Consider adding pagination to prevent memory issues.

The get_flagged_answers_submission_ids_by_flaggers_id function fetches all flagged submissions without pagination, which could cause memory issues with a large number of flags.

Consider adding pagination:

 pub async fn get_flagged_answers_submission_ids_by_flaggers_id(
     conn: &mut PgConnection,
     flagged_by: Uuid,
+    pagination: Option<Pagination>,
 ) -> ModelResult<Vec<Uuid>> {
+    let (limit, offset) = pagination
+        .map(|p| (p.limit(), p.offset()))
+        .unwrap_or((100, 0));
     let flagged_submissions = sqlx::query_as!(
         FlaggedAnswer,
         r#"
         SELECT
             id,
             submission_id,
             flagged_user,
             flagged_by,
             reason,
             description,
             created_at,
             updated_at,
             deleted_at
         FROM flagged_answers
         WHERE flagged_by = $1
           AND deleted_at IS NULL
+        LIMIT $2 OFFSET $3
         "#,
         flagged_by,
+        limit,
+        offset
     )
     .fetch_all(conn)
     .await?;

     Ok(flagged_submissions
         .into_iter()
         .map(|row| row.submission_id)
         .collect())
 }
services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (2)

18-18: Fix typo in import path: "PeerRevireMarkingSpam" should be "PeerReviewMarkingSpam"

The directory name contains a typo: "Revire" instead of "Review".

-import MarkAsSpamDialog from "./PeerRevireMarkingSpam/MarkAsSpamDialog"
+import MarkAsSpamDialog from "./PeerReviewMarkingSpam/MarkAsSpamDialog"

343-361: Enhance accessibility of the report functionality

The UI components need accessibility improvements:

  1. Add aria-label to the report button
  2. Add loading state indication during submission
  3. Consider keyboard navigation
 <Button
   className={css`
     display: flex !important;
     align-items: center;
     gap: 6px;
   `}
   variant={"icon"}
   transform="capitalize"
   size={"small"}
   onClick={() => setIsReportDialogOpen(true)}
+  aria-label={t("button-text-report")}
+  disabled={reportMutation.isPending}
 >
-  <ExclamationMessage /> {t("button-text-report")}
+  <ExclamationMessage aria-hidden="true" /> {t("button-text-report")}
+  {reportMutation.isPending && <Spinner size="small" />}
 </Button>

 <MarkAsSpamDialog
   isOpen={isReportDialogOpen}
   onClose={() => setIsReportDialogOpen(false)}
   onSubmit={handleReportSubmit}
+  isSubmitting={reportMutation.isPending}
 />
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 81a9db8 and b5f8bdc.

📒 Files selected for processing (5)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (5 hunks)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx (1 hunks)
  • services/headless-lms/models/.sqlx/query-2a2f7d900ec3ce47be59da8b9a874aa1bc7ac567323b25476e766e95d32c7ed1.json (1 hunks)
  • services/headless-lms/models/src/exercise_slide_submissions.rs (12 hunks)
  • services/headless-lms/models/src/library/peer_or_self_reviewing.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: headless_lms
  • GitHub Check: build
  • GitHub Check: build-and-deploy
🔇 Additional comments (6)
services/headless-lms/models/src/library/peer_or_self_reviewing.rs (1)

415-430: LGTM! Good enhancement to the peer review selection process.

The addition of flagged submissions to the exclusion list prevents users from reviewing submissions they have previously flagged, which helps maintain the objectivity of the peer review process.

services/headless-lms/models/.sqlx/query-2a2f7d900ec3ce47be59da8b9a874aa1bc7ac567323b25476e766e95d32c7ed1.json (1)

1-58: LGTM! Well-structured query definition.

The query is properly defined with:

  • Comprehensive column selection
  • Appropriate filtering of deleted records
  • Correct parameter typing for UUID
services/headless-lms/models/src/exercise_slide_submissions.rs (2)

62-62: LGTM! Good addition of flag tracking.

The flag_count field in ExerciseSlideSubmission enables tracking the number of times a submission has been flagged.


136-158: LGTM! Well-structured data models for flagged answers.

The FlaggedAnswer and NewFlaggedAnswer structs are well-designed with:

  • Clear separation between creation and stored data
  • Appropriate optional fields for flexibility
  • Comprehensive tracking of metadata (timestamps, users)
services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (2)

134-149: Consider security implications in report submission

The mutation sends null for flagged_user and flagged_by. This might:

  1. Make it difficult to track abuse of the reporting system
  2. Prevent proper auditing of reports
  3. Allow anonymous reports which could be misused

Additionally, consider adding validation for reason and description to prevent empty or malicious content.


139-145: Verify backend integration for report submission

Let's verify the integration with the backend service and ensure proper error handling.

✅ Verification successful

Backend integration verified with proper error handling

The implementation correctly uses the useToastMutation hook which provides built-in error handling and success notifications. The mutation is properly configured with error states, loading indicators, and success handling that includes UI updates and data refetching.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for the backend service implementation and error handling
rg -A 5 "postFlagAnswerInPeerReview" --type ts

# Check for existing error handling patterns
rg -A 10 "useToastMutation.*error" --type ts

Length of output: 2609


Script:

#!/bin/bash
# Get the complete implementation of PeerOrSelfReviewViewImpl
ast-grep --pattern 'const PeerOrSelfReviewViewImpl = $_'

# Check for error handling patterns in the component
rg -B 5 -A 10 "postFlagAnswerInPeerReview.*catch" --type ts
rg -B 5 -A 10 "useMutation.*postFlagAnswerInPeerReview" --type ts

Length of output: 16299


Script:

#!/bin/bash
# Get the specific file content
cat services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx

# Check for any error handling or toast notifications in the file
rg -B 5 -A 10 "try.*catch|useToast|onError" services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx

Length of output: 15632

Comment on lines +196 to +214
pub async fn increment_flag_count(
    conn: &mut PgConnection,
    submission_id: Uuid,
) -> ModelResult<i32> {
    let result = sqlx::query!(
        r#"
        UPDATE exercise_slide_submissions
        SET flag_count = flag_count + 1,
            updated_at = NOW()
        WHERE id = $1
        RETURNING flag_count
        "#,
        submission_id
    )
    .fetch_one(conn)
    .await?;

    Ok(result.flag_count)
}

🛠️ Refactor suggestion

Add error handling for concurrent flag updates.

The increment_flag_count function should handle potential race conditions when multiple users flag simultaneously.

Consider coalescing the current value so a NULL flag_count cannot swallow the increment; the UPDATE statement already runs under a row-level lock, so no extra locking clause is needed:

 pub async fn increment_flag_count(
     conn: &mut PgConnection,
     submission_id: Uuid,
 ) -> ModelResult<i32> {
     let result = sqlx::query!(
         r#"
         UPDATE exercise_slide_submissions
-        SET flag_count = flag_count + 1,
+        SET flag_count = COALESCE(flag_count, 0) + 1,
             updated_at = NOW()
         WHERE id = $1
         RETURNING flag_count
         "#,
         submission_id
     )
     .fetch_one(conn)
     .await?;

     Ok(result.flag_count)
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

pub async fn increment_flag_count(
    conn: &mut PgConnection,
    submission_id: Uuid,
) -> ModelResult<i32> {
    let result = sqlx::query!(
        r#"
        UPDATE exercise_slide_submissions
        SET flag_count = flag_count + 1,
            updated_at = NOW()
        WHERE id = $1
        RETURNING flag_count
        "#,
        submission_id
    )
    .fetch_one(conn)
    .await?;

    Ok(result.flag_count)
}

pub async fn increment_flag_count(
    conn: &mut PgConnection,
    submission_id: Uuid,
) -> ModelResult<i32> {
    let result = sqlx::query!(
        r#"
        UPDATE exercise_slide_submissions
        SET flag_count = COALESCE(flag_count, 0) + 1,
            updated_at = NOW()
        WHERE id = $1
        RETURNING flag_count
        "#,
        submission_id
    )
    .fetch_one(conn)
    .await?;

    Ok(result.flag_count)
}

Comment on lines 151 to 161
const handleReportSubmit = (reason: string, description: string) => {
  reportMutation.mutate(
    { reason, description },
    {
      onSuccess: () => {
        setAnswers(new Map())
        query.refetch()
      },
    },
  )
}

🛠️ Refactor suggestion

Improve report submission handler

The current implementation has several concerns:

  1. setAnswers(new Map()) clears all review answers when a report is submitted, which could lead to lost work
  2. Missing error handling for failed submissions
  3. No loading state management during submission

Consider this improved implementation:

 const handleReportSubmit = (reason: string, description: string) => {
   reportMutation.mutate(
     { reason, description },
     {
       onSuccess: () => {
-        setAnswers(new Map())
         query.refetch()
+        setIsReportDialogOpen(false)
       },
+      onError: (error) => {
+        // Handle error appropriately
+        console.error('Failed to submit report:', error)
+      }
     },
   )
 }

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
services/headless-lms/server/src/controllers/course_material/exercises.rs (2)

404-410: ⚠️ Potential issue

Add validation for flagged answer fields to prevent abuse.

The function doesn't validate the length or content of the reason and description fields.


423-433: ⚠️ Potential issue

Fix potential race condition in flag count update.

The increment of flag count and the threshold check are not atomic, which could lead to race conditions if multiple users flag the answer simultaneously.

services/headless-lms/models/src/exercise_slide_submissions.rs (1)

196-214: ⚠️ Potential issue

Add error handling for concurrent flag updates.

The function should handle potential race conditions when multiple users flag simultaneously.

🧹 Nitpick comments (2)
services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (1)

349-367: Enhance accessibility for the report functionality.

The UI components should include:

  1. ARIA labels for the report button
  2. Keyboard navigation support

Apply this diff to improve accessibility:

 <Button
   className={css`
     display: flex !important;
     align-items: center;
     gap: 6px;
   `}
   variant={"icon"}
   transform="capitalize"
   size={"small"}
   onClick={() => setIsReportDialogOpen(true)}
+  aria-label={t("button-text-report")}
 >
   <ExclamationMessage /> {t("button-text-report")}
 </Button>

 <MarkAsSpamDialog
   isOpen={isReportDialogOpen}
   onClose={() => setIsReportDialogOpen(false)}
   onSubmit={handleReportSubmit}
+  aria-describedby="report-dialog-description"
 />
services/headless-lms/models/src/exercise_slide_submissions.rs (1)

62-62: Add documentation for the new structs and fields.

Consider adding documentation to explain:

  1. The purpose of the flag_count field
  2. The fields in FlaggedAnswer struct
  3. The fields in NewFlaggedAnswer struct

Apply this diff to add documentation:

+/// Number of times this submission has been flagged by peer reviewers
 pub flag_count: Option<i32>,

+/// Represents a flagged answer in peer review
+///
+/// This struct contains information about who flagged the answer,
+/// the reason for flagging, and when it was flagged.
 #[derive(Debug, Serialize, Deserialize, PartialEq, Clone)]
 #[cfg_attr(feature = "ts_rs", derive(TS))]
 pub struct FlaggedAnswer {
     // ... fields ...
 }

+/// Represents the data needed to create a new flagged answer
+///
+/// This struct is used when submitting a new flag for an answer
+/// in peer review.
 #[derive(Debug, Serialize, Deserialize, PartialEq, Clone)]
 #[cfg_attr(feature = "ts_rs", derive(TS))]
 pub struct NewFlaggedAnswer {
     // ... fields ...
 }

Also applies to: 136-158

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b5f8bdc and 8a30fc6.

📒 Files selected for processing (5)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (6 hunks)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx (1 hunks)
  • services/headless-lms/models/.sqlx/query-86dd2c11afce68ecf9d9221cde16f86c59484d5fc6a81f73654f1922cf8a0d01.json (2 hunks)
  • services/headless-lms/models/src/exercise_slide_submissions.rs (12 hunks)
  • services/headless-lms/server/src/controllers/course_material/exercises.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: headless_lms
  • GitHub Check: build
  • GitHub Check: build-and-deploy
🔇 Additional comments (5)
services/headless-lms/models/.sqlx/query-86dd2c11afce68ecf9d9221cde16f86c59484d5fc6a81f73654f1922cf8a0d01.json (1)

Line range hint 3-80: LGTM! The query changes look good.

The query has been correctly updated to:

  • Join with the courses table to access the flagged_answers_threshold
  • Filter submissions where flag_count is less than the threshold
  • Include flag_count in the result set
services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (2)

3-3: LGTM! Import statement is correctly added.

The ExclamationMessage icon import aligns with its usage in the report button.


151-161: ⚠️ Potential issue

Fix handleReportSubmit implementation.

The current implementation has several issues:

  1. setAnswers(new Map()) clears all review answers when a report is submitted
  2. Missing error handling
  3. Duplicate refetch logic with mutation's onSuccess

Apply this diff to fix the implementation:

 const handleReportSubmit = (reason: string, description: string) => {
   reportMutation.mutate(
     { reason, description },
-    {
-      onSuccess: () => {
-        setAnswers(new Map())
-        query.refetch()
-      },
-    },
   )
 }

Likely invalid or redundant comment.

services/headless-lms/server/src/controllers/course_material/exercises.rs (1)

523-525: LGTM! Route configuration is correctly added.

The new endpoint is properly configured with the correct HTTP method and path.

services/headless-lms/models/src/exercise_slide_submissions.rs (1)

304-305: LGTM! SQL queries are consistently updated.

All relevant queries have been correctly updated to include the flag_count field in their result sets.

Also applies to: 349-350, 380-381, 458-459, 493-494, 527-528, 609-610, 681-682

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
services/headless-lms/models/src/library/grading.rs (1)

577-582: Consider optimizing database queries for flagged answers.

The current implementation makes a separate database call for each answer to fetch its flagged reports. This could lead to N+1 query performance issues when processing multiple answers.

Consider modifying the code to fetch all flagged answers for all submissions in a single query:

  1. First, collect all submission IDs:
let submission_ids: Vec<Uuid> = data.iter().map(|answer| answer.submission_id).collect();
  2. Then, fetch all flagged answers in one query (the batched SQL such a helper might run is sketched after this list):
let all_flagged_answers: HashMap<Uuid, Vec<FlaggedAnswer>> = 
    exercise_slide_submissions::get_flagged_answers_by_submission_ids(
        conn,
        &submission_ids,
    )
    .await?;
  3. Finally, use the map when constructing each answer:
let received_peer_review_flagging_reports = 
    all_flagged_answers
        .get(&answer.submission_id)
        .cloned()
        .unwrap_or_default();
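
For reference, a sketch of the batched query such a helper might run; get_flagged_answers_by_submission_ids does not exist in the codebase yet, and the column list assumes the flagged_answers schema from this PR:

SELECT id,
       submission_id,
       flagged_user,
       flagged_by,
       reason,
       description,
       created_at,
       updated_at,
       deleted_at
FROM flagged_answers
WHERE submission_id = ANY($1)
  AND deleted_at IS NULL;

With sqlx, $1 would be bound as a &[Uuid] (for example &submission_ids[..]), and the returned rows can then be grouped into the HashMap shown above.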

Also applies to: 597-597

services/headless-lms/server/src/programs/seed/seed_courses.rs (1)

100-100: LGTM! Consider adding documentation for the threshold value.

The addition of flagged_answers_threshold with a default value of 3 is appropriate.

Consider adding a code comment explaining why 3 was chosen as the default threshold value to help future maintainers understand the rationale.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8a30fc6 and 2e076b5.

📒 Files selected for processing (17)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx (6 hunks)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx (1 hunks)
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql (1 hunks)
  • services/headless-lms/models/src/chapters.rs (1 hunks)
  • services/headless-lms/models/src/courses.rs (18 hunks)
  • services/headless-lms/models/src/library/copying.rs (2 hunks)
  • services/headless-lms/models/src/library/grading.rs (4 hunks)
  • services/headless-lms/models/src/test_helper.rs (1 hunks)
  • services/headless-lms/server/src/programs/seed/seed_courses.rs (5 hunks)
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_cs.rs (1 hunks)
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs (4 hunks)
  • services/headless-lms/server/src/test_helper.rs (1 hunks)
  • services/headless-lms/server/tests/study_registry_test.rs (1 hunks)
  • services/main-frontend/src/components/page-specific/manage/courses/id/index/UpdateCourseForm.tsx (3 hunks)
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/AnswersRequiringAttentionItem.tsx (2 hunks)
  • shared-module/packages/common/src/bindings.guard.ts (8 hunks)
  • shared-module/packages/common/src/bindings.ts (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (12)
  • services/headless-lms/server/tests/study_registry_test.rs
  • services/headless-lms/server/src/test_helper.rs
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_cs.rs
  • services/headless-lms/server/src/programs/seed/seed_organizations/uh_mathstat.rs
  • services/headless-lms/models/src/test_helper.rs
  • services/main-frontend/src/components/page-specific/manage/exercises/id/submissions/AnswersRequiringAttentionItem.tsx
  • services/headless-lms/models/src/library/copying.rs
  • services/headless-lms/models/src/chapters.rs
  • services/headless-lms/models/src/courses.rs
  • services/main-frontend/src/components/page-specific/manage/courses/id/index/UpdateCourseForm.tsx
  • services/headless-lms/migrations/20250107103628_add_flagging_for_answers.up.sql
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerRevireMarkingSpam/MarkAsSpamDialog.tsx
👮 Files not reviewed due to content moderation or server errors (3)
  • services/course-material/src/components/ContentRenderer/moocfi/ExerciseBlock/PeerOrSelfReviewView/PeerOrSelfReviewViewImpl.tsx
  • shared-module/packages/common/src/bindings.ts
  • shared-module/packages/common/src/bindings.guard.ts
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: headless_lms
  • GitHub Check: build
  • GitHub Check: build-and-deploy
🔇 Additional comments (5)
services/headless-lms/models/src/library/grading.rs (2)

9-11: LGTM!

The import of FlaggedAnswer is correctly placed within the existing exercise_slide_submissions group.


531-531: LGTM!

The new field received_peer_review_flagging_reports is well-named and properly typed as Vec<FlaggedAnswer>.

services/headless-lms/server/src/programs/seed/seed_courses.rs (3)

2036-2036: LGTM! Consistent implementation across course types.

The flagged_answers_threshold is consistently implemented with the same default value as in seed_sample_course.


2164-2164: LGTM! Maintains consistency in course material seeding.

The flagged_answers_threshold implementation remains consistent in the CS course material seeding.


4454-4454: LGTM! Consider validating the threshold value for peer review courses.

The flagged_answers_threshold implementation is consistent with other course types. Since this is specifically for peer review courses, the threshold value might need special consideration.

Let's verify if there are any specific requirements or configurations for peer review courses that might affect the threshold value:

✅ Verification successful

Verified: The flagged_answers_threshold of 3 is appropriate for peer review courses

The threshold of 3 flags is separate from the peer review grading mechanism and serves as a content moderation trigger, which is a reasonable default value across all course types, including peer review courses.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any peer review specific configurations or thresholds
rg -A 3 "peer.*review.*threshold|threshold.*peer.*review"

# Look for any existing peer review configurations
ast-grep --pattern 'peer_review_config { $$$ }'

Length of output: 10865

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
services/headless-lms/server/src/controllers/course_material/exercises.rs (2)

404-410: ⚠️ Potential issue

Add validation for flagged answer fields.

The function lacks validation for the reason and description fields:

  • No length constraints
  • No content validation
  • No checks for empty strings

Apply this diff to add validation:

+const MAX_REASON_LENGTH: usize = 100;
+const MAX_DESCRIPTION_LENGTH: usize = 1000;
+
+fn validate_flagged_answer(answer: &NewFlaggedAnswer) -> Result<(), ControllerError> {
+    if answer.reason.trim().is_empty() {
+        return Err(ControllerError::new(
+            ControllerErrorType::BadRequest,
+            "Reason cannot be empty".to_string(),
+            None,
+        ));
+    }
+    
+    if answer.reason.len() > MAX_REASON_LENGTH {
+        return Err(ControllerError::new(
+            ControllerErrorType::BadRequest,
+            format!("Reason cannot be longer than {} characters", MAX_REASON_LENGTH),
+            None,
+        ));
+    }
+    
+    if let Some(desc) = &answer.description {
+        if desc.len() > MAX_DESCRIPTION_LENGTH {
+            return Err(ControllerError::new(
+                ControllerErrorType::BadRequest,
+                format!("Description cannot be longer than {} characters", MAX_DESCRIPTION_LENGTH),
+                None,
+            ));
+        }
+    }
+    
+    Ok(())
+}

 let new_flagged_answer = NewFlaggedAnswer {
     submission_id: payload.submission_id,
     flagged_user: Some(flagged_user),
     flagged_by: Some(user.id),
     reason: payload.reason.clone(),
     description: payload.description.clone(),
 };
+
+validate_flagged_answer(&new_flagged_answer)?;

412-433: ⚠️ Potential issue

Fix potential race condition in flag count update.

The increment of flag count and threshold check are not atomic, which could lead to race conditions if multiple users flag the answer simultaneously.

The operations should be wrapped in a transaction to ensure atomicity. See the transaction handling implementation in the past review comments.

🧹 Nitpick comments (3)
system-tests/src/tests/mark-as-spam.spec.ts (1)

1-23: Consider using a custom fixture for browser contexts.

The test setup creates and manages multiple browser contexts. Consider extracting this into a reusable Playwright fixture to reduce boilerplate code and improve maintainability.

// fixtures.ts
export const test = base.extend({
  contexts: async ({ browser }, use) => {
    const contexts = {
      student1: await browser.newContext({ storageState: "src/states/[email protected]" }),
      student2: await browser.newContext({ storageState: "src/states/[email protected]" }),
      teacher: await browser.newContext({ storageState: "src/states/[email protected]" })
    };
    
    await use(contexts);
    
    // Cleanup
    await Promise.all(Object.values(contexts).map(context => context.close()));
  }
});
services/headless-lms/server/src/controllers/course_material/exercises.rs (1)

369-371: Improve function documentation.

The function documentation could be more detailed about:

  • Expected payload format
  • Success/error responses
  • Side effects (e.g., updating review stage)
 /**
- * POST `/api/v0/course-material/exercises/:exercise_id/flag-peer-review-answer - Post a report of an answer in peer review made by a student
+ * Reports an answer in peer review as inappropriate.
+ * 
+ * # Endpoint
+ * POST `/api/v0/course-material/exercises/:exercise_id/flag-peer-review-answer`
+ * 
+ * # Request Payload
+ * ```json
+ * {
+ *   "submission_id": "uuid",
+ *   "flagged_user": "uuid",
+ *   "flagged_by": "uuid",
+ *   "reason": "string",
+ *   "description": "string"
+ * }
+ * ```
+ * 
+ * # Response
+ * - Success: `true`
+ * - Error: ControllerError with appropriate message
+ * 
+ * # Side Effects
+ * - Increments flag count for the submission
+ * - If flag count exceeds threshold:
+ *   - Updates reviewing stage to WaitingForManualGrading
+ *   - Removes submission from peer review queue
  */
shared-module/packages/common/src/locales/fi/main-frontend.json (1)

347-347: Ensure consistent terminology in translations.

The translations use different terms for similar concepts:

  • "Ilmoittaja" (reporter) vs "raportti" (report)
  • "Saapuneet raportoinnit" (received reports) vs "raportoitu vastaus" (reported answer)

Consider standardizing the terminology for better user experience.

-  "label-flagged-by": "Ilmoittaja",
+  "label-flagged-by": "Raportoija",
   "label-reason": "Syy",
-  "label-received-reports": "Saapuneet raportoinnit",
+  "label-received-reports": "Saapuneet raportit",
   "label-report": "Raportti",

Also applies to: 379-380, 383-383, 397-397

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2e076b5 and cee95ee.

📒 Files selected for processing (10)
  • .stylelintrc (0 hunks)
  • services/headless-lms/models/src/library/copying.rs (2 hunks)
  • services/headless-lms/server/src/controllers/course_material/exercises.rs (3 hunks)
  • services/headless-lms/server/src/programs/seed/seed_courses.rs (5 hunks)
  • services/main-frontend/src/components/page-specific/manage/courses/id/index/UpdateCourseForm.tsx (3 hunks)
  • shared-module/packages/common/src/locales/en/course-material.json (5 hunks)
  • shared-module/packages/common/src/locales/en/main-frontend.json (3 hunks)
  • shared-module/packages/common/src/locales/fi/course-material.json (5 hunks)
  • shared-module/packages/common/src/locales/fi/main-frontend.json (3 hunks)
  • system-tests/src/tests/mark-as-spam.spec.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • .stylelintrc
🚧 Files skipped from review as they are similar to previous changes (5)
  • shared-module/packages/common/src/locales/en/main-frontend.json
  • services/headless-lms/models/src/library/copying.rs
  • services/main-frontend/src/components/page-specific/manage/courses/id/index/UpdateCourseForm.tsx
  • services/headless-lms/server/src/programs/seed/seed_courses.rs
  • shared-module/packages/common/src/locales/en/course-material.json
🧰 Additional context used
🪛 Biome (1.9.4)
system-tests/src/tests/mark-as-spam.spec.ts

[error] 25-25: Unexpected empty object pattern.

(lint/correctness/noEmptyPattern)

⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build
  • GitHub Check: build-and-deploy
  • GitHub Check: headless_lms
🔇 Additional comments (6)
shared-module/packages/common/src/locales/fi/course-material.json (6)

26-26: LGTM! Translation for report button is clear and concise.

The Finnish translation "Ilmoita" for the report button accurately conveys the action and follows the established pattern of keeping UI button text concise.


95-97: LGTM! Flagging reason translations are clear and appropriate.

The translations for flagging reasons are well-structured and follow Finnish language conventions:

  • "Tekoälyn tuottama sisältö" (AI-generated content)
  • "Haitallinen sisältö" (Harmful content)
  • "Roskaposti" (Spam)

152-152: LGTM! Optional description placeholder text is appropriate.

The translation "Valinnainen kuvaus..." for the optional description follows the Finnish convention of using ellipsis (...) for placeholder text.


181-181: LGTM! Selection prompt is clear and follows conventions.

The translation "Valitse syy ilmoittamiselle:" (Select reason for reporting) is grammatically correct and includes the appropriate colon suffix for prompts.


206-206: LGTM! Dialog title translation is consistent.

The translation "Ilmoita vastaus" (Report answer) for the dialog title maintains consistency with the button text (line 26) and clearly indicates the purpose.


26-26: Verify completeness of reporting feature translations.

While the individual translations are good, let's verify that all necessary translations for the reporting feature are included.

Also applies to: 95-97, 152-152, 181-181, 206-206

✅ Verification successful

All reporting feature translations are present and complete

The Finnish translations for the reporting feature are complete, with all necessary keys present and properly translated across the locale files. This includes the reporting button, dialog titles, flagging reasons (spam, harmful content, AI-generated), and administrative labels.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any untranslated strings related to reporting/flagging in the codebase
echo "Searching for potential missing translations..."

# Look for report-related text in TypeScript/JavaScript files
rg -i "report|flag" --type ts --type js -g '!*.json' -g '!*.test.*' -A 2 -B 2

# Look for translation keys containing 'report' or 'flag' in other locale files
rg -i '"(report|flag)' --type json -g 'locales/*.json' -g '!course-material.json'

Length of output: 58024


Script:

#!/bin/bash
# Search for reporting-related translation keys in Finnish locale files
echo "Checking Finnish translations for reporting feature..."

# Create a pattern of all required translation keys
KEYS='"button-text-report"|"title-report-dialog"|"flagging-reason-spam"|"flagging-reason-harmful-content"|"flagging-reason-ai-generated"|"label-threshold-to-move-flagged-answer-to-manual-review"|"label-received-reports"|"label-report"|"label-flagged-by"'

# Search in Finnish locale files
fd . -t f -e json --full-path '*/fi/*' -x rg -i "($KEYS)"

Length of output: 771


Script:

#!/bin/bash
# Search for reporting-related translation keys in Finnish locale files
echo "Checking Finnish translations for reporting feature..."

# First check the specific file mentioned in review
echo "Checking shared-module/packages/common/src/locales/fi/course-material.json:"
rg -i '("button-text-report"|"title-report-dialog"|"flagging-reason-spam"|"flagging-reason-harmful-content"|"flagging-reason-ai-generated"|"label-threshold-to-move-flagged-answer-to-manual-review"|"label-received-reports"|"label-report"|"label-flagged-by")' "shared-module/packages/common/src/locales/fi/course-material.json"

# Then check other Finnish locale files
echo -e "\nChecking other Finnish locale files:"
fd . --full-path "shared-module/packages/common/src/locales/fi" -t f -e json -E "course-material.json" -x rg -i '("button-text-report"|"title-report-dialog"|"flagging-reason-spam"|"flagging-reason-harmful-content"|"flagging-reason-ai-generated"|"label-threshold-to-move-flagged-answer-to-manual-review"|"label-received-reports"|"label-report"|"label-flagged-by")'

Length of output: 1543

system-tests/src/tests/mark-as-spam.spec.ts: two outdated review comments (resolved)
}

return (
<Dialog open={isOpen} onClose={onClose} aria-labelledby="report-dialog-title">

Use StandardDialog here instead of a raw Dialog.

submission_id UUID NOT NULL REFERENCES exercise_slide_submissions(id),
flagged_user UUID NOT NULL REFERENCES users(id),
flagged_by UUID NOT NULL REFERENCES users(id),
reason TEXT NOT NULL,

Use a Postgres enum for reason instead of free-form TEXT.
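
A minimal sketch of what that could look like; the type name and values are illustrative (the PR currently stores reason as TEXT, and the USING cast assumes existing values already match the enum labels):

CREATE TYPE flagged_answer_reason AS ENUM ('spam', 'harmful-content', 'ai-generated');

ALTER TABLE flagged_answers
  ALTER COLUMN reason TYPE flagged_answer_reason
  USING reason::flagged_answer_reason;

On the Rust side the column could then map to a #[derive(sqlx::Type)] enum instead of a String.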

@@ -132,6 +133,147 @@ pub struct ExerciseSlideSubmissionAndUserExerciseStateList {
pub total_pages: u32,
}

#[derive(Debug, Serialize, Deserialize, PartialEq, Clone)]

All of these should be in a new flagged_answers file.

user: AuthUser,
) -> ControllerResult<web::Json<bool>> {
let mut conn = pool.acquire().await?;


Use the givepeerreview claim here to verify that the student actually received this answer for peer review.

description: payload.description.clone(),
};

let insert_result =

Wrap everything in a transaction
