Basic Checks

Overview

Alias performs a series of automated tests on each survey response to detect various types of low-quality and fraudulent content. These basic checks include:

  1. Gibberish
  2. Off-topic
  3. Low-effort
  4. GPT-generated Content
  5. Profane

Gibberish

Alias identifies responses that lack coherent semantic content and flags them as "Automated test: Gibberish".

Example:

  • Question: "What is your favorite book and why?"
  • Response: "asdfghjkl"

Off-topic

Responses that are unrelated to the question asked are flagged as "Automated test: Off-topic".

Example:

  • Question: "How do you approach problem-solving?"
  • Response: "I love going to the beach on sunny days."

Low-effort

Alias flags responses that provide minimal information or insufficient detail as "Low-effort".

Example:

  • Question: "Describe a challenging project you worked on."
  • Response: "No."

GPT-generated

Responses that appear to be generated by GPT or another large language model (LLM) are flagged as "Automated test: GPT".

Example:

  • Question: "What strategies do you use to manage stress?"
  • Response: "As a large language model, I do not experience stress."

Profane

Alias flags responses that contain profane, vulgar, or explicit language.
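
For reference, the sketch below collects the flag labels quoted in the sections above into a single lookup table. This is a minimal sketch in Python; the dictionary name is arbitrary, and the label for profane responses is not stated on this page, so it is omitted rather than guessed.

```python
# Flag labels exactly as they are quoted on this page, keyed by check name.
# The label for the Profane check is not documented here, so it is omitted
# rather than guessed.
BASIC_CHECK_LABELS = {
    "gibberish": "Automated test: Gibberish",
    "off_topic": "Automated test: Off-topic",
    "low_effort": "Low-effort",
    "gpt_generated": "Automated test: GPT",
}
```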

Interpreting Basic Check Results

When a response fails one or more basic checks, it is included in the checks object of the API response.

For example:

"checks": {
  "Q1": [
    "Automated test: Off-topic",
    "Low-effort"
  ]
}

This indicates that the response to question "Q1" was flagged as both off-topic and low-effort.

Use the results of these basic checks to quickly identify and filter out low-quality responses before proceeding with more in-depth analysis.
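
As a concrete illustration of that workflow, the sketch below separates clean responses from flagged ones using the checks object. It assumes the API response has already been parsed into a Python dictionary shaped like the example above; the helper name and the surrounding "responses" map are illustrative assumptions, not part of the Alias API.

```python
from typing import Any


def split_by_checks(api_response: dict[str, Any]) -> tuple[set[str], dict[str, list[str]]]:
    """Separate clean question IDs from flagged ones.

    Assumes api_response["checks"] maps question IDs (e.g. "Q1") to the
    list of flag labels they failed, as in the example above. The
    "responses" map and this helper's name are illustrative assumptions.
    """
    checks = api_response.get("checks", {})
    # Keep only questions that actually failed at least one check.
    flagged = {qid: labels for qid, labels in checks.items() if labels}
    # Everything else is treated as clean and can move on to deeper analysis.
    clean = {qid for qid in api_response.get("responses", {}) if qid not in flagged}
    return clean, flagged


# Usage with the checks object shown above.
example = {
    "responses": {"Q1": "I love going to the beach on sunny days.", "Q2": "A detailed answer."},
    "checks": {"Q1": ["Automated test: Off-topic", "Low-effort"]},
}
clean_ids, flagged_ids = split_by_checks(example)
print(clean_ids)    # {'Q2'}
print(flagged_ids)  # {'Q1': ['Automated test: Off-topic', 'Low-effort']}
```

Responses in the clean set can proceed to further analysis, while the flagged mapping preserves the reason each response was excluded.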

Next Steps