Merge #639
639: Make doc-comments consistently be doc-comments r=irevoire a=CommanderStorm

# Pull Request

## Related issue
No related issue; this is a documentation-only change.

## What does this PR do?
- Make doc-comments consistently be doc-comments, i.e. use Rust's `///` doc-comment syntax instead of plain `//` comments (see the sketch below)
- Also fix a few links that `cargo doc` complained about
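
For context, here is a minimal sketch of the distinction this PR normalizes (hypothetical types, not code from this repository): rustdoc treats `///` comments as documentation attached to the following item and resolves bracketed intra-doc links inside them, while plain `//` comments are ignored entirely, so `cargo doc` can only check links that live in real doc-comments.

```rust
/// A doc-comment: rustdoc renders this paragraph on the item's docs page and
/// checks intra-doc links such as [`Catalog`], warning if they are broken.
pub struct Query {
    // A plain comment: visible in the source only, never in the generated docs.
    pub text: String,

    /// Maximum number of hits to return (this sentence appears in the docs).
    pub limit: Option<usize>,
}

/// Target of the intra-doc link used above.
pub struct Catalog {
    /// Names of the indexed documents.
    pub entries: Vec<String>,
}
```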

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Frank Elsinga <[email protected]>
meili-bors[bot] and CommanderStorm authored Jan 27, 2025
2 parents 52b71ff + db4e9ed commit b5cd83e
Showing 6 changed files with 29 additions and 29 deletions.
2 changes: 1 addition & 1 deletion src/documents.rs
@@ -1,7 +1,7 @@
 use async_trait::async_trait;
 use serde::{de::DeserializeOwned, Deserialize, Serialize};
 
-/// Derive the [`IndexConfig`](crate::documents::IndexConfig) trait.
+/// Derive the [`IndexConfig`] trait.
 ///
 /// ## Field attribute
 /// Use the `#[index_config(..)]` field attribute to generate the correct settings
2 changes: 1 addition & 1 deletion src/dumps.rs
@@ -6,7 +6,7 @@
 //!
 //! - Creating a dump is also referred to as exporting it, whereas launching Meilisearch with a dump is referred to as importing it.
 //!
-//! - During a [dump export](Client::create_dump), all [indexes](crate::indexes::Index) of the current instance are exported—together with their documents and settings—and saved as a single `.dump` file.
+//! - During a [dump export](crate::client::Client::create_dump), all [indexes](crate::indexes::Index) of the current instance are exported—together with their documents and settings—and saved as a single `.dump` file.
 //!
 //! - During a dump import, all indexes contained in the indicated `.dump` file are imported along with their associated documents and [settings](crate::settings::Settings).
 //! Any existing [index](crate::indexes::Index) with the same uid as an index in the dump file will be overwritten.
6 changes: 3 additions & 3 deletions src/errors.rs
@@ -49,16 +49,16 @@ pub enum Error {
     #[error("HTTP request failed: {}", .0)]
     HttpError(#[from] reqwest::Error),
 
-    // The library formatting the query parameters encountered an error.
+    /// The library formatting the query parameters encountered an error.
     #[error("Internal Error: could not parse the query parameters: {}", .0)]
     Yaup(#[from] yaup::Error),
 
-    // The library validating the format of an uuid.
+    /// The library validating the format of an uuid.
     #[cfg(not(target_arch = "wasm32"))]
     #[error("The uid of the token has bit an uuid4 format: {}", .0)]
     Uuid(#[from] uuid::Error),
 
-    // Error thrown in case the version of the Uuid is not v4.
+    /// Error thrown in case the version of the Uuid is not v4.
     #[error("The uid provided to the token is not of version uuidv4")]
     InvalidUuid4Version,
12 changes: 6 additions & 6 deletions src/lib.rs
@@ -230,7 +230,7 @@
 #![warn(clippy::all)]
 #![allow(clippy::needless_doctest_main)]
 
-/// Module containing the [`Client`] struct.
+/// Module containing the [`Client`](client::Client) struct.
 pub mod client;
 /// Module representing the [documents] structures.
 pub mod documents;
@@ -242,18 +242,18 @@ pub mod errors;
 pub mod features;
 /// Module containing the Index struct.
 pub mod indexes;
-/// Module containing the [`Key`] struct.
+/// Module containing the [`Key`](key::Key) struct.
 pub mod key;
 pub mod request;
 /// Module related to search queries and results.
 pub mod search;
-/// Module containing [`Settings`].
+/// Module containing [`Settings`](settings::Settings).
 pub mod settings;
-/// Module containing the [snapshots] trait.
+/// Module containing the [snapshots](snapshots::create_snapshot)-feature.
 pub mod snapshots;
-/// Module representing the [`TaskInfo`]s.
+/// Module representing the [`TaskInfo`](task_info::TaskInfo)s.
 pub mod task_info;
-/// Module representing the [`Task`]s.
+/// Module representing the [`Task`](tasks::Task)s.
 pub mod tasks;
 /// Module that generates tenant tokens.
 #[cfg(not(target_arch = "wasm32"))]
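
The `lib.rs` hunks go in the opposite direction from the `documents.rs` one: in `documents.rs` the shorter form resolves, presumably because `IndexConfig` is in scope in that module, whereas at the crate root only the modules themselves are in scope, so a bare link like `[Client]` usually has nothing to resolve against (assuming the type is not re-exported at the root) and needs an explicit path such as `(client::Client)`. A small sketch of the pattern, with hypothetical module and type names:

```rust
// Hypothetical names; a sketch of the crate-root linking pattern, not code
// from this crate.

/// Module containing the [`Connection`](net::Connection) struct.
///
/// The qualified path matters here: at the crate root only the module `net`
/// is in scope, so a link written as `[Connection]` (without the path) would
/// only resolve if the type were re-exported, e.g. `pub use net::Connection;`.
pub mod net {
    /// A hypothetical connection handle.
    pub struct Connection;
}
```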
8 changes: 4 additions & 4 deletions src/search.rs
@@ -76,13 +76,13 @@ pub struct SearchResults<T> {
     pub limit: Option<usize>,
     /// Estimated total number of matches.
     pub estimated_total_hits: Option<usize>,
-    // Current page number
+    /// Current page number
     pub page: Option<usize>,
-    // Maximum number of hits in a page.
+    /// Maximum number of hits in a page.
     pub hits_per_page: Option<usize>,
-    // Exhaustive number of matches.
+    /// Exhaustive number of matches.
     pub total_hits: Option<usize>,
-    // Exhaustive number of pages.
+    /// Exhaustive number of pages.
     pub total_pages: Option<usize>,
     /// Distribution of the given facets.
     pub facet_distribution: Option<HashMap<String, HashMap<String, usize>>>,
28 changes: 14 additions & 14 deletions src/tasks.rs
@@ -474,10 +474,10 @@ impl AsRef<u32> for Task {
 
 #[derive(Debug, Serialize, Clone)]
 pub struct TasksPaginationFilters {
-    // Maximum number of tasks to return.
+    /// Maximum number of tasks to return.
     #[serde(skip_serializing_if = "Option::is_none")]
     limit: Option<u32>,
-    // The first task uid that should be returned.
+    /// The first task uid that should be returned.
     #[serde(skip_serializing_if = "Option::is_none")]
     from: Option<u32>,
 }
@@ -497,52 +497,52 @@ pub type TasksDeleteQuery<'a, Http> = TasksQuery<'a, TasksDeleteFilters, Http>;
 pub struct TasksQuery<'a, T, Http: HttpClient> {
     #[serde(skip_serializing)]
     client: &'a Client<Http>,
-    // Index uids array to only retrieve the tasks of the indexes.
+    /// Index uids array to only retrieve the tasks of the indexes.
     #[serde(skip_serializing_if = "Option::is_none")]
     index_uids: Option<Vec<&'a str>>,
-    // Statuses array to only retrieve the tasks with these statuses.
+    /// Statuses array to only retrieve the tasks with these statuses.
     #[serde(skip_serializing_if = "Option::is_none")]
     statuses: Option<Vec<&'a str>>,
-    // Types array to only retrieve the tasks with these [TaskType].
+    /// Types array to only retrieve the tasks with these [`TaskType`]s.
     #[serde(skip_serializing_if = "Option::is_none", rename = "types")]
     task_types: Option<Vec<&'a str>>,
-    // Uids of the tasks to retrieve.
+    /// Uids of the tasks to retrieve.
     #[serde(skip_serializing_if = "Option::is_none")]
     uids: Option<Vec<&'a u32>>,
-    // Uids of the tasks that canceled other tasks.
+    /// Uids of the tasks that canceled other tasks.
     #[serde(skip_serializing_if = "Option::is_none")]
     canceled_by: Option<Vec<&'a u32>>,
-    // Date to retrieve all tasks that were enqueued before it.
+    /// Date to retrieve all tasks that were enqueued before it.
     #[serde(
         skip_serializing_if = "Option::is_none",
         serialize_with = "time::serde::rfc3339::option::serialize"
     )]
     before_enqueued_at: Option<OffsetDateTime>,
-    // Date to retrieve all tasks that were enqueued after it.
+    /// Date to retrieve all tasks that were enqueued after it.
     #[serde(
         skip_serializing_if = "Option::is_none",
         serialize_with = "time::serde::rfc3339::option::serialize"
     )]
     after_enqueued_at: Option<OffsetDateTime>,
-    // Date to retrieve all tasks that were started before it.
+    /// Date to retrieve all tasks that were started before it.
     #[serde(
         skip_serializing_if = "Option::is_none",
         serialize_with = "time::serde::rfc3339::option::serialize"
     )]
     before_started_at: Option<OffsetDateTime>,
-    // Date to retrieve all tasks that were started after it.
+    /// Date to retrieve all tasks that were started after it.
     #[serde(
         skip_serializing_if = "Option::is_none",
         serialize_with = "time::serde::rfc3339::option::serialize"
     )]
     after_started_at: Option<OffsetDateTime>,
-    // Date to retrieve all tasks that were finished before it.
+    /// Date to retrieve all tasks that were finished before it.
     #[serde(
         skip_serializing_if = "Option::is_none",
         serialize_with = "time::serde::rfc3339::option::serialize"
     )]
     before_finished_at: Option<OffsetDateTime>,
-    // Date to retrieve all tasks that were finished after it.
+    /// Date to retrieve all tasks that were finished after it.
     #[serde(
         skip_serializing_if = "Option::is_none",
         serialize_with = "time::serde::rfc3339::option::serialize"
@@ -552,7 +552,7 @@ pub struct TasksQuery<'a, T, Http: HttpClient> {
     #[serde(flatten)]
     pagination: T,
 
-    // Whether to reverse the sort
+    /// Whether to reverse the sort
     #[serde(skip_serializing_if = "Option::is_none")]
     reverse: Option<bool>,
 }
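
A possible follow-up, not part of this PR: once every comment that is meant as documentation is a real doc-comment, broken intra-doc links like the ones fixed here can be promoted from warnings to hard errors, either with `RUSTDOCFLAGS="-D rustdoc::broken_intra_doc_links" cargo doc --no-deps` or with a crate-level lint attribute, sketched below.

```rust
// One way (not used in this PR) to make `cargo doc` fail on broken intra-doc
// links. The lint lives in the `rustdoc` tool-lint group; a normal
// `cargo build` does not evaluate it.
#![deny(rustdoc::broken_intra_doc_links)]

/// Crate-level documentation would normally go here.
pub mod example {}
```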
