TableSchema macros
This attribute enables auto-creation and auto-updating of the table schema.
The following cases are handled:
- If the table does not exist at all - it is created according to the schema;
- If a column does not exist - the column is created;
- If the Primary Key does not exist - it is created;
- If an Index does not exist - it is created;
- If the Primary Key differs between the DB and the Model - it is dropped and recreated;
- If an Index differs between the DB and the Model - it is dropped and recreated;
- If a column is NOT NULL, it can be changed to allow NULL;
What cannot be done automatically and can be considered a future improvement:
- If the property type differs from the column type - the column cannot be changed, since it may already contain data;
- If a nullable property should become NOT NULL - the column cannot be changed, since the table may already contain NULL values;
Example of a table schema with a primary key that includes two fields:
```rust
#[derive(SelectDbEntity, InsertDbEntity, UpdateDbEntity)]
pub struct KeyValueDto {
    #[primary_key(1)] // Second Primary Key field
    pub key: String,
    #[primary_key(0)] // First Primary Key field
    pub client_id: String,
    pub value: String,
}
```
The index of the primary key can be 0,1,2 or 1,2,3 or 10,20,30. It helps the table schema engine understand which field goes first, which one goes second, etc.
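For instance, a sketch of sparse primary-key indexes, which work the same as sequential ones (the struct name here is illustrative):

```rust
#[derive(SelectDbEntity, InsertDbEntity, UpdateDbEntity)]
pub struct KeyValueV2Dto {
    // 10 and 20 behave exactly like 0 and 1: only the relative order
    // matters, and the gaps leave room to insert key fields later.
    #[primary_key(10)]
    pub client_id: String,
    #[primary_key(20)]
    pub key: String,
    pub value: String,
}
```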
The #[db_index] attribute specifies additional indexes to be created during the schema validation process.
A general example of the usage of DbIndex:
```rust
#[derive(InsertDbEntity, TableSchema)]
pub struct ClientCredentialsDto {
    #[primary_key(0)]
    pub id: String,
    #[db_index(id:0, index_name: "email_idx", is_unique: true, order: "ASC")]
    pub email: String,
    pub hash: String,
    pub rnd: String,
    pub process_id: String,
    #[sql_type("timestamp")]
    pub created: DateTimeAsMicroseconds,
}
```
In this example, the email property is mapped to an indexed column in the DB via a unique index named email_idx with ASC ordering.
The order parameter of the index can have two options:
- ASC,
- DESC
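For instance, a non-unique descending index over a timestamp column could be sketched like this (the struct and index names are illustrative):

```rust
#[derive(InsertDbEntity, TableSchema)]
pub struct EventDto {
    #[primary_key(0)]
    pub id: String,
    // DESC ordering suits newest-first lookups.
    #[db_index(id:0, index_name: "created_idx", is_unique: false, order: "DESC")]
    #[sql_type("timestamp")]
    pub created: DateTimeAsMicroseconds,
}
```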
If you want an index with two or more fields, mark each property that should be part of the index with the #[db_index] attribute, observing these conditions:
The following parameters must have the same value to combine the properties into a single index:
- index_name;
- is_unique;
Other parameters:
- id - must be unique within the index;
- order - specifies the ordering of the index value (ASC or DESC);
A general example of a multi-field DbIndex:
```rust
#[derive(InsertDbEntity, TableSchema)]
pub struct ClientCredentialsDto {
    #[primary_key(0)]
    #[db_index(id:0, index_name: "some_idx", is_unique: true, order: "ASC")]
    pub id: String,
    #[db_index(id:1, index_name: "some_idx", is_unique: true, order: "ASC")]
    pub email: String,
    pub hash: String,
    pub rnd: String,
    pub process_id: String,
    #[sql_type("timestamp")]
    pub created: DateTimeAsMicroseconds,
}
```
- #[db_field_name]
- #[sql_type]
Same usage as here: https://github.com/MyJetTools/my-postgres-macros/wiki/Other-attributes
- #[ignore_table_column] - excludes the column from the table schema
- #[default_value("my_value")] - specifies the default value of the column
To read the default value from an enum, the enum case must be marked with the #[default_value] attribute:
```rust
#[derive(DbEnumAsI32)]
pub enum MyEnum {
    #[enum_case(0)]
    Field1,
    #[default_value]
    #[enum_case(1)]
    Field2,
}
```
This makes it possible to mark a table schema field as having a default without specifying the value; the value is read from the enum's attribute:
```rust
#[derive(TableSchema)]
pub struct TableSchemaModel {
    // Other fields....
    #[default_value]
    pub enum_field: MyEnum,
}
```
Usually, if we create a repository object that uses Postgres as a database, we can verify the schema during the MyPostgres creation phase:
```rust
pub const TABLE_NAME: &str = "key_value";
pub const PK_NAME: &str = "key_value_pk";

pub struct KeyValueRepo {
    postgres: MyPostgres,
}

impl KeyValueRepo {
    pub async fn new(settings: Arc<crate::settings::SettingsReader>) -> Self {
        Self {
            postgres: MyPostgres::new(
                crate::app::APP_NAME.to_string(),
                settings,
                my_logger::LOGGER.clone(),
            )
            .with_table_schema_verification::<KeyValueDto>(TABLE_NAME, PK_NAME.to_string().into())
            .await,
        }
    }
}
```
The results of the schema verification and synchronization can be seen in the logs.
To generate an Update structure, the fields that should be included in the structure must be marked with the #[generate_update_model] attribute.
The attribute supports the params:
- name - name of the Rust struct to be generated;
- param_type - can be either "where" or "update";
```rust
#[derive(TableSchema)]
pub struct MyDto {
    #[generate_update_model(name:"UpdateAcceptedDto", param_type:"where")]
    pub id: String,
    #[generate_update_model(name:"UpdateAcceptedDto", param_type:"update")]
    pub accepted: Option<DateTimeAsMicroseconds>,
}
```
This attribute would generate the struct:
```rust
pub struct UpdateAcceptedDto {
    pub accepted: Option<rust_extensions::date_time::DateTimeAsMicroseconds>,
    pub id: String,
}
```
which can be used in an update-fields scenario:
```rust
pub async fn make_deposit_accepted(
    &self,
    id: &str,
    accept_time: DateTimeAsMicroseconds,
    my_telemetry: &MyTelemetryContext,
) -> Result<(), MyPostgresError> {
    let update_model = UpdateAcceptedDto {
        accepted: Some(accept_time),
        id: id.to_string(),
    };

    self.postgres
        .update_db_entity(&update_model, TABLE_NAME, Some(my_telemetry))
        .await?;

    Ok(())
}
```
To generate a Where structure, fields must be marked with #[generate_where_model],
which supports the params:
- name - name of the struct to be generated;
- operator - overrides the operator for the where model;
- operator_from - uses the field as the lower bound of a range search;
- operator_to - uses the field as the upper bound of a range search;
- as_str - generates the &'s str type instead of String;
- as_option - wraps the type in Option;
- as_vec - wraps the type in Vec;
- ignore_if_none - adds the #[ignore_if_none] attribute to the generated field;
- limit:"field_name" - adds a Limit field with the specified name to the output model. Has to be added to only one of the generate_where_model attributes;
Examples:
```rust
#[derive(TableSchema)]
pub struct MyTableModel {
    #[generate_where_model(name:"ByTraderIdAndDateWhereModel", as_str=true, as_vec=true)]
    #[db_column_name(name:"my_trader_id")]
    pub trader_id: String,
    #[sql_type("timestamp")]
    #[generate_where_model(name:"ByTraderIdAndDateWhereModel", operator_from: ">", operator_to: "<", limit:"limit_field_name")]
    pub date: DateTimeAsMicroseconds,
}
```
```rust
// Auto-generated code
#[derive(my_postgres_macros::WhereDbModel)]
pub struct ByTraderIdAndDateWhereModel<'s> {
    #[db_column_name("my_trader_id")]
    pub trader_id: &'s str,
    #[operator(">")]
    #[sql_type("timestamp")]
    #[db_column_name("date")]
    pub date_from: rust_extensions::date_time::DateTimeAsMicroseconds,
    #[operator("<")]
    #[sql_type("timestamp")]
    #[db_column_name("date")]
    pub date_to: rust_extensions::date_time::DateTimeAsMicroseconds,
}
```
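A sketch of how the generated where model could be used in a query. Note: the `query_rows` method name and returning `MyTableModel` rows are assumptions made for illustration, not confirmed by this page; check the crate's query documentation for the exact API:

```rust
// Hypothetical usage; `query_rows` is an assumption here.
let where_model = ByTraderIdAndDateWhereModel {
    trader_id: "trader-1",
    date_from,
    date_to,
};

let rows: Vec<MyTableModel> = self
    .postgres
    .query_rows(TABLE_NAME, Some(&where_model), Some(my_telemetry))
    .await?;
```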
To generate a Select model, the attribute #[generate_select_model("StructName")] can help:
```rust
#[derive(TableSchema)]
pub struct MyTableModel {
    #[generate_select_model("MySelectDto")]
    #[db_column_name(name:"my_trader_id")]
    pub trader_id: String,
    #[sql_type("timestamp")]
    pub date: DateTimeAsMicroseconds,
}
```
Auto generates:
```rust
#[derive(my_postgres_macros::SelectDbEntity)]
pub struct MySelectDto {
    #[db_column_name("my_trader_id")]
    pub trader_id: String,
}
```
All the unit tests can be found here: https://github.com/MyJetTools/my-postgres-macros/tree/main/tests/src/dto/table_schema_tests