diff --git a/docs/engineering/architecture/index.md b/docs/engineering/architecture/index.md
new file mode 100644
index 000000000..bca95133e
--- /dev/null
+++ b/docs/engineering/architecture/index.md
@@ -0,0 +1,158 @@
# Proposed Back End Architecture

The main pieces of the back end code base are described here, with more depth available on the pages for specific topics.

## Initial Definitions

We need a few terms to get started:

- **web service**: the Django service
- **Django DB**: the DB (PostgreSQL or sqlite3) where Django keeps model data
- **User DB**: a DB owned by a Mathesar user, with their tables on it. This is a DB shown in the Mathesar UI.

## Bird's Eye View

The main goals of this architectural redesign are to

- improve the speed of the back end, and
- reduce the complexity of the back end.

Secondary goals are to

- improve the convenience of the API for the use case of our front end, and
- make it easier for users and contributors to identify which back end code supports a given front end feature.

We plan to do this by:

- Removing Django models representing User DB objects (e.g., tables), replacing them with functions that query and act on those DB objects directly. Insofar as we need to enrich User DB objects with Mathesar-managed metadata, we'll do that after gathering the relevant info from the User DB.
- Changing our API to use a JSON-RPC (2.0) spec.

## A Motivating Example: Getting the table info in a schema

To get the table info in a schema using our current architecture,

1. The front end calls the endpoint `GET /api/db/v0/tables/` using a query string parameter to filter results from that endpoint based on a schema (identified by a Django-assigned integer id). Internally, the following then happens:

1. The web service builds a query that gets some `Table` model instances from the Django DB, filtered based on the desired schema as well as applicable access control policies, and runs it. This gets the following info for each table:

    - `created_at` -- The date of creation of the table model instance (not the actual table)
    - `updated_at` -- The date of last modification of the table model instance (not the actual table)
    - `import_verified` -- Whether the import process was verified by the user for this table
    - `is_temp` -- Whether this table is supposed to be copied into a preexisting table, then deleted
    - `import_target_id` -- A preexisting table which should receive this table's data
    - `schema_id` -- The Django id of the schema containing the table
    - `data_files` -- A list of any data files imported to the table
    - `settings` -- Some metadata describing how the table is displayed

1. The web service determines which connection to use with the User DB by querying for the `Database` model instance (called a connection in the API) under which the requested schema lives, and asking that model to give it a connection string.

1. The web service then gathers the following info _for each table_ by querying the User DB. These queries are initiated by `@property` annotations in the Django models.

    - `name`
    - `description` -- The comment (description) of the table, defined in the User DB.
    - `has_dependents` (many requests for this, actually)
    - `columns` -- These are found by following a foreign key link in the Django DB. Each column model instance then runs a bunch of queries to gather relevant info from the user DB and Django DB.

1. _For each column of each table_, the web service then gathers the following info by querying the Django DB (can be batched):

    - `display_options` -- These describe column-level metadata about how to show the column in the UI.

1. _For each column of each table_, we gather the following info by querying the user DB, again with queries initiated by `@property` annotations on the Django `Column` model. For these, we end up making separate requests:

    - `name` (multiple requests to the user DB for each column)
    - `type`
    - `type_options` -- e.g., the precision specified for a `numeric` column

All of this gets joined together, then sent back as a response from the API.

With the new architecture, to get the same info,

1. The front end calls an RPC endpoint `/api/v0/rpc/`, calling a function `get_schema_table_details` with `database` and `schema` parameters. The database is identified by a Django id referring to a database model, and the schema is identified by an OID.

1. The web service uses the `user` and `database` (the user is picked up from the request object) to acquire a connection string.

1. Using that connection, the web service calls a PL/pgSQL function installed on the User DB called `get_schema_table_details` to gather

    - `name`
    - `description`
    - `has_dependents`
    - `columns`
        - `name`
        - `type`
        - `type_options`
    - `preview_settings` -- describes how we should show each table's rows when it's linked to by a foreign key

1. Using the returned info, the web service filters a `TableMetadata` model based on the passed `database` and returned `oid`s, and gathers

    - `import_verified`
    - `is_temp`
    - `import_target_id`
    - `data_files`
    - `column_order` -- describes the order in which columns should be displayed

1. Using the same returned info, the web service filters a `ColumnMetadata` model based on returned `oid, attnum` pairs to gather (for each column)

    - `display_options`

We then join all of this together and return it as a response from the API.

The fundamental difference is that in the current version, we use foreign keys between Django models to find tables for the schema, then columns for each table, and all queries on the User DB are initiated by functions on these model instances. In the new version, we instead run a query on the User DB to gather all relevant table and column info available on that DB, then enrich that data with metadata stored in non-foreign-key-linked metadata models in the Django DB.
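To make the new flow concrete, here is a rough sketch of what the front end's JSON-RPC 2.0 call for this example might look like. The endpoint, the `get_schema_table_details` method, and its `database` and `schema` parameters come from the steps above; the exact payload shape, authentication, and response fields are illustrative assumptions rather than a finalized API.

```python
import requests

# Illustrative only: the method and endpoint follow the example above, but the
# payload shape and response fields are assumptions, not a finalized spec.
RPC_URL = "https://mathesar.example.com/api/v0/rpc/"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "get_schema_table_details",
    "params": {
        "database": 8,   # Django id of the Database model (a "connection")
        "schema": 2200,  # OID of the schema on the User DB
    },
}

response = requests.post(RPC_URL, json=payload, cookies={"sessionid": "<session>"})
tables = response.json()["result"]

# Each entry would combine User DB info (name, description, columns, ...) with
# Django-side metadata (import_verified, column_order, display_options, ...).
for table in tables:
    print(table["name"], [col["name"] for col in table["columns"]])
```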
## Introduction to relevant layers

The layers introduced here will be discussed in more detail in other sections.

### User database

For each API call, there should be an identifiable DB function that performs all User Database operations needed to satisfy that call. For example,

- Calling the function `get_tables(database, schema)` should result in a call to some function `get_tables(sch_oid oid)` on the `database`.

To achieve this, we will install the following on the user database(s):

- Some custom Mathesar types. These are used to validate passed JSON at the User DB level. For example, we create the type
  ```
  TYPE __msar.col_def AS (
    name_ text, -- The name of the column to create, quoted.
    type_ text, -- The type of the column to create, fully specced with arguments.
    not_null boolean, -- A boolean to describe whether the column is nullable or not.
    default_ text, -- Text SQL giving the default value for the column.
    identity_ boolean, -- A boolean giving whether the column is an identity pkey column.
    description text -- A text that will become a comment for the column
  )
  ```
  This type describes and validates the column info we need to add that column to a table.
- A set of functions that provide the bulk of Mathesar's back end logic and functionality.

### Python `db` library

This library should mostly serve to provide thin wrapper functions around the User DB layer functions. These functions should take parameters from requests (never request objects themselves) and engines, and then call the underlying DB functions. They should then pass the results up toward the API.
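To illustrate how thin these wrappers are meant to be, here is a hedged sketch of a `db`-library function delegating to a hypothetical `msar.get_tables(sch_oid oid)` SQL function. The function and schema names, and the use of a psycopg connection rather than an SQLAlchemy engine, are assumptions for illustration; only the layering (plain parameters in, one DB function call, results passed back up) reflects the description above.

```python
import json

import psycopg


def get_tables(conn: psycopg.Connection, schema_oid: int) -> list[dict]:
    """Thin wrapper: all real logic lives in a SQL function on the User DB.

    `msar.get_tables` is a hypothetical function name used for illustration.
    """
    with conn.cursor() as cur:
        cur.execute("SELECT msar.get_tables(%s)", (schema_oid,))
        (result,) = cur.fetchone()
    # Assume the SQL function returns its result as a JSON document.
    return result if isinstance(result, list) else json.loads(result)
```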
### Web service

This service should provide a JSON-RPC API for use by the front end. When an API function is called, the service should:

- Grab an appropriate engine by combining the `user` associated with the request with the `database`. See the [models](models.md) page for more detail.
- Call the relevant `db` library function (there should be only one in most cases).
- Gather data from the service database via models if needed (this is for metadata that's inappropriate for storage in the User DB for some reason).
- Return the combined result as the API response.

## Permissions and users

- All permission checks for accessing a User DB object (e.g., table) should happen on the User DB. We should not add another layer of checks for these objects in the web service.
- Permission checks for accessing and managing info in the Django DB (e.g., Exploration definitions) are handled in the web service.
- We should, whenever possible, derive permissions on Django models from access to the underlying DB object in real time. Details are [here](./permissions.md).

### Example

A user lists the columns for a table. Because they have access to read the columns of the table (checked on the DB), they can read the display options for that table. If they have access to modify a column of a table, they have access to modify the relevant display options. This works as long as there isn't a dedicated `display_options` endpoint which could receive requests directly. Even in that case, we could add logic to check permissions on the relevant User DB.

### Exceptions

There are some metadata and other models that we'll be keeping which _can_ receive direct requests. Currently, these are:

- Database connection and credential info
- Shareable links
- Explorations

Access to these will be managed using the Django permissions framework (i.e., with access policies).

diff --git a/docs/engineering/architecture/models.md b/docs/engineering/architecture/models.md
new file mode 100644
index 000000000..7017b8fd6
--- /dev/null
+++ b/docs/engineering/architecture/models.md
@@ -0,0 +1,215 @@
# Models

Subject to minor changes.

We should be able to handle anything being discussed for beta through simple extensions of this model framework. These models are intended to get us to beta while providing flexibility to move forward afterwards. There will be a brief discussion of a desired next iteration at the end.
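As a rough illustration of how the tables below might translate into Django model definitions, here is a hedged sketch of the `UserDBRoleMap` model, using a Django-managed `TextChoices` field for `metadata_role` as suggested in that model's notes. The field names follow the table below; the concrete ORM details (field options, constraint names) are assumptions rather than settled implementation.

```python
from django.conf import settings
from django.db import models


class UserDBRoleMap(models.Model):
    """Sketch: maps a Mathesar user to a credential and metadata role for one database."""

    class MetadataRole(models.TextChoices):
        READ_ONLY = "read only"
        READ_WRITE = "read write"

    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    database = models.ForeignKey("Database", on_delete=models.CASCADE)
    db_server_credential = models.ForeignKey(
        "DBServerCredential", on_delete=models.SET_NULL, null=True
    )
    metadata_role = models.CharField(max_length=10, choices=MetadataRole.choices)

    class Meta:
        constraints = [
            # Matches the "(user, database) pair is unique" note on this model.
            models.UniqueConstraint(fields=["user", "database"], name="unique_user_database"),
        ]
```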
+ +## User + +| Column | Type | Notes | +|--------------------------|--------------------------|------------------| +| id | integer | pkey | +| password | character varying(128) | not null | +| last\_login | timestamp with time zone | | +| is\_superuser | boolean | | +| username | character varying(150) | not null; unique | +| email | character varying(254) | | +| is\_staff | boolean | | +| is\_active | boolean | | +| date\_joined | timestamp with time zone | | +| full\_name | character varying(255) | | +| short\_name | character varying(255) | | +| password\_change\_needed | boolean | | + +## DBServer + +| Column | Type | Notes | +|-------------|--------------------------|----------| +| id | integer | pkey | +| created\_at | timestamp with time zone | | +| updated\_at | timestamp with time zone | | +| host | character varying | not null | +| port | integer | not null | + +`(host, port)` pair is unique. + +Theoretically, we could also split the host out, but that seems like premature optimization. + +We could consider making the `host` and `port` nullable when we're supporting `.pgpass`. + +## Database + +| Column | Type | Notes | +|---------------------------------|--------------------------|---------------------------------------------| +| id | integer | pkey | +| created\_at | timestamp with time zone | | +| updated\_at | timestamp with time zone | | +| db\_name | text | not null | +| display\_name | text | not null; unique | +| db\_server | integer | not null; references DBServer(id) | +| editable | boolean | | +| default\_db\_server\_credential | integer | not null; references DBServerCredential(id) | + +`(db_server, db_name)` is unique. We could consider making `db_name` nullable when supporting `.pgpass`. If a Mathesar Admin user doesn't have an entry in `UserDBRoleMap` for a given database, they will use the `default_credential` defined here to connect. + +## DBServerCredential + +| Column | Type | Notes | +|-------------|--------------------------|-----------------------------------| +| id | integer | pkey | +| created\_at | timestamp with time zone | | +| updated\_at | timestamp with time zone | | +| username | character varying | not null | +| password | character varying | encrypted; not null | +| db\_server | integer | not null; references DBServer(id) | + +We could consider making `username` and `password` nullable when supporting `.pgpass`. + +## UserDBRoleMap + +| Column | Type | Notes | +|-----------------------|--------------------------|-----------------------------------| +| id | integer | pkey | +| created\_at | timestamp with time zone | | +| updated\_at | timestamp with time zone | | +| user | integer | not null; references User(id) | +| database | integer | not null; references Database(id) | +| db\_server_credential | integer | references DBServerCredential(id) | +| metadata\_role | enum | ('read only', 'read write') | + +`(user, database)` pair is unique. The `metadata_role` isn't likely to be technically implemented as an `enum` on the DB for now. We'll use a Django-managed `TextChoices` field to save implementation time. See the current `DatabaseRole` model and its interaction with the `Role` class for an example. + +## Aside: Quick overview of connecting to a DB. + +The Django permissions infrastructure should handle CRUD operations on `Database`, `DBServerCredential`, `DBServer`, and `UserDBRoleMap` resources. 
When adding a `Database` for the first time, we'll also add a `DBServer` if one doesn't exist, and add or choose a `DBServerCredential` to be the default based on the credential provided when adding the `Database` entry. Actually accessing a database wouldn't require the permissions infrastructure; we'd instead construct a connection string by joining the appropriate `database` to the other info found by looking up the `user, database` pair. For example, given a `(user, database)` pair like `(3, 8)`, we'd look up the appropriate row in the `UserDBRoleMap` model to find the `db_server_credential` (referencing `DBServerCredential`). We also follow the foreign key to the `Database` to pick up the `db_name` and then the foreign key to `DBServer` to pick up the `host` and `port`. + +We should eventually add functionality to store some details in a [`.pgpass`](https://www.postgresql.org/docs/current/libpq-pgpass.html) dotfile (though probably in a custom location). `psycopg` can inject the password and/or other missing pieces automatically through these means. + +## Exploration + +| Column | Type | Notes | +|------------------|--------------------------|-------------------------| +| id | integer | pkey | +| created\_at | timestamp with time zone | | +| updated\_at | timestamp with time zone | | +| database | integer | references Database(id) | +| base\_table\_oid | integer | not null | +| name | character varying(128) | not null; unique | +| description | text | | +| initial\_columns | jsonb | not null | +| transformations | jsonb | | +| display\_options | jsonb | | +| display\_names | jsonb | | + +- The JSONB columns are the same format, except now they refer to DB-layer ids, e.g., OIDs and attnums rather than Django-layer IDs. +- We should consider changing `display_options` to refer to instances of `ColumnMetadata` within the JSONB +- Permissions on this object will be derived from the `UserDBRoleMap.metadata_role` via the `(database, user)` pair. + +## ColumnMetadata + +| Column | Type | Notes | +|-------------------------|--------------------------|-----------------------------------| +| id | integer | pkey | +| created\_at | timestamp with time zone | | +| updated\_at | timestamp with time zone | | +| database | integer | not null; References Database(id) | +| table\_oid | integer | not null | +| attnum | integer | not null | +| bool\_input | enum | ('dropdown', 'checkbox') | +| bool\_true | text | default: 'True' | +| bool\_false | text | default: 'False' | +| num\_min\_frac\_digits | integer | min: 0, max: 20 | +| num\_max\_frac\_digits | integer | min: 0, max: 20 | +| num\_show\_as\_perc | boolean | Default: false | +| mon\_currency\_symbol | text | Default? | +| mon\_currency\_location | enum | ('after-minus', 'end-with-space') | +| time\_format | text | | +| date\_format | text | | +| duration\_min | character varying(255) | | +| duration\_max | character varying(255) | | +| duration\_show\_units | boolean | | + +- The `(database, table_oid, attnum)` tuple should be unique. +- Depending on Django's support for multicolumn `CHECK` constraints, we should ensure that `num_min_frac_digits < num_max_frac_digits`. +- This has a number of fields to replace the current JSON storage of display options, and remove the need for the polymorphic serializer. +- The only foreign key we reference is the `Database(id)`, needed to map to a specific database where we find the relevant table and column. +- We don't need to reference any `schema_oid`, since a `(table_oid, attnum)` pair is unique per DB. 
- Permissions to manipulate instances of this model would be derived from permissions to manipulate the relevant table and column in the underlying database.

## TableMetadata

| Column              | Type                     | Notes                             |
|---------------------|--------------------------|-----------------------------------|
| id                  | integer                  | pkey                              |
| created\_at         | timestamp with time zone |                                   |
| updated\_at         | timestamp with time zone |                                   |
| database            | integer                  | not null; references Database(id) |
| table\_oid          | integer                  | not null                          |
| import\_verified    | boolean                  |                                   |
| is\_temp            | boolean                  |                                   |
| import\_target\_oid | integer                  |                                   |
| column\_order       | jsonb                    |                                   |
| preview\_customized | boolean                  |                                   |
| preview\_template   | character varying(255)   |                                   |

I've left the preview template in the Mathesar layer. The hope is that we can find a sufficiently featureful and sufficiently efficient algorithm for getting the template, thereby avoiding the need to move this down into the User Database. There will be more discussion of this below. Permissions to manipulate this should be derived from permissions on the relevant table in the underlying database.

## DataFile

| Column        | Type                     | Notes    |
|---------------|--------------------------|----------|
| id            | integer                  | pkey     |
| created\_at   | timestamp with time zone |          |
| updated\_at   | timestamp with time zone |          |
| file          | character varying(100)   | not null |
| created\_from | character varying(128)   |          |
| base\_name    | character varying(100)   |          |
| header        | boolean                  |          |
| delimiter     | character varying(1)     |          |
| escapechar    | character varying(1)     |          |
| quotechar     | character varying(1)     |          |
| user          | integer                  |          |
| type          | character varying(128)   |          |
| max\_level    | integer                  |          |
| sheet\_index  | integer                  |          |

Once we have our desired cleanup logic sorted out, we should consider removing this model. It's currently only used ephemerally, but the actual instance then hangs around indefinitely.

## SharedExploration

| Column                 | Type                     | Notes                                |
|------------------------|--------------------------|--------------------------------------|
| id                     | integer                  | pkey                                 |
| created\_at            | timestamp with time zone |                                      |
| updated\_at            | timestamp with time zone |                                      |
| slug                   | uuid                     | unique                               |
| enabled                | boolean                  |                                      |
| exploration            | integer                  | not null; references Exploration(id) |
| db\_server\_credential | integer                  | references DBServerCredential(id)    |

I've chosen to store the `db_server_credential` id, rather than the creating user, for flexibility. We can derive this from the creating user at the time the Exploration is created, and could (theoretically) update it if the User's credential for a given DB changes (I wouldn't recommend this).

## SharedTable

| Column                 | Type                     | Notes                             |
|------------------------|--------------------------|-----------------------------------|
| id                     | integer                  | pkey                              |
| created\_at            | timestamp with time zone |                                   |
| updated\_at            | timestamp with time zone |                                   |
| slug                   | uuid                     | unique                            |
| enabled                | boolean                  |                                   |
| table\_oid             | integer                  | not null                          |
| db\_server\_credential | integer                  | references DBServerCredential(id) |

## After-beta-term vision

For the beta, I'm hoping to avoid some work by keeping things in the Mathesar service models that I'd rather store in the underlying User Databases in a `msar_catalog` schema. The relevant models are `ColumnMetadata` and `TableMetadata`. A big motivation to move this info to the User DB is performance w.r.t. the table previews.
Our current algorithm requires lots of back-and-forth between the service layer and the User DB in order to recursively build these preview templates, and to fill them. I also think it's more natural to keep these metadata models in the User DB, since they're segregated by User DB, and each instance only refers to objects on that underlying database. + +I also think in the even longer term that we should think about storing our `Exploration` info on the underlying database in the form of views (perhaps in a special `msar_queries` schema). This presents some technical problems, however, that we haven't yet solved. + +## What about names vs. OIDs? + +I thought about adding another model to store a general map of names to OIDs for use when resolving missing tables, etc. This would be useful if someone drops and recreates a table, or when trying to export your Mathesar Explorations or Display Settings. I didn't add that at this stage, since: + +- We'd use the underlying User DB for that map if we move the Metadata models down to the UserDB, and +- We aren't prioritizing the features requiring being able to export and reimport your Explorations for beta. diff --git a/docs/engineering/architecture/old_models.md b/docs/engineering/architecture/old_models.md new file mode 100644 index 000000000..a50821129 --- /dev/null +++ b/docs/engineering/architecture/old_models.md @@ -0,0 +1,233 @@ +# Deprecated Models + +This section contains ad-hoc notes on our current models, and intended changes. + +## Column + +| Column | Type | +|------------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| attnum | integer | +| display\_options | jsonb | +| table\_id | integer | + +The only actual info here is the display options for a given column, stored as a JSON blob. Rename to `ColumnMetadata`, restructure to validate display options, delete fkey fields. Consider moving to `ma_catalog` table on user DB. + +We need to handle updating the table preview template when a new column is added (or rethink the implementation of this functionality) + +We need to replace functionality to get `ui_type` from DB type. + +To replace the dependent-getting functionality, we need to move the dependents module to SQL. + +## Constraint + +| Column | Type | +|-------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| oid | integer | +| table\_id | integer | + +Nothing actually stored here. Delete this model. All functionality can be contained in User DB functions. + +## Database + +| Column | Type | +|-------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| name | character varying(128) | +| deleted | boolean | +| db\_name | character varying(128) | +| editable | boolean | +| host | character varying(255) | +| password | text | +| port | integer | +| username | text | + +Stores connection info to allow accessing a DB by creating an SQLAlchemy engine. + +Referenced by DatabaseRole and Schema models. + +Replace this with `Database`, `DatabaseServer`, `DatabaseServerCredential`, and `UserDatabaseRoleMap` models. See the [New models](./models.md) for details. 
+ +## DatabaseRole + +| Column | Type | +|--------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| role | character varying(10) | +| database\_id | integer | +| user\_id | integer | + +This stores a role on a given database for a given user. We will repurpose this, and it will be applied (for now) only to `UIQuery` instances namespaced under a given database. + +## DataFile + +| Column | Type | +|-------------------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| file | character varying(100) | +| created\_from | character varying(128) | +| base\_name | character varying(100) | +| header | boolean | +| delimiter | character varying(1) | +| escapechar | character varying(1) | +| quotechar | character varying(1) | +| table\_imported\_to\_id | integer | +| user\_id | integer | +| type | character varying(128) | +| max\_level | integer | +| sheet\_index | integer | + +This stores metadata about files which have been uploaded for import into Mathesar. We should keep this model. `table_imported_to_id` should be removed (it's not used anywhere is it?). Also `max_level` seems like less of a data file attribute and more of an import setting. + +## PreviewColumnSettings + +| Column | Type | +|-------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| customized | boolean | +| template | character varying(255) | + +This stores the template defining what should be shown in a referencing fkey column for this table. This would be _much_ better as a `ma_catalog` table for efficiency reasons. + +In that case, a table's preview settings would be "global", i.e., it would be attached to the table rather than a user, table pair. + +Referenced by TableSettings. + +## Schema + +| Column | Type | +|--------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| oid | integer | +| database\_id | integer | + +Nothing stored here. + +Referenced by SchemaRole and Table models. + +Delete this model. All permissions handled by the referencing SchemaRole should instead be handled by the underlying user's permissions on the actual schema in the DB + +## SchemaRole + +| Column | Type | +|-------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| role | character varying(10) | +| schema\_id | integer | +| user\_id | integer | + +This should be deleted, and the permissions should be instead managed on the underlying DB. + +## SharedQuery + +| Column | Type | +|-------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| slug | uuid | +| enabled | boolean | +| query\_id | integer | + +This model should stay. No changes here. We need to add metadata about a credential for running the actual query. + +## SharedTable + +| Column | Type | +|-------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| slug | uuid | +| enabled | boolean | +| table\_id | integer | + +Only change is that we need to refer directly to a table OID, and handle permissions. 
+ +## Table + +| Column | Type | +|--------------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| oid | integer | +| import\_verified | boolean | +| is\_temp | boolean | +| import\_target\_id | integer | +| schema\_id | integer | + +Stores info about: + +- whether the initial data import for the table has been manually verified by a user or not, and +- whether the table is actually a temporary holder for data intended for a preexisting table. + +Referenced by Column, Constraint, DataFile, SharedTable, Table, TableSettings, and UIQuery models + +We should combine this with the `TableSettings` model to create a `TableMetadata` model that just has that info, and drop all fkeys and references. + +## TableSettings + +| Column | Type | +|-----------------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| column\_order | jsonb | +| preview\_settings\_id | integer | +| table\_id | integer | + +This stores Mathesar-specific metadata about tables. Should be combined with remains of `Table` model. + +## UIQuery + +| Column | Type | +|------------------|--------------------------| +| id | integer | +| created\_at | timestamp with time zone | +| updated\_at | timestamp with time zone | +| name | character varying(128) | +| description | text | +| initial\_columns | jsonb | +| transformations | jsonb | +| display\_options | jsonb | +| display\_names | jsonb | +| base\_table\_id | integer | + +This stores a definition of a stored query that can be run on command. The main changes are that it should refer directly to DB-layer ids (oids and attnums) rather than Django-layer. + +## User + +| Column | Type | +|--------------------------|--------------------------| +| id | integer | +| password | character varying(128) | +| last\_login | timestamp with time zone | +| is\_superuser | boolean | +| username | character varying(150) | +| email | character varying(254) | +| is\_staff | boolean | +| is\_active | boolean | +| date\_joined | timestamp with time zone | +| full\_name | character varying(255) | +| short\_name | character varying(255) | +| password\_change\_needed | boolean | + +This stores user metadata. I think we should mostly keep it as is. It will be referenced by the `UserDatabaseRoleMap` model. diff --git a/docs/engineering/architecture/permissions.md b/docs/engineering/architecture/permissions.md new file mode 100644 index 000000000..232c925f9 --- /dev/null +++ b/docs/engineering/architecture/permissions.md @@ -0,0 +1,88 @@ +# Users and Permissions + +The big picture is that we will implement access management for DB objects (including databases themselves) by giving admin users of Mathesar the ability to use database-layer permissions management tools (e.g., `GRANT`). For web service resources (e.g., Django model instances), permissions will be managed in Django. + +## Database Model + +This model will be reduced to store nothing more than metadata about a given database, as well as its name for use when constructing an actual connection to that database. + +## Database Server Model + +This model holds the `host` and `port` information for a given database. + +## Database Server Credential Model + +As described in [the models section](./models.md), This will store the authentication information needed to create an engine, but won't provide a database (a required part of a connection definition in PostgreSQL). 
Because the information includes a role (to define the initial connection role), it necessarily defines a set of privileges available on the database with that connection.

## User Database Role Map Model

This map uses a `user, database` pair to look up the needed credentials, then provides a connection to the database using those credentials (if possible).

## Adding Connections (backend perspective)

Regardless of UI, the backend should receive a `POST` request to the new RPC endpoint defining a new connection. We'll use the same functions set up in [PR \#3348](https://github.com/mathesar-foundation/mathesar/pull/3348). The relevant info should be stored in the `Database`, `DatabaseServer`, and `DatabaseServerCredential` models. Also, if the user does not already have an entry defining their role on the given database, we could create such an entry automatically in the `UserDBRoleMap` model. This is optional, and doesn't really affect the architecture. Then, to let some other users access that connection, we will provide an RPC function that lets an admin set different users' credentials for the given database by creating or updating `UserDBRoleMap` resources. Note that this does _not_ directly modify anything to do with permissions on actual database objects (e.g., schemata or tables).

## Granting database object privileges

The backend will provide RPC functions that let an admin (who has access to a sufficiently-privileged database role via a connection) use database-level permission-granting functionality directly. So, to grant a Mathesar user access to create tables in a schema, the admin uses an RPC function that runs a privilege-granting query (via a PL/pgSQL function) on the database. Note that this request doesn't actually modify any model instance.

- Privileges on DB objects are thus not granted directly to Mathesar users.
- Privileges on DB objects are granted to DB roles, which may be accessible to some (or all) Mathesar users.
- Any time an RPC function requiring DB access runs, it runs using the connection defined via the `UserDBRoleMap`, with the associated privileges on that database.

!!! question "UX Question"
    Should the admin think in terms of groups of Mathesar users, or specifically in terms of connections, when dealing with DB-level privileges?

!!! danger "Potential Confusion"
    In case the admin or DBA wants multiple Mathesar users to be able to modify various DB objects, three options are available:

    - Give all relevant Mathesar users access to connect as the owning DB user.
    - `GRANT` the owning DB role (ostensibly a user) to DB users connectable by the relevant Mathesar users.
    - Have at least one DB superuser available for use with a connection, and give relevant Mathesar users access to that DB superuser.

## Mathesar object privileges

Examples of such objects are Explorations, and table properties like preview columns or display options. Permissions on these will be:

- None,
- read only, or
- read write.

These permissions will be tracked in the `UserDBRoleMap` model via the `metadata_role` attribute. Note that these permissions only relate to CRUD operations on these objects. In the case of Explorations, actually _running_ the exploration depends on:

- at least the read permission on the object, and
- the database-level permissions associated with the connection available to the user.
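As a rough illustration of the two conditions above, here is a hedged sketch of the kind of check the web service might perform before running an Exploration. The field names (`metadata_role`, `db_server_credential`) follow the models page; the helper itself and the lookup callable are illustrative assumptions, and the plans below describe how the policy lookup would actually be scoped.

```python
# Hedged sketch only: `metadata_role` and `db_server_credential` come from the
# models page; `role_map_for` and this helper are illustrative assumptions.
READ_ROLES = {"read only", "read write"}


def can_run_exploration(user_id: int, database_id: int, role_map_for) -> bool:
    """Running an Exploration needs a metadata read role *and* a usable connection."""
    entry = role_map_for(user_id, database_id)  # e.g., a UserDBRoleMap lookup
    if entry is None:
        # No mapping: no metadata permissions, and no connection to run against.
        return False
    has_metadata_read = entry.metadata_role in READ_ROLES
    has_connection = entry.db_server_credential is not None
    # Even when both hold, PostgreSQL still enforces DB-level privileges at query time.
    return has_metadata_read and has_connection
```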
### Current plan

In the case of Explorations (`Exploration` model), this will be derived from a policy scoped via the `Database` associated with the Exploration:

- A super user of Mathesar will set the policy for a given `User` on a given `Database` instance via the UI.
    - When a `User` wants to view/edit/manage an Exploration, the web service will check the `user, database` pair (where `database` is the `Database` associated with the Exploration) to find a relevant policy.
    - Based on the policy, the user can then act on the Exploration.

### Alternative plan

In the case of Explorations (`Exploration` model), this will be derived from a policy scoped via the `Database` associated with the Exploration:

- A super user of Mathesar will set the policy for `DatabaseServerCredential` instances via the UI.
    - Each credential instance represents a Role on the DB.
    - When a User wants to view/edit/manage an Exploration, the web service will check the `user, database` pair (where `database` is the Database associated with the Exploration) to get a `DatabaseServerCredential` if one exists (otherwise, no permissions are granted).
    - Based on the policy applied to that credential, the user can then act on the Exploration.

This plan would require a minor change to the [models](models.md), but isn't very difficult to implement. The author considers which way we go on this to be a UX question.

## Shared links

A shared table or exploration needs access to a `Database` and `DatabaseServerCredential` to run (or be viewed). We should bypass any permission checks when actually retrieving data. The shared object model should include an attribute giving the `DatabaseServerCredential` instance needed to run (or view) it.

!!! danger "Tech/Product concern"
    The safest (and easiest) implementation would be to have specialized view-only users who are `GRANT`ed `SELECT` on relevant DB objects when needed. Then, sharable links would _only_ use those users. But we'll need to justify this choice in documentation somewhere.

## Note on hierarchical permissions

We can support an "admin" DB user by automatically granting the role associated with each Mathesar-managed connection to a given admin DB user. So, if we have a user `mathesaradmin` and a regular user `joe`, we can run `GRANT joe TO mathesaradmin` (as a role with sufficient privileges; at least `ADMIN` on `joe` is needed to run this). This would let `mathesaradmin` act as a Manager on anything created by `joe`.

Our `Manager` concept implies (co-)ownership of all managed sub-objects. I.e., a Database Manager owns all objects in that database (using the description in our docs).

Our `Editor` concept implies `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, and `REFERENCES` (sort of, given the way Mathesar currently treats fkeys).

Our `Viewer` concept implies `SELECT` on objects (obviously).

Thus, _if_ we want to recreate our current conceptual framework, it's possible (and not too difficult).
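To make the mapping above concrete, here is a hedged sketch of how an RPC-layer helper might issue the DB-level grants behind the `Editor` concept, along with the role-membership grant from the note above, using psycopg's SQL composition. The function names and the privilege-list handling are illustrative assumptions, not the planned implementation.

```python
import psycopg
from psycopg import sql

# Privileges our Editor concept implies, per the note above.
EDITOR_PRIVILEGES = ["SELECT", "INSERT", "UPDATE", "DELETE", "TRUNCATE", "REFERENCES"]


def grant_editor(conn: psycopg.Connection, schema: str, table: str, role: str) -> None:
    """Grant the Editor-level privileges on one table to a DB role (illustrative)."""
    query = sql.SQL("GRANT {privs} ON TABLE {table} TO {role}").format(
        privs=sql.SQL(", ").join(sql.SQL(p) for p in EDITOR_PRIVILEGES),
        table=sql.Identifier(schema, table),
        role=sql.Identifier(role),
    )
    conn.execute(query)


def grant_role_membership(conn: psycopg.Connection, member: str, admin: str) -> None:
    """E.g., GRANT joe TO mathesaradmin, run as a role with ADMIN on `member`."""
    conn.execute(
        sql.SQL("GRANT {m} TO {a}").format(m=sql.Identifier(member), a=sql.Identifier(admin))
    )
```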