search-doc.json
{"searchDocs":[{"title":"RxDB as a Database in an Angular Application","type":0,"sectionRef":"#","url":"/articles/angular-database.html","content":"","keywords":"","version":"Next"},{"title":"Angular Web Applications","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#angular-web-applications","content":" Angular is a powerful JavaScript framework developed and maintained by Google. It enables developers to build single-page applications (SPAs) with a modular and component-based approach. Angular provides a comprehensive set of tools and features for creating dynamic and responsive web applications. ","version":"Next","tagName":"h2"},{"title":"Importance of Databases in Angular Applications","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#importance-of-databases-in-angular-applications","content":" Databases play a vital role in Angular applications by providing a structured and efficient way to store, retrieve, and manage data. Whether it's handling user authentication, caching data, or persisting application state, a robust database solution is essential for ensuring optimal performance and user experience. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Database Solution","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#introducing-rxdb-as-a-database-solution","content":" RxDB stands for Reactive Database and is built on the principles of reactive programming. It combines the best features of NoSQL databases with the power of reactive programming to provide a scalable and efficient database solution. RxDB offers seamless integration with Angular applications and brings several unique features that make it an attractive choice for developers. ","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#getting-started-with-rxdb","content":" To begin our journey with RxDB, let's understand its key concepts and features. ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#what-is-rxdb","content":" RxDB is a client-side database that follows the principles of reactive programming. It is built on top of IndexedDB, the native browser database, and leverages the RxJS library for reactive data handling. RxDB provides a simple and intuitive API for managing data and offers features like data replication, multi-tab support, and efficient query handling. ","version":"Next","tagName":"h3"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#reactive-data-handling","content":" At the core of RxDB is the concept of reactive data handling. RxDB leverages observables and reactive streams to enable real-time updates and data synchronization. With RxDB, you can easily subscribe to data changes and react to them in a reactive and efficient manner. 
","version":"Next","tagName":"h3"},{"title":"Alternatives for realtime offline-first JavaScript applications","type":0,"sectionRef":"#","url":"/alternatives.html","content":"","keywords":"","version":"Next"},{"title":"Alternatives to RxDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#alternatives-to-rxdb","content":" RxDB is an observable, replicating, local first, JavaScript database. So it makes only sense to list similar projects as alternatives, not just any database or JavaScript store library. However, I will list up some projects that RxDB is often compared with, even if it only makes sense for some use cases. Here are the alternatives to RxDB: ","version":"Next","tagName":"h2"},{"title":"Firebase","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#firebase","content":" Firebase is a platform developed by Google for creating mobile and web applications. Firebase has many features and products, two of which are client side databases. The Realtime Database and the Cloud Firestore. Firebase - Realtime Database The firebase realtime database was the first database in firestore. It has to be mentioned that in this context, "realtime" means "realtime replication", not "realtime computing". The firebase realtime database stores data as a big unstructured JSON tree that is replicated between clients and the backend. Firebase - Cloud Firestore The firestore is the successor to the realtime database. The big difference is that it behaves more like a 'normal' database that stores data as documents inside of collections. The conflict resolution strategy of firestore is always last-write-wins which might or might not be suitable for your use case. The biggest difference to RxDB is that firebase products are only able to be used on top of the Firebase cloud hosted backend, which creates a vendor lock-in. RxDB can replicate with any self hosted CouchDB server or custom GraphQL endpoints. You can even replicate Firestore to RxDB with the Firestore Replication Plugin. ","version":"Next","tagName":"h3"},{"title":"Meteor","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#meteor","content":" Meteor (since 2012) is one of the oldest technologies for JavaScript realtime applications. Meteor is not a library but a whole framework with its own package manager, database management and replication protocol. Because of how it works, it has proven to be hard to integrate it with other modern JavaScript frameworks like angular, vue.js or svelte. Meteor uses MongoDB in the backend and can replicate with a Minimongo database in the frontend. While testing, it has proven to be impossible to make a meteor app offline first capable. There are some projects that might do this, but all are unmaintained. ","version":"Next","tagName":"h3"},{"title":"Minimongo","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#minimongo","content":" Forked in Jan 2014 from meteorJSs' minimongo package, Minimongo is a client-side, in-memory, JavaScript version of MongoDB with backend replication over HTTP. Similar to MongoDB, it stores data in documents inside of collections and also has the same query syntax. Minimongo has different storage adapters for IndexedDB, WebSQL, LocalStorage and SQLite. 
Compared to RxDB, Minimongo has no concept of revisions or conflict handling, which might lead to undefined behavior when used with replication or in multiple browser tabs. Minimongo has no observable queries or changestream. ","version":"Next","tagName":"h3"},{"title":"WatermelonDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#watermelondb","content":" WatermelonDB is a reactive & asynchronous JavaScript database. While originally made for React and React Native, it can also be used with other JavaScript frameworks. The main goal of WatermelonDB is performance within an application with lots of data. In React Native, WatermelonDB uses the provided SQLite database. There is also an Expo plugin for WatermelonDB. In a browser, WatermelonDB uses the LokiJS in-memory database to store and query data. WatermelonDB is one of the rare projects that support both Flow and TypeScript at the same time. ","version":"Next","tagName":"h3"},{"title":"AWS Amplify","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#aws-amplify","content":" AWS Amplify is a collection of tools and libraries to develop web- and mobile frontend applications. Similar to Firebase, it provides everything needed, like authentication, analytics, a REST API, storage and so on. Everything is hosted in the AWS Cloud, even though they state that "AWS Amplify is designed to be open and pluggable for any custom backend or service". For realtime replication, AWS Amplify can connect to an AWS App-Sync GraphQL endpoint. ","version":"Next","tagName":"h3"},{"title":"AWS Datastore","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#aws-datastore","content":" Since December 2019 the Amplify library includes the AWS Datastore, which is a document-based, client-side database that is able to replicate data via AWS AppSync in the background. The main difference to other projects is the complex project configuration via the amplify cli and the somewhat confusing query syntax that works over functions. Complex queries with multiple OR/AND statements are not possible, which might change in the future. Local development is hard because the AWS AppSync mock does not support realtime replication. It is also not really offline-first because a user login is always required. // An AWS datastore OR query const posts = await DataStore.query(Post, c => c.or( c => c.rating("gt", 4).status("eq", PostStatus.PUBLISHED) )); // An AWS datastore SORT query const posts = await DataStore.query(Post, Predicates.ALL, { sort: s => s.rating(SortDirection.ASCENDING).title(SortDirection.DESCENDING) }); The biggest difference to RxDB is that you have to use the AWS cloud backends. This might not be a problem if your data is at AWS anyway. ","version":"Next","tagName":"h3"},{"title":"RethinkDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#rethinkdb","content":" RethinkDB is a backend database that pushed dynamic JSON data to the client in realtime. It was founded in 2009 and the company shut down in 2016. RethinkDB is not a client-side database; it streams data from the backend to the client, which of course does not work while offline. 
","version":"Next","tagName":"h3"},{"title":"Horizon","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#horizon","content":" Horizon is the client side library for RethinkDB which provides useful functions like authentication, permission management and subscription to a RethinkDB backend. Offline support never made it to horizon. ","version":"Next","tagName":"h3"},{"title":"Supabase","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#supabase","content":" Supabase labels itself as "an open source Firebase alternative". It is a collection of open source tools that together mimic many Firebase features, most of them by providing a wrapper around a PostgreSQL database. While it has realtime queries that run over the wire, like with RethinkDB, Supabase has no client-side storage or replication feature and therefore is not offline first. ","version":"Next","tagName":"h3"},{"title":"CouchDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#couchdb","content":" Apache CouchDB is a server-side, document-oriented database that is mostly known for its multi-master replication feature. Instead of having a master-slave replication, with CouchDB you can run replication in any constellation without having a master server as bottleneck where the server even can go off- and online at any time. This comes with the drawback of having a slow replication with much network overhead. CouchDB has a changestream and a query syntax similar to MongoDB. ","version":"Next","tagName":"h3"},{"title":"PouchDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#pouchdb","content":" PouchDB is a JavaScript database that is compatible with most of the CouchDB API. It has an adapter system that allows you to switch out the underlying storage layer. There are many adapters like for IndexedDB, SQLite, the Filesystem and so on. The main benefit is to be able to replicate data with any CouchDB compatible endpoint. Because of the CouchDB compatibility, PouchDB has to do a lot of overhead in handling the revision tree of document, which is why it can show bad performance for bigger datasets. RxDB was originally build around PouchDB until the storage layer was abstracted out in version 10.0.0 so it now allows to use different RxStorage implementations. PouchDB has some performance issues because of how it has to store the document revision tree to stay compatible with the CouchDB API. ","version":"Next","tagName":"h3"},{"title":"Couchbase","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#couchbase","content":" Couchbase (originally known as Membase) is another NoSQL document database made for realtime applications. It uses the N1QL query language which is more SQL like compared to other NoSQL query languages. In theory you can achieve replication of a Couchbase with a PouchDB database, but this has shown to be not that easy. ","version":"Next","tagName":"h3"},{"title":"Cloudant","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#cloudant","content":" Cloudant is a cloud-based service that is based on CouchDB and has mostly the same features. It was originally designed for cloud computing where data can automatically be distributed between servers. 
But it can also be used to replicate with frontend PouchDB instances to create scalable web applications. It was bought by IBM in 2014, and since 2018 the Cloudant Shared Plan has been retired and migrated to IBM Cloud. ","version":"Next","tagName":"h3"},{"title":"Hoodie","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#hoodie","content":" Hoodie is a backend solution that enables offline-first JavaScript frontend development without having to write backend code. Its main goal is to abstract away configuration into simple calls to the Hoodie API. It uses CouchDB in the backend and PouchDB in the frontend to enable offline-first capabilities. The last commit for Hoodie was one year ago and the website (hood.ie) is offline, which indicates it is no longer an active project. ","version":"Next","tagName":"h3"},{"title":"LokiJS","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#lokijs","content":" LokiJS is an embeddable, in-memory JavaScript database. Because everything is handled in memory, LokiJS has great performance when mutating or querying data. You can still persist to a permanent storage (IndexedDB, filesystem etc.) with one of the provided storage adapters. The persistence happens after a timeout is reached after a write, or before the JavaScript process exits. This also means you could lose data when the JavaScript process exits ungracefully, for example when the device loses power or the browser crashes. While the project is not that active anymore, it is more finished than unmaintained. In the past, RxDB supported using LokiJS as RxStorage, but because LokiJS is no longer maintained and had too many issues, this storage option was removed in RxDB version 16. ","version":"Next","tagName":"h3"},{"title":"Gundb","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#gundb","content":" GUN is a JavaScript graph database. While having many features, the decentralized replication is the main unique selling point. You can replicate data peer-to-peer without any centralized backend server. GUN has several other features that are useful on top of that, like encryption and authentication. In testing, it was really hard to get basic things running. GUN is open source, but because of how the source code is written, it is very difficult to understand what is going wrong. ","version":"Next","tagName":"h3"},{"title":"sql.js","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#sqljs","content":" sql.js is a JavaScript library to run SQLite on the web. It uses a virtual database file stored in memory and does not have any persistence. All data is lost once the JavaScript process exits. sql.js is created by compiling SQLite to WebAssembly, so it has about the same features as SQLite. For older browsers there is a JavaScript fallback. ","version":"Next","tagName":"h3"},{"title":"absurd-sQL","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#absurd-sql","content":" Absurd-sql is a project that implements an IndexedDB-based persistence for sql.js. Instead of directly writing data into IndexedDB, it treats IndexedDB like a disk and stores data in blocks there, which gives much better performance, mostly because of how expensive IndexedDB transactions are. 
","version":"Next","tagName":"h3"},{"title":"NeDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#nedb","content":" NeDB was a embedded persistent or in-memory database for Node.js, nw.js, Electron and browsers. It is document-oriented and had the same query syntax as MongoDB. Like LokiJS it has persistence adapters for IndexedDB etc. to persist the database state on the disc. The last commit to NeDB was in 2016. ","version":"Next","tagName":"h3"},{"title":"Dexie.js","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#dexiejs","content":" Dexie.js is a minimalistic wrapper for IndexedDB. While providing a better API than plain IndexedDB, Dexie also improves performance by batching transactions and other optimizations. It also adds additional non-IndexedDB features like observable queries or multi tab support or react hooks. Compared to RxDB, Dexie.js does not support complex (MongoDB-like) queries and requires a lot of fiddling when a document range of a specific index must be fetched. Dexie.js is used by Whatsapp Web, Microsoft To Do and Github Desktop. RxDB supports using Dexie.js as RxStorage which enhances IndexedDB with RxDB features like MongoDB-like queries etc. ","version":"Next","tagName":"h3"},{"title":"LowDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#lowdb","content":" LowDB is a small, local JSON database powered by the Lodash library. It is designed to be simple, easy to use, and straightforward. LowDB allows you to perform native JavaScript queries and persist data in a flat JSON file. Written in TypeScript, it's particularly well-suited for small projects, prototyping, or when you need a lightweight, file-based database. As an alternative to LowDB, RxDB offers real-time reactivity, allowing developers to subscribe to database changes, a feature not natively available in LowDB. Additionally, RxDB provides robust query capabilities, including the ability to subscribe to query results for automatic UI updates. These features make RxDB a strong alternative to LowDB for more complex and dynamic applications. ","version":"Next","tagName":"h3"},{"title":"localForage","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#localforage","content":" localForage is a popular JavaScript library for offline storage that provides a simple, promise-based API. It abstracts over different storage mechanisms such as IndexedDB, WebSQL, or localStorage, making it easier to write code once and have it work seamlessly across various browsers. While localForage is great for storing data locally in a key-value manner, it doesn't provide the real-time reactive queries, conflict handling, or revision-based replication that RxDB does. This makes localForage a useful choice for straightforward caching or persistent storage needs, but not ideal for advanced offline-first scenarios requiring multi-user collaboration or complex querying. ","version":"Next","tagName":"h3"},{"title":"MongoDB Realm","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#mongodb-realm","content":" Originally Realm was a mobile database for Android and iOS. Later they added support for other languages and runtimes, also for JavaScript. 
It was meant as a replacement for SQLite but is more like an object store than a full SQL database. In 2019 MongoDB bought Realm and changed the project's focus. Now Realm is made for replication with the MongoDB Realm Sync based on the MongoDB Atlas Cloud platform. This tight coupling to the MongoDB cloud service is a big downside for most use cases. ","version":"Next","tagName":"h3"},{"title":"Apollo","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#apollo","content":" The Apollo GraphQL platform is made to transfer data between a server and UI applications over GraphQL endpoints. It contains several tools like GraphQL clients in different languages or libraries to create GraphQL endpoints. While it has various caching features for offline usage, compared to RxDB it is not fully offline-first, because caching alone does not mean your application is fully usable when the user is offline. ","version":"Next","tagName":"h3"},{"title":"Replicache","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#replicache","content":" Replicache is a client-side sync framework for building realtime, collaborative, local-first web apps. It claims to work with most backend stacks. In contrast to other local-first tools, Replicache does not work like a local database. Instead, it runs on so-called mutators that unify behavior on the client and server side. So instead of implementing and calling REST routes on both sides of your stack, you will implement mutators that define a specific delta behavior based on the input data. To observe data in Replicache, there are subscriptions that notify your frontend application about changes to the state. Replicache can be used in most frontend technologies like browsers, React/Remix, NextJS/Vercel and React Native. While Replicache can be installed and used from npm, the Replicache source code is not open source and the Replicache GitHub repo does not allow you to inspect or debug it. Still, you can use Replicache in non-commercial projects, or in companies with < $200k revenue (ARR) and < $500k in funding. (2024: Replicache will be free and Rocicorp is working on a new Zerosync product to succeed Replicache and Reflect.) ","version":"Next","tagName":"h3"},{"title":"InstantDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#instantdb","content":" InstantDB is designed for real-time data synchronization with built-in offline support, allowing changes to be queued locally and synced when the user reconnects. While it offers seamless optimistic updates and rollback capabilities, its offline-first design is not as mature or comprehensive as RxDB's - the offline data is more of a cache, not a full-database sync. The query language used is Datalog, and the backend sync service is written in Clojure. InstantDB is focused more on simplicity and real-time collaboration, with fewer customization options for storage or conflict resolution compared to RxDB, which supports various storage adapters and advanced conflict handling via CRDTs. 
","version":"Next","tagName":"h3"},{"title":"Yjs","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#yjs","content":" Yjs is a CRDT-based (Conflict-free Replicated Data Type) library focused on enabling real-time collaboration - particularly for text editing, although it can handle other data types as well. While it provides powerful conflict resolution and peer-to-peer synchronization out of the box, Yjs itself is not a full-fledged database. Instead, you typically combine Yjs with other storage or networking layers to achieve a local-first architecture. This flexibility allows for sophisticated real-time features, but also means you must handle indexing, queries, and persistence on your own if you need them. Compared to RxDB, Yjs does not offer built-in replication adapters or a query system, so developers who require a more complete solution for conflict resolution, data persistence, and offline-first capabilities may find RxDB more convenient. ","version":"Next","tagName":"h3"},{"title":"ElectricSQL","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#electricsql","content":" 2024: ElectricSQL is being rewritten in a new Electric-Next branch, which focuses on partial syncing of ("shapes") of data from a remote Postgres DB to a local clients written in TypeScript/JS or Elixir. The write path is not yet implemented, neither is client-side reactivity. The ElectricSQL backend is written in Elixir. ","version":"Next","tagName":"h3"},{"title":"SignalDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#signaldb","content":" SignalDB provides a reactive, in-memory local-lirst JavaScript database with real-time sync, bit it doesn't offer the same level of multi-client replication or flexibility with storage backends that RxDB provides, and through a RxDB persistence adapters you can actually use SignalDB for the front-end reactivity while relying on RxDB for backend sync and persistence. ","version":"Next","tagName":"h3"},{"title":"PowerSync","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#powersync","content":" PowerSync is a flexible "framework" for implementing local-first solutions. It centralizes business logic and conflict resolution on a central, authoritative server (PostgreSQL or MongoDB), vs RxDB that also supports custom backends. Both RxDB and PowerSync can be used with a variety of storage backends, but PowerSync uses SQLite as the front-end database which has shown to be slow because the WASM-SQLite abstraction increases read and write latency. In terms of client SDKs, PowerSync offers Flutter, Kotlin, and Swift in addition to JS/TypeScript. PowerSync offers man client technologies, PowerSync is under a license that restricts commercial use that competes with PowerSync and the JourneyApps Platform. Read further Offline First Database Comparison ","version":"Next","tagName":"h3"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#offline-first-approach","content":" One of the standout features of RxDB is its offline-first approach. It allows you to build applications that can work seamlessly in offline scenarios. RxDB stores data locally and automatically synchronizes changes with the server when the network becomes available. 
This capability is particularly useful for applications that need to function in low-connectivity or unreliable network environments. ","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#data-replication","content":" RxDB provides built-in support for data replication between clients and servers. This means you can synchronize data across multiple devices or instances of your application effortlessly. RxDB handles conflict resolution and ensures that data remains consistent across all connected clients. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#observable-queries","content":" RxDB offers a powerful querying mechanism with support for observable queries. This allows you to create dynamic queries that automatically update when the underlying data changes. By leveraging RxDB's observable queries, you can build reactive UI components that respond to data changes in real-time. ","version":"Next","tagName":"h3"},{"title":"Multi-Tab Support","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#multi-tab-support","content":" RxDB provides out-of-the-box support for multi-tab scenarios. This means that if your Angular application is running in multiple browser tabs, RxDB automatically keeps the data in sync across all tabs. It ensures that changes made in one tab are immediately reflected in others, providing a seamless user experience. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. Other Angular Database Options","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#rxdb-vs-other-angular-database-options","content":" While there are other database options available for Angular applications, RxDB stands out with its reactive programming model, offline-first approach, and built-in synchronization capabilities. Unlike traditional SQL databases, RxDB's NoSQL-like structure and observables-based API make it well-suited for real-time applications and complex data scenarios. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in an Angular Application","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#using-rxdb-in-an-angular-application","content":" Now that we have a good understanding of RxDB and its features, let's explore how to integrate it into an Angular application. ","version":"Next","tagName":"h2"},{"title":"Installing RxDB in an Angular App","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#installing-rxdb-in-an-angular-app","content":" To use RxDB in an Angular application, we first need to install the necessary dependencies. You can install RxDB using npm or yarn by running the following command: npm install rxdb --save Once installed, you can import RxDB into your Angular application and start using its API to create and manage databases. ","version":"Next","tagName":"h3"},{"title":"Patch Change Detection with zone.js","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#patch-change-detection-with-zonejs","content":" Angular uses change detection to detect and update UI elements when data changes. 
However, RxDB's data handling is based on observables, which can sometimes bypass Angular's change detection mechanism. To ensure that changes made in RxDB are detected by Angular, we need to patch the change detection mechanism using zone.js. Zone.js is a library that intercepts and tracks asynchronous operations, including observables. By patching zone.js, we can make sure that Angular is aware of changes happening in RxDB. warning RxDB creates RxJS observables outside of Angular's zone, so you have to import the RxJS patch to ensure that Angular's change detection works correctly. //> app.component.ts import 'zone.js/plugins/zone-patch-rxjs'; ","version":"Next","tagName":"h3"},{"title":"Use the Angular async pipe to observe an RxDB Query","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-the-angular-async-pipe-to-observe-an-rxdb-query","content":" Angular provides the async pipe, which is a convenient way to subscribe to observables and handle the subscription lifecycle automatically. When working with RxDB, you can use the async pipe to observe an RxDB query and bind the results directly to your Angular template. This ensures that the UI stays in sync with the data changes emitted by the RxDB query. constructor( private dbService: DatabaseService, private dialog: MatDialog ) { this.heroes$ = this.dbService .db.hero // collection .find({ // query selector: {}, sort: [{ name: 'asc' }] }) .$; } <ul> <li *ngFor="let hero of heroes$ | async">{{hero.name}}</li> </ul> ","version":"Next","tagName":"h3"},{"title":"Different RxStorage layers for RxDB","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB supports multiple storage layers for persisting data. Some of the available storage options include: Dexie.js RxStorage: Dexie.js is a minimalistic IndexedDB wrapper that provides a simple API for working with IndexedDB. RxDB leverages Dexie.js as its default storage layer. IndexedDB RxStorage: RxDB directly supports IndexedDB as a storage layer. IndexedDB is a low-level browser database that offers good performance and reliability. OPFS RxStorage: The OPFS RxStorage for RxDB is built on top of the File System Access API which is available in all modern browsers. It provides an API to access a sandboxed private file system to persistently store and retrieve data. Compared to other persistent storage options in the browser (like IndexedDB), the OPFS API has much better performance. Memory RxStorage: In addition to persistent storage options, RxDB also provides a memory-based storage layer. This is useful for testing or scenarios where you don't need long-term data persistence. You can choose the storage layer that best suits your application's requirements and configure RxDB accordingly. ","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" Data replication between an Angular application and a server is a common requirement. RxDB simplifies this process and provides built-in support for data synchronization. Let's explore how to replicate data between an Angular application and a server using RxDB. 
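As a rough sketch of how such a replication can be wired up with RxDB's generic replication plugin (the endpoint URLs, identifier and response shapes below are hypothetical placeholders, not taken from the original article):
import { replicateRxCollection } from 'rxdb/plugins/replication';
const replicationState = replicateRxCollection({
    collection: db.hero, // an existing RxCollection
    replicationIdentifier: 'hero-replication-to-example-server', // hypothetical identifier
    live: true, // keep replicating while the app is open
    pull: {
        // fetch documents changed since the last checkpoint from a hypothetical endpoint
        handler: async (checkpoint, batchSize) => {
            const response = await fetch('https://example.com/api/pull?checkpoint=' + encodeURIComponent(JSON.stringify(checkpoint ?? {})) + '&limit=' + batchSize);
            return await response.json(); // expected shape: { documents: [...], checkpoint: {...} }
        }
    },
    push: {
        // send locally changed documents to the hypothetical endpoint; the server returns conflicting documents
        handler: async (changeRows) => {
            const response = await fetch('https://example.com/api/push', { method: 'POST', body: JSON.stringify(changeRows) });
            return await response.json();
        }
    }
});
replicationState.error$.subscribe(err => console.error('replication error', err));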
","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#offline-first-approach-1","content":" One of the key strengths of RxDB is its offline-first approach. It allows Angular applications to function seamlessly even in offline scenarios. RxDB stores data locally and automatically synchronizes changes with the server when the network becomes available. This capability is particularly useful for applications that need to operate in low-connectivity or unreliable network environments. ","version":"Next","tagName":"h3"},{"title":"Conflict Resolution","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#conflict-resolution","content":" In a distributed system, conflicts can arise when multiple clients modify the same data simultaneously. RxDB offers conflict resolution mechanisms to handle such scenarios. You can define conflict resolution strategies based on your application's requirements. RxDB provides hooks and events to detect conflicts and resolve them in a consistent manner. ","version":"Next","tagName":"h3"},{"title":"Bidirectional Synchronization","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#bidirectional-synchronization","content":" RxDB supports bidirectional data synchronization, allowing updates from both the client and server to be replicated seamlessly. This ensures that data remains consistent across all connected clients and the server. RxDB handles conflicts and resolves them based on the defined conflict resolution strategies. ","version":"Next","tagName":"h3"},{"title":"Real-Time Updates","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#real-time-updates","content":" RxDB provides real-time updates by leveraging reactive programming principles. Changes made to the data are automatically propagated to all connected clients in real-time. Angular applications can subscribe to these updates and update the user interface accordingly. This real-time capability enables collaborative features and enhances the overall user experience. ","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#advanced-rxdb-features-and-techniques","content":" RxDB offers several advanced features and techniques that can further enhance your Angular application. ","version":"Next","tagName":"h2"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#indexing-and-performance-optimization","content":" To improve query performance, RxDB allows you to define indexes on specific fields of your documents. Indexing enables faster data retrieval and query execution, especially when working with large datasets. By strategically creating indexes, you can optimize the performance of your Angular application. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#encryption-of-local-data","content":" RxDB provides built-in support for encrypting local data using the Web Crypto API. With encryption, you can protect sensitive data stored in the client-side database. 
RxDB transparently encrypts the data, ensuring that it remains secure even if the underlying storage is compromised. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#change-streams-and-event-handling","content":" RxDB exposes change streams, which allow you to listen for data changes at a database or collection level. By subscribing to change streams, you can react to data modifications and perform specific actions, such as updating the UI or triggering notifications. Change streams enable real-time event handling in your Angular application. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#json-key-compression","content":" To reduce the storage footprint and improve performance, RxDB supports JSON key compression. With key compression, RxDB replaces long keys with shorter aliases, reducing the overall storage size. This optimization is particularly useful when working with large datasets or frequently updating data. ","version":"Next","tagName":"h3"},{"title":"Best Practices for Using RxDB in Angular Applications","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#best-practices-for-using-rxdb-in-angular-applications","content":" To make the most of RxDB in your Angular application, consider the following best practices: ","version":"Next","tagName":"h2"},{"title":"Use Async Pipe for Subscriptions so you do not have to unsubscribe","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-async-pipe-for-subscriptions-so-you-do-not-have-to-unsubscribe","content":" Angular's async pipe is a powerful tool for handling observables in templates. By using the async pipe, you can avoid the need to manually subscribe and unsubscribe from RxDB observables. Angular takes care of the subscription lifecycle, ensuring that resources are released when they are no longer needed. Instead of manually subscribing to Observables, you should always prefer the async pipe. // WRONG: let amount; this.dbService .db.hero .find({ selector: {}, sort: [{ name: 'asc' }] }) .$.subscribe(docs => { amount = 0; docs.forEach(d => amount = amount + d.points); }); // RIGHT: this.amount$ = this.dbService .db.hero .find({ selector: {}, sort: [{ name: 'asc' }] }) .$.pipe( map(docs => { let amount = 0; docs.forEach(d => amount = amount + d.points); return amount; }) ); ","version":"Next","tagName":"h3"},{"title":"Use custom reactivity to have signals instead of rxjs observables","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-custom-reactivity-to-have-signals-instead-of-rxjs-observables","content":" RxDB supports adding custom reactivity factories that allow you to get Angular Signals out of the database instead of RxJS observables. Read more. ","version":"Next","tagName":"h3"},{"title":"Use Angular Services for Database creation","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-angular-services-for-database-creation","content":" To ensure proper separation of concerns and maintain a clean codebase, it is recommended to create an Angular service responsible for managing the RxDB database instance. 
This service can handle database creation, initialization, and provide methods for interacting with the database throughout your application. ","version":"Next","tagName":"h3"},{"title":"Efficient Data Handling","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#efficient-data-handling","content":" RxDB provides various mechanisms for efficient data handling, such as batching updates, debouncing, and throttling. Leveraging these techniques can help optimize performance and reduce unnecessary UI updates. Consider the specific data handling requirements of your application and choose the appropriate strategies provided by RxDB. ","version":"Next","tagName":"h3"},{"title":"Data Synchronization Strategies","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#data-synchronization-strategies","content":" When working with data synchronization between clients and servers, it's important to consider strategies for conflict resolution and handling network failures. RxDB provides plugins and hooks that allow you to customize the replication behavior and implement specific synchronization strategies tailored to your application's needs. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#conclusion","content":" RxDB is a powerful database solution for Angular applications, offering reactive data handling, offline-first capabilities, and seamless data synchronization. By integrating RxDB into your Angular application, you can build responsive and scalable web applications that provide a rich user experience. Whether you're building real-time collaborative apps, progressive web applications, or offline-capable applications, RxDB's features and techniques make it a valuable addition to your Angular development toolkit. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects.RxDB Angular Example at GitHub ","version":"Next","tagName":"h2"},{"title":"PouchDB Adapters","type":0,"sectionRef":"#","url":"/adapters.html","content":"","keywords":"","version":"Next"},{"title":"Memory","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#memory","content":" In any environment, you can use the memory-adapter. It stores the data in the javascript runtime memory. This means it is not persistent and the data is lost when the process terminates. 
Use this adapter when: You want to have really good performance. You do not want persistent state, for example in your test suite. import { createRxDatabase } from 'rxdb'; import { addPouchPlugin, getRxStoragePouch } from 'rxdb/plugins/pouchdb'; // npm install pouchdb-adapter-memory --save addPouchPlugin(require('pouchdb-adapter-memory')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('memory') }); ","version":"Next","tagName":"h2"},{"title":"Memdown","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#memdown","content":" With RxDB you can also use adapters that implement abstract-leveldown like the memdown-adapter. // npm install memdown --save // npm install pouchdb-adapter-leveldb --save addPouchPlugin(require('pouchdb-adapter-leveldb')); // leveldown adapters need the leveldb plugin to work const memdown = require('memdown'); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch(memdown) // the full leveldown-module }); Browser ","version":"Next","tagName":"h2"},{"title":"IndexedDB","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#indexeddb","content":" The IndexedDB adapter stores the data inside of IndexedDB. Use this in browser environments as the default. // npm install pouchdb-adapter-idb --save addPouchPlugin(require('pouchdb-adapter-idb')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('idb') }); ","version":"Next","tagName":"h2"},{"title":"IndexedDB","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#indexeddb-1","content":" A reimplementation of the indexeddb adapter which uses native secondary indexes. It should have much better performance but can behave differently in some edge cases. note Multiple users have reported problems with this adapter. It is not recommended to use this adapter. // npm install pouchdb-adapter-indexeddb --save addPouchPlugin(require('pouchdb-adapter-indexeddb')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('indexeddb') }); ","version":"Next","tagName":"h2"},{"title":"Websql","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#websql","content":" This adapter stores the data inside of WebSQL. It has a different performance behavior. WebSQL is deprecated; you should not use the websql adapter unless you have a really good reason. // npm install pouchdb-adapter-websql --save addPouchPlugin(require('pouchdb-adapter-websql')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('websql') }); NodeJS ","version":"Next","tagName":"h2"},{"title":"leveldown","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#leveldown","content":" This adapter uses a LevelDB C++ binding to store the data on the filesystem. It has the best performance compared to other filesystem adapters. This adapter cannot be used when multiple Node.js processes access the same filesystem folders for storage. 
// npm install leveldown --save // npm install pouchdb-adapter-leveldb --save addPouchPlugin(require('pouchdb-adapter-leveldb')); // leveldown adapters need the leveldb plugin to work const leveldown = require('leveldown'); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch(leveldown) // the full leveldown-module }); // or use a specific folder to store the data const database = await createRxDatabase({ name: '/root/user/project/mydatabase', storage: getRxStoragePouch(leveldown) // the full leveldown-module }); ","version":"Next","tagName":"h2"},{"title":"Node-Websql","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#node-websql","content":" This adapter uses the node-websql-shim to store data on the filesystem. Its advantages are that it does not need a leveldb build and it can be used when multiple Node.js processes use the same database files. // npm install pouchdb-adapter-node-websql --save addPouchPlugin(require('pouchdb-adapter-node-websql')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('websql') // the name of your adapter }); // or use a specific folder to store the data const database = await createRxDatabase({ name: '/root/user/project/mydatabase', storage: getRxStoragePouch('websql') // the name of your adapter }); React-Native ","version":"Next","tagName":"h2"},{"title":"react-native-sqlite","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#react-native-sqlite","content":" Uses ReactNative SQLite as storage. Claims to be much faster than the asyncstorage adapter. To use it, you have to do some steps from this tutorial. First install pouchdb-adapter-react-native-sqlite and react-native-sqlite-2. npm install pouchdb-adapter-react-native-sqlite react-native-sqlite-2 Then you have to link the library. react-native link react-native-sqlite-2 You also have to add some polyfills which are needed but not included in react-native. npm install base-64 events import { decode, encode } from 'base-64' if (!global.btoa) { global.btoa = encode; } if (!global.atob) { global.atob = decode; } // Avoid using node dependent modules process.browser = true; Then you can use it inside of your code. import { createRxDatabase } from 'rxdb'; import { addPouchPlugin, getRxStoragePouch } from 'rxdb/plugins/pouchdb'; import SQLite from 'react-native-sqlite-2' import SQLiteAdapterFactory from 'pouchdb-adapter-react-native-sqlite' const SQLiteAdapter = SQLiteAdapterFactory(SQLite) addPouchPlugin(SQLiteAdapter); addPouchPlugin(require('pouchdb-adapter-http')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('react-native-sqlite') // the name of your adapter }); ","version":"Next","tagName":"h2"},{"title":"asyncstorage","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#asyncstorage","content":" Uses react-native's asyncstorage. note There are known problems with this adapter and it is not recommended to use it. // npm install pouchdb-adapter-asyncstorage --save addPouchPlugin(require('pouchdb-adapter-asyncstorage')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('node-asyncstorage') // the name of your adapter }); ","version":"Next","tagName":"h2"},{"title":"asyncstorage-down","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#asyncstorage-down","content":" A leveldown adapter that stores on asyncstorage. 
// npm install pouchdb-adapter-asyncstorage-down --save addPouchPlugin(require('pouchdb-adapter-leveldb')); // leveldown adapters need the leveldb plugin to work const asyncstorageDown = require('asyncstorage-down'); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch(asyncstorageDown) // the full leveldown-module }); Cordova / Phonegap / Capacitor ","version":"Next","tagName":"h2"},{"title":"cordova-sqlite","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#cordova-sqlite","content":" Uses cordova's global cordova.sqlitePlugin. It can be used with cordova and capacitor. // npm install pouchdb-adapter-cordova-sqlite --save addPouchPlugin(require('pouchdb-adapter-cordova-sqlite')); /** * In capacitor/cordova you have to wait until all plugins are loaded and 'window.sqlitePlugin' * can be accessed. * This function waits until document deviceready is called which ensures that everything is loaded. * @link https://cordova.apache.org/docs/de/latest/cordova/events/events.deviceready.html */ export function awaitCapacitorDeviceReady(): Promise<void> { return new Promise(res => { document.addEventListener('deviceready', () => { res(); }); }); } async function getDatabase(){ // first wait until the deviceready event is fired await awaitCapacitorDeviceReady(); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch( 'cordova-sqlite', // pouch settings are passed as second parameter { // for ios devices, the cordova-sqlite adapter needs to know where to save the data. iosDatabaseLocation: 'Library' } ) }); } ","version":"Next","tagName":"h2"},{"title":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","type":0,"sectionRef":"#","url":"/articles/angular-indexeddb.html","content":"","keywords":"","version":"Next"},{"title":"What Is IndexedDB?","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#what-is-indexeddb","content":" IndexedDB is a low-level JavaScript API for client-side storage of large amounts of structured data. It allows you to create key-value or object store-based data storage right in the user's browser. IndexedDB supports transactions and indexing but lacks a robust query API and can be complex to use due to its callback-based nature. ","version":"Next","tagName":"h2"},{"title":"Why Use IndexedDB in Angular","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#why-use-indexeddb-in-angular","content":" Offline-First / Local-First: If your app needs to function with limited or no internet connectivity, IndexedDB provides a reliable local storage layer. Users can continue using the application offline, and data can sync when the connection is restored. Performance: Local data access comes with near-zero latency, removing the need for constant server requests and eliminating most loading spinners. Easier to Implement: By replicating all necessary data to the client once, you avoid implementing numerous backend endpoints for each user interaction. Scalability: Local data queries remove processing load from your servers and reduce bandwidth usage by handling queries on the client side. 
","version":"Next","tagName":"h2"},{"title":"Why Using Plain IndexedDB is a Problem","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#why-using-plain-indexeddb-is-a-problem","content":" Despite the advantages, directly working with IndexedDB has several drawbacks: Callback-Based: IndexedDB was originally designed around a callback-based API, which can be unwieldy compared to modern Promise or RxJS-based flows. Difficult to Implement: IndexedDB is often described as a "low-level" API. It's more suitable for library authors rather than application developers who simply need a robust local store. Rudimentary Query API: Complex or dynamic queries are cumbersome with IndexedDB's basic get/put approach and limited indexes. TypeScript Support: Maintaining strong TypeScript types for all document structures is not straightforward with IndexedDB's untyped object stores. No Observable API: IndexedDB cannot directly emit live data changes. With RxDB, you can subscribe to changes on a collection or even a single document field. Cross-Tab Synchronization: Handling concurrent data changes across multiple browser tabs is difficult in IndexedDB. RxDB has built-in multi-tab support that keeps all tabs in sync. Advanced Features Missing: IndexedDB lacks built-in support for encryption, compression, or other advanced data management features. Browser-Only: IndexedDB works in the browser but not in environments like React Native or Electron. RxDB offers storage adapters to seamlessly reuse the same code on different platforms. ","version":"Next","tagName":"h2"},{"title":"Set Up RxDB in Angular","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#set-up-rxdb-in-angular","content":" ","version":"Next","tagName":"h2"},{"title":"Installing RxDB","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#installing-rxdb","content":" You can install RxDB into your Angular application via npm: npm install rxdb --save ","version":"Next","tagName":"h3"},{"title":"Patch Change Detection with zone.js","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#patch-change-detection-with-zonejs","content":" RxDB creates RxJS observables outside of Angular's zone, meaning Angular won't automatically trigger change detection when new data arrives. You must patch RxJS with zone.js: //> app.component.ts /** * IMPORTANT: RxDB creates rxjs observables outside of Angular's zone * So you have to import the rxjs patch to ensure change detection works correctly. * @link https://www.bennadel.com/blog/3448-binding-rxjs-observable-sources-outside-of-the-ngzone-in-angular-6-0-2.htm */ import 'zone.js/plugins/zone-patch-rxjs'; ","version":"Next","tagName":"h3"},{"title":"Create a Database and Collections","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#create-a-database-and-collections","content":" RxDB supports multiple storage options. The free and simple approach is using the Dexie.js-based storage. For higher performance, there's a premium plain IndexedDB storage. 
import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // Define your schema const heroSchema = { title: 'hero schema', version: 0, description: 'Describes a hero in your app', primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, name: { type: 'string' }, power: { type: 'string' } }, required: ['id', 'name'] }; export async function initDB() { // Create a database const db = await createRxDatabase({ name: 'heroesdb', // the name of the database storage: getRxStorageDexie() }); // Add collections await db.addCollections({ heroes: { schema: heroSchema } }); return db; } It's recommended to encapsulate database creation logic in an Angular service, such as in a DatabaseService. A full example is available in RxDB's Angular example. ","version":"Next","tagName":"h3"},{"title":"CRUD Operations","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#crud-operations","content":" Once your database is initialized, you can perform all CRUD operations: // insert (the 'id' primary key is required by the schema) await db.heroes.insert({ id: 'iron-man', name: 'Iron Man', power: 'Genius-level intellect' }); // bulk insert await db.heroes.bulkInsert([ { id: 'thor', name: 'Thor', power: 'God of Thunder' }, { id: 'hulk', name: 'Hulk', power: 'Superhuman Strength' } ]); // find and findOne const heroes = await db.heroes.find().exec(); const ironMan = await db.heroes.findOne({ selector: { name: 'Iron Man' } }).exec(); // update const hulk = await db.heroes.findOne({ selector: { name: 'Hulk' } }).exec(); await hulk.update({ $set: { power: 'Unlimited Strength' } }); // delete const thor = await db.heroes.findOne({ selector: { name: 'Thor' } }).exec(); await thor.remove(); ","version":"Next","tagName":"h3"},{"title":"Reactive Queries and Live Updates","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#reactive-queries-and-live-updates","content":" A key benefit of RxDB is reactivity. You can subscribe to changes and have your UI automatically reflect updates in real time even across browser tabs. ","version":"Next","tagName":"h2"},{"title":"With RxJS Observables and Async Pipes","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#with-rxjs-observables-and-async-pipes","content":" In Angular, you can display this data with the AsyncPipe: constructor(private dbService: DatabaseService) { this.heroes$ = this.dbService.db.heroes.find({ selector: {}, sort: [{ name: 'asc' }] }).$; } <ul> <li *ngFor="let hero of heroes$ | async"> {{ hero.name }} </li> </ul> ","version":"Next","tagName":"h3"},{"title":"With Angular Signals","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#with-angular-signals","content":" Angular Signals are a newer approach for reactivity. RxDB supports them via a custom reactivity factory. 
You can convert RxJS Observables to Signals using Angular's toSignal: import { RxReactivityFactory } from 'rxdb/plugins/core'; import { Signal, untracked, Injector } from '@angular/core'; import { toSignal } from '@angular/core/rxjs-interop'; export function createReactivityFactory(injector: Injector): RxReactivityFactory<Signal<any>> { return { fromObservable(observable$, initialValue) { return untracked(() => toSignal(observable$, { initialValue, injector, rejectErrors: true }) ); } }; } Pass this factory when creating your RxDatabase: import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { inject, Injector } from '@angular/core'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: createReactivityFactory(inject(Injector)) }); Use the double-dollar sign ($$) to get a Signal instead of an Observable: const heroesSignal = database.heroes.find().$$; <ul> <li *ngFor="let hero of heroesSignal()"> {{ hero.name }} </li> </ul> ","version":"Next","tagName":"h3"},{"title":"Angular IndexedDB Example with RxDB","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#angular-indexeddb-example-with-rxdb","content":" A comprehensive example of RxDB in an Angular application is available in the RxDB GitHub repository. It demonstrates database creation, queries, and Angular integration using best practices. ","version":"Next","tagName":"h2"},{"title":"Advanced RxDB Features","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#advanced-rxdb-features","content":" Beyond simple CRUD and local data storage, RxDB supports: Replication: Sync your local data with a remote database. Learn more at RxDB Replication. Data Migration on Schema Changes: RxDB supports automatic or manual schema migrations to manage backward-compatibility and evolve your data structure. See RxDB Migration. Encryption: Easily encrypt sensitive data at rest. See RxDB Encryption. Compression: Reduce storage and bandwidth usage using key compression. Learn more at RxDB Key Compression. ","version":"Next","tagName":"h2"},{"title":"Limitations of IndexedDB","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#limitations-of-indexeddb","content":" While IndexedDB works well for many use cases, it does have a few constraints: Potentially Slow: While adequate for most use cases, IndexedDB performance can degrade for very large datasets. More details at RxDB Slow IndexedDB. Storage Limits: Browsers may cap the amount of data you can store in IndexedDB. For more info, see Local Storage Limits of IndexedDB. ","version":"Next","tagName":"h2"},{"title":"Alternatives to IndexedDB","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#alternatives-to-indexeddb","content":" Depending on your needs, you might explore: Origin Private File System (OPFS): A newer browser storage mechanism that can offer better performance. RxDB supports OPFS storage. SQLite: When building a mobile or hybrid app (e.g., with Capacitor or Ionic), you can use SQLite locally. See RxDB with SQLite. 
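As a rough sketch of how swapping in such an alternative could look (this assumes the premium OPFS plugin exposes getRxStorageOPFS from 'rxdb-premium/plugins/storage-opfs'; treat the exact import path and function name as assumptions): import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageOPFS } from 'rxdb-premium/plugins/storage-opfs'; // hypothetical premium import path const db = await createRxDatabase({ name: 'appdb', // example database name storage: getRxStorageOPFS() // OPFS-backed storage instead of Dexie.js/IndexedDB }); Everything else (schemas, queries, replication) stays the same, because only the storage layer changes. 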
","version":"Next","tagName":"h2"},{"title":"Performance comparison with other browser storages","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#performance-comparison-with-other-browser-storages","content":" Here is a performance overview of the various browser based storage implementation of RxDB: ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"Build Smarter Offline-First Angular Apps: How RxDB Beats IndexedDB Alone","url":"/articles/angular-indexeddb.html#follow-up","content":" Continue your deep dive into RxDB with official quickstart guides and star the repository on GitHub to stay updated. RxDB Quickstart: Get started quickly with the RxDB Quickstart. RxDB GitHub: Explore the source, open issues, and star ⭐ the project at RxDB GitHub Repo. By combining IndexedDB's local storage with RxDB's powerful features, you can build performant, robust, and offline-capable Angular applications. RxDB takes care of the lower-level complexities, letting you focus on delivering a great user experience-online or off. ","version":"Next","tagName":"h2"},{"title":"Browser Storage - RxDB as a Database for Browsers","type":0,"sectionRef":"#","url":"/articles/browser-storage.html","content":"","keywords":"","version":"Next"},{"title":"Localstorage","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#localstorage","content":" Localstorage is a straightforward way to store small amounts of data in the user's web browser. It operates on a simple key-value basis and is relatively easy to use. While it has limitations, it is suitable for basic data storage requirements. ","version":"Next","tagName":"h3"},{"title":"IndexedDB","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#indexeddb","content":" IndexedDB, on the other hand, offers a more robust and structured approach to browser-based data storage. It can handle larger datasets and complex queries, making it a valuable choice for more advanced web applications. ","version":"Next","tagName":"h3"},{"title":"Why Store Data in the Browser","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#why-store-data-in-the-browser","content":" Now that we've explored the methods of storing data in the browser, let's delve into why this is a beneficial strategy for web developers: Caching: Storing data in the browser allows you to cache frequently used information. This means that your web application can access essential data more quickly because it doesn't need to repeatedly fetch it from a server. This results in a smoother and more responsive user experience. Offline Access: One significant advantage of browser storage is that data becomes portable and remains accessible even when the user is offline. This feature ensures that users can continue to use your application, view their saved information, and make changes, irrespective of their internet connection status. Faster Real-time Applications: For real-time applications, having data stored locally in the browser significantly enhances performance. Local data allows your application to respond faster to user interactions, creating a more seamless and responsive user interface. Low Latency Queries: When you run queries locally within the browser, you minimize the latency associated with network requests. 
This results in near-instant access to data, which is particularly crucial for applications that require rapid data retrieval. Faster Initial Application Start Time: By preloading essential data into browser storage, you can reduce the initial load time of your web application. Users can start using your application more swiftly, which is essential for making a positive first impression. Store Local Data with Encryption: For applications that deal with sensitive data, browser storage allows you to implement encryption to secure the stored information. This ensures that even if data is stored on the user's device, it remains confidential and protected. In summary, storing data in the browser offers several advantages, including improved performance, offline access, and enhanced user experiences. Localstorage and IndexedDB are two valuable tools that developers can utilize to leverage these benefits and create web applications that are more responsive and user-friendly. ","version":"Next","tagName":"h2"},{"title":"Browser Storage Limitations","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#browser-storage-limitations","content":" While browser storage, such as Localstorage and IndexedDB, offers many advantages, it's important to be aware of its limitations: Slower Performance Compared to Native Databases: Browser-based storage solutions can't match the performance of native server-side databases. They may experience slower data retrieval and processing, especially for large datasets or complex operations. Storage Space Limitations: Browsers impose restrictions on the amount of data that can be stored locally. This limitation can be problematic for applications with extensive data storage requirements, potentially necessitating creative solutions to manage data effectively. ","version":"Next","tagName":"h2"},{"title":"Why SQL Databases Like SQLite Aren't a Good Fit for the Browser","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#why-sql-databases-like-sqlite-arent-a-good-fit-for-the-browser","content":" SQL databases like SQLite, while powerful in server environments, may not be the best choice for browser-based applications due to various reasons: ","version":"Next","tagName":"h2"},{"title":"Push/Pull Based vs. Reactive","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#pushpull-based-vs-reactive","content":" SQL databases often use a push/pull model for data synchronization. This approach is less reactive and may not align well with the real-time nature of web applications, where immediate updates to the user interface are crucial. ","version":"Next","tagName":"h3"},{"title":"Build Size of Server-Side Databases","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#build-size-of-server-side-databases","content":" Server-side databases like SQLite have a significant build size, which can increase the initial load time of web applications. This can result in a suboptimal user experience, particularly for users with slower internet connections. 
","version":"Next","tagName":"h3"},{"title":"Initialization Time and Performance","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#initialization-time-and-performance","content":" SQL databases are optimized for server environments, and their initialization processes and performance characteristics may not align with the needs of web applications. They might not offer the swift performance required for seamless user interactions. ","version":"Next","tagName":"h3"},{"title":"Why RxDB Is a Good Fit as Browser Storage","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#why-rxdb-is-a-good-fit-as-browser-storage","content":" RxDB is an excellent choice for browser-based storage due to its numerous features and advantages: ","version":"Next","tagName":"h2"},{"title":"Flexible Storage Layer for Various Platforms","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#flexible-storage-layer-for-various-platforms","content":" RxDB offers a flexible storage layer that can seamlessly integrate with different platforms, making it versatile and adaptable to various application needs. ","version":"Next","tagName":"h3"},{"title":"NoSQL JSON Documents Are a Better Fit for UIs","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#nosql-json-documents-are-a-better-fit-for-uis","content":" NoSQL JSON documents, used by RxDB, are well-suited for user interfaces. They provide a natural and efficient way to structure and display data in web applications. ","version":"Next","tagName":"h3"},{"title":"NoSQL Has Better TypeScript Support Compared to SQL","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#nosql-has-better-typescript-support-compared-to-sql","content":" RxDB boasts robust TypeScript support, which is beneficial for developers who prefer type safety and code predictability in their projects. ","version":"Next","tagName":"h3"},{"title":"Observable Document Fields","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#observable-document-fields","content":" RxDB enables developers to observe individual document fields, offering fine-grained control over data tracking and updates. ","version":"Next","tagName":"h3"},{"title":"Made in JavaScript, Optimized for JavaScript Applications","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#made-in-javascript-optimized-for-javascript-applications","content":" Being built in JavaScript and optimized for JavaScript applications, RxDB seamlessly integrates into web development stacks, minimizing compatibility issues. ","version":"Next","tagName":"h3"},{"title":"Observable Queries (rxjs) to Automatically Update the UI on Changes","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#observable-queries-rxjs-to-automatically-update-the-ui-on-changes","content":" RxDB's support for Observable Queries allows the user interface to update automatically in real-time when data changes. This reactivity enhances the user experience and simplifies UI development. 
const query = myCollection.find({ selector: { age: { $gt: 21 } } }); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); ","version":"Next","tagName":"h3"},{"title":"Optimized Observed Queries with the EventReduce Algorithm","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#optimized-observed-queries-with-the-eventreduce-algorithm","content":" RxDB's EventReduce Algorithm ensures efficient data handling and rendering, improving overall performance and responsiveness. ","version":"Next","tagName":"h3"},{"title":"Handling of Schema Changes","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#handling-of-schema-changes","content":" RxDB provides built-in support for handling schema changes, simplifying database management when updates are required. ","version":"Next","tagName":"h3"},{"title":"Built-In Multi-Tab Support","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#built-in-multi-tab-support","content":" For applications requiring multi-tab support, RxDB natively handles data consistency across different browser tabs, streamlining data synchronization. ","version":"Next","tagName":"h3"},{"title":"Storing Documents Compressed","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#storing-documents-compressed","content":" Efficient data storage is achieved through document compression, reducing storage space requirements and enhancing overall performance. ","version":"Next","tagName":"h3"},{"title":"Replication Algorithm for Compatibility with Any Backend","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#replication-algorithm-for-compatibility-with-any-backend","content":" RxDB's Replication Algorithm facilitates compatibility with various backend systems, ensuring seamless data synchronization between the browser and server. ","version":"Next","tagName":"h3"},{"title":"Summary","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#summary","content":" In conclusion, RxDB is a powerful and feature-rich solution for browser-based storage. Its adaptability, real-time capabilities, TypeScript support, and optimization for JavaScript applications make it an ideal choice for modern web development projects, addressing the limitations of traditional SQL databases in the browser. Developers can harness RxDB to create efficient, responsive, and user-friendly web applications that leverage the full potential of browser storage. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser storage, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects. 
","version":"Next","tagName":"h2"},{"title":"RxDB: The benefits of Browser Databases","type":0,"sectionRef":"#","url":"/articles/browser-database.html","content":"","keywords":"","version":"Next"},{"title":"Why you might want to store data in the browser","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#why-you-might-want-to-store-data-in-the-browser","content":" There are compelling reasons to consider storing data in the browser: ","version":"Next","tagName":"h2"},{"title":"Use the database for caching","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#use-the-database-for-caching","content":" By leveraging a browser database, you can harness the power of caching. Storing frequently accessed data locally enables you to reduce server requests and greatly improve application performance. Caching provides a faster and smoother user experience, enhancing overall user satisfaction. ","version":"Next","tagName":"h3"},{"title":"Data is offline accessible","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#data-is-offline-accessible","content":" Storing data in the browser allows for offline accessibility. Regardless of an active internet connection, users can access and interact with the application, ensuring uninterrupted productivity and user engagement. ","version":"Next","tagName":"h3"},{"title":"Easier implementation of replicating database state","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#easier-implementation-of-replicating-database-state","content":" Browser databases simplify the replication of database state across multiple devices or instances of the application. Compared to complex REST routes, replicating data becomes easier and more streamlined. This capability enables the development of real-time and collaborative applications, where changes are seamlessly synchronized among users. ","version":"Next","tagName":"h3"},{"title":"Building real-time applications is easier with local data","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#building-real-time-applications-is-easier-with-local-data","content":" With a local browser database, building real-time applications becomes more straightforward. The availability of local data allows for reactive data flows and dynamic user interfaces that instantly reflect changes in the underlying data. Real-time features can be seamlessly implemented, providing a rich and interactive user experience. ","version":"Next","tagName":"h3"},{"title":"Browser databases can scale better","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#browser-databases-can-scale-better","content":" Browser databases distribute the query workload to users' devices, allowing queries to run locally instead of relying solely on server resources. This decentralized approach improves scalability by reducing the burden on the server, resulting in a more efficient and responsive application. ","version":"Next","tagName":"h3"},{"title":"Running queries locally has low latency","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#running-queries-locally-has-low-latency","content":" Browser databases offer the advantage of running queries locally, resulting in low latency. 
Eliminating the need for server round-trips significantly improves query performance, ensuring faster data retrieval and a more responsive application. ","version":"Next","tagName":"h3"},{"title":"Faster initial application start time","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#faster-initial-application-start-time","content":" Storing data in the browser reduces the initial application start time. Instead of waiting for data to be fetched from the server, the application can leverage the local database, resulting in faster initialization and improved user satisfaction right from the start. ","version":"Next","tagName":"h3"},{"title":"Easier integration with JavaScript frameworks","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#easier-integration-with-javascript-frameworks","content":" Browser databases, including RxDB, seamlessly integrate with popular JavaScript frameworks such as Angular, React.js, Vue.js, and Svelte. This integration allows developers to leverage the power of a database while working within the familiar environment of their preferred framework, enhancing productivity and ease of development. ","version":"Next","tagName":"h3"},{"title":"Store local data with encryption","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#store-local-data-with-encryption","content":" Security is a crucial aspect of data storage, especially when handling sensitive information. Browser databases, like RxDB, offer the capability to store local data with encryption, ensuring the confidentiality and protection of sensitive user data. ","version":"Next","tagName":"h3"},{"title":"Using a local database for state management","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#using-a-local-database-for-state-management","content":" Utilizing a local browser database for state management eliminates the need for traditional state management libraries like Redux or NgRx. This approach simplifies the application's architecture by leveraging the database's capabilities to handle state-related operations efficiently. ","version":"Next","tagName":"h3"},{"title":"Data is portable and always accessible by the user","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#data-is-portable-and-always-accessible-by-the-user","content":" When data is stored in the browser, it becomes portable and always accessible by the user. This ensures that users have control and ownership of their data, enhancing data privacy and accessibility. ","version":"Next","tagName":"h3"},{"title":"Why SQL databases like SQLite are not a good fit for the browser","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#why-sql-databases-like-sqlite-are-not-a-good-fit-for-the-browser","content":" While SQL databases, such as SQLite, excel in server-side scenarios, they are not always the optimal choice for browser-based applications. Here are some reasons why SQL databases may not be the best fit for the browser: ","version":"Next","tagName":"h2"},{"title":"Push/Pull based vs. 
reactive","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#pushpull-based-vs-reactive","content":" SQL databases typically rely on a push/pull mechanism, where the server pushes updates to the client or the client pulls data from the server. This approach is not inherently reactive and requires additional effort to implement real-time data updates. In contrast, browser databases like RxDB provide built-in reactive mechanisms, allowing the application to react to data changes seamlessly. ","version":"Next","tagName":"h3"},{"title":"Build size of server-side databases","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#build-size-of-server-side-databases","content":" Server-side databases, designed to handle large-scale applications, often have significant build sizes that are unsuitable for browser applications. In contrast, browser databases are specifically optimized for browser environments and leverage browser APIs like IndexedDB, OPFS, and Webworker, resulting in smaller build sizes. ","version":"Next","tagName":"h3"},{"title":"Initialization time and performance","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#initialization-time-and-performance","content":" The initialization time and performance of server-side databases can be suboptimal in browser applications. Browser databases, on the other hand, are designed to provide fast initialization and efficient performance within the browser environment, ensuring a smooth user experience. ","version":"Next","tagName":"h3"},{"title":"Why RxDB is a good fit for the browser","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#why-rxdb-is-a-good-fit-for-the-browser","content":" RxDB stands out as an excellent choice for implementing a browser database solution. Here's why RxDB is a perfect fit for browser applications: ","version":"Next","tagName":"h2"},{"title":"Observable Queries (rxjs) to automatically update the UI on changes","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#observable-queries-rxjs-to-automatically-update-the-ui-on-changes","content":" RxDB provides Observable Queries, powered by RxJS, enabling automatic UI updates when data changes occur. This reactive approach eliminates the need for manual data synchronization and ensures a real-time and responsive user interface. const query = myCollection.find({ selector: { age: { $gt: 21 } } }); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); ","version":"Next","tagName":"h3"},{"title":"NoSQL JSON documents are a better fit for UIs","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#nosql-json-documents-are-a-better-fit-for-uis","content":" RxDB utilizes NoSQL JSON documents, which align naturally with UI development in JavaScript. JavaScript's native handling of JSON objects makes working with NoSQL documents more intuitive, simplifying UI-related operations. ","version":"Next","tagName":"h3"},{"title":"NoSQL has better TypeScript support compared to SQL","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#nosql-has-better-typescript-support-compared-to-sql","content":" TypeScript is widely used in modern JavaScript development. 
NoSQL databases, including RxDB, offer excellent TypeScript support, making it easier to build type-safe applications and leverage the benefits of static typing. ","version":"Next","tagName":"h3"},{"title":"Observable document fields","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#observable-document-fields","content":" RxDB allows observing individual document fields, providing granular reactivity. This feature enables efficient tracking of specific data changes and fine-grained UI updates, optimizing performance and responsiveness. ","version":"Next","tagName":"h3"},{"title":"Made in JavaScript, optimized for JavaScript applications","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#made-in-javascript-optimized-for-javascript-applications","content":" RxDB is built entirely in JavaScript, optimized for JavaScript applications. This ensures seamless integration with JavaScript codebases and maximizes performance within the browser environment. ","version":"Next","tagName":"h3"},{"title":"Optimized observed queries with the EventReduce Algorithm","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#optimized-observed-queries-with-the-eventreduce-algorithm","content":" RxDB employs the EventReduce Algorithm to optimize observed queries. This algorithm intelligently reduces unnecessary data transmissions, resulting in efficient query execution and improved performance. ","version":"Next","tagName":"h3"},{"title":"Built-in multi-tab support","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#built-in-multi-tab-support","content":" RxDB natively supports multi-tab applications, allowing data synchronization and replication across different tabs or instances of the same application. This feature ensures consistent data across the application and enhances collaboration and real-time experiences. ","version":"Next","tagName":"h3"},{"title":"Handling of schema changes","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#handling-of-schema-changes","content":" RxDB excels in handling schema changes, even when data is stored on multiple client devices. It provides mechanisms to handle schema migrations seamlessly, ensuring data integrity and compatibility as the application evolves. ","version":"Next","tagName":"h3"},{"title":"Storing documents compressed","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#storing-documents-compressed","content":" To optimize storage space, RxDB allows the compression of documents. Storing compressed documents reduces storage requirements and improves overall performance, especially in scenarios with large data volumes. ","version":"Next","tagName":"h3"},{"title":"Flexible storage layer for various platforms","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#flexible-storage-layer-for-various-platforms","content":" RxDB offers a flexible storage layer, enabling code reuse across different platforms, including Electron.js, React Native, hybrid apps (e.g., Capacitor.js), and web browsers. This flexibility streamlines development efforts and ensures consistent data management across multiple platforms. 
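As a minimal sketch of this flexibility (using the open-source Dexie.js and Memory storages that RxDB ships as plugins), the platform-specific part is only the storage you pass in: import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; // pick the storage per environment, the rest of the database code stays identical const storage = typeof indexedDB !== 'undefined' ? getRxStorageDexie() // IndexedDB via Dexie.js in browsers : getRxStorageMemory(); // in-memory fallback, e.g. for tests or Node.js const db = await createRxDatabase({ name: 'appdb', storage }); // example database name 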
","version":"Next","tagName":"h3"},{"title":"Replication Algorithm for compatibility with any backend","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#replication-algorithm-for-compatibility-with-any-backend","content":" RxDB incorporates a Replication Algorithm that is open-source and can be made compatible with various backend systems. This compatibility allows seamless data synchronization with different backend architectures, such as own servers, Firebase, CouchDB, NATS or WebSocket. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects. RxDB empowers developers to unlock the power of browser databases, enabling efficient data management, real-time applications, and enhanced user experiences. By leveraging RxDB's features and benefits, you can take your browser-based applications to the next level of performance, scalability, and responsiveness. ","version":"Next","tagName":"h2"},{"title":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","type":0,"sectionRef":"#","url":"/articles/data-base.html","content":"","keywords":"","version":"Next"},{"title":"Overview of Web Applications that can benefit from RxDB","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#overview-of-web-applications-that-can-benefit-from-rxdb","content":" Before diving into the specifics of RxDB, let's take a moment to understand the scope of web applications that can leverage its capabilities. Any web application that requires real-time data updates, offline functionality, and synchronization between clients and servers can greatly benefit from RxDB. Whether it's a collaborative document editing tool, a task management app, or a chat application, RxDB offers a robust foundation for building these types of applications. ","version":"Next","tagName":"h2"},{"title":"Importance of data bases in Mobile Applications","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#importance-of-data-bases-in-mobile-applications","content":" Mobile applications have become an integral part of our lives, providing us with instant access to information and services. Behind the scenes, data bases play a pivotal role in storing and managing the data that powers these applications. data bases enable efficient data retrieval, updates, and synchronization, ensuring a smooth user experience even in challenging network conditions. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a data base Solution","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#introducing-rxdb-as-a-data-base-solution","content":" RxDB, short for Reactive data base, is a client-side data base solution designed specifically for web and mobile applications. 
Built on the principles of reactive programming, RxDB brings the power of observables and event-driven architecture to data management. With RxDB, developers can create applications that are responsive, offline-ready, and capable of seamless data synchronization between clients and servers. ","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#getting-started-with-rxdb","content":" ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#what-is-rxdb","content":" RxDB is an open-source JavaScript data base that leverages reactive programming and provides a seamless API for handling data. It is built on top of existing popular data base technologies, such as IndexedDB, and adds a layer of reactive features to enable real-time data updates and synchronization. ","version":"Next","tagName":"h3"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#reactive-data-handling","content":" One of the standout features of RxDB is its reactive data handling. It utilizes observables to provide a stream of data that automatically updates whenever a change occurs. This reactive approach allows developers to build applications that respond instantly to data changes, ensuring a highly interactive and real-time user experience. ","version":"Next","tagName":"h3"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#offline-first-approach","content":" RxDB embraces an offline-first approach, enabling applications to work seamlessly even when there is no internet connectivity. It achieves this by caching data locally on the client-side and synchronizing it with the server when the connection is available. This ensures that users can continue working with the application and have their data automatically synchronized when they come back online. ","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#data-replication","content":" RxDB simplifies the process of data replication between clients and servers. It provides replication plugins that handle the synchronization of data in real-time. These plugins allow applications to keep data consistent across multiple clients, enabling collaborative features and ensuring that each client has the most up-to-date information. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#observable-queries","content":" RxDB introduces the concept of observable queries, which are powerful tools for efficiently querying data. With observable queries, developers can subscribe to specific data queries and receive automatic updates whenever the underlying data changes. This eliminates the need for manual polling and ensures that applications always have access to the latest data. 
","version":"Next","tagName":"h3"},{"title":"Multi-Tab support","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#multi-tab-support","content":" RxDB offers multi-tab support, allowing applications to function seamlessly across multiple browser tabs. This feature ensures that data changes in one tab are immediately reflected in all other open tabs, enabling a consistent user experience across different browser windows. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. Other data base Options","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#rxdb-vs-other-data-base-options","content":" When considering data base options for web applications, developers often encounter choices like Dexie.js, IndexedDB, OPFS, and Memory-based solutions. RxDB, while built on top of IndexedDB, stands out due to its reactive data handling capabilities and advanced synchronization features. Compared to other options, RxDB offers a more streamlined and powerful approach to managing data in web applications. ","version":"Next","tagName":"h3"},{"title":"Different RxStorage layers for RxDB","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#different-rxstorage-layers-for-rxdb","content":" RxDB provides various storage layers, known as RxStorage, that serve as interfaces to different underlying storage technologies. These layers include: Dexie.js RxStorage: Built on top of Dexie.js, this storage layer leverages IndexedDB as its backend.IndexedDB RxStorage: This layer directly utilizes IndexedDB as its backend, providing a robust and widely supported storage option.OPFS RxStorage: OPFS (Operational Transformation File System) is a file system-like storage layer that allows for efficient conflict resolution and real-time collaboration.Memory RxStorage: Primarily used for testing and development, this storage layer keeps data in memory without persisting it to disk. Each RxStorage layer has its strengths and is suited for different scenarios, enabling developers to choose the most appropriate option for their specific use case. ","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" ","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#offline-first-approach-1","content":" As mentioned earlier, RxDB adopts an offline-first approach, allowing applications to function seamlessly in disconnected environments. By caching data locally, applications can continue to operate and make updates even without an internet connection. Once the connection is restored, RxDB's replication plugins take care of synchronizing the data with the server, ensuring consistency across all clients. 
","version":"Next","tagName":"h3"},{"title":"RxDB Replication Plugins","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#rxdb-replication-plugins","content":" RxDB provides a range of replication plugins that simplify the process of synchronizing data between clients and servers. These plugins enable real-time replication using various protocols, such as WebSocket or HTTP, and handle conflict resolution strategies to ensure data integrity. By leveraging these replication plugins, developers can easily implement robust and scalable synchronization capabilities in their applications. ","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#advanced-rxdb-features-and-techniques","content":" Indexing and Performance Optimization To achieve optimal performance, RxDB offers indexing capabilities. Indexing allows for efficient data retrieval and faster query execution. By strategically defining indexes on frequently accessed fields, developers can significantly enhance the overall performance of their RxDB-powered applications. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#encryption-of-local-data","content":" In scenarios where data security is paramount, RxDB provides options for encrypting local data. By encrypting the data base contents, developers can ensure that sensitive information remains secure even if the underlying storage is compromised. RxDB integrates seamlessly with encryption libraries, making it easy to implement end-to-end encryption in applications. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#change-streams-and-event-handling","content":" RxDB offers change streams and event handling mechanisms, enabling developers to react to data changes in real-time. With change streams, applications can listen to specific collections or documents and trigger custom logic whenever a change occurs. This capability opens up possibilities for building real-time collaboration features, notifications, or other reactive behaviors. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#json-key-compression","content":" In scenarios where storage size is a concern, RxDB provides JSON key compression. By applying compression techniques to JSON keys, developers can significantly reduce the storage footprint of their data bases. This feature is particularly beneficial for applications dealing with large datasets or limited storage capacities. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#conclusion","content":" RxDB provides an exceptional data base solution for web and mobile applications, empowering developers to create reactive, offline-ready, and synchronized applications. 
With its reactive data handling, offline-first approach, and replication plugins, RxDB simplifies the challenges of building real-time applications with data synchronization requirements. By embracing advanced features like indexing, encryption, change streams, and JSON key compression, developers can optimize performance, enhance security, and reduce storage requirements. As web and mobile applications continue to evolve, RxDB proves to be a reliable and powerful ","version":"Next","tagName":"h2"},{"title":"Using RxDB as an Embedded Database","type":0,"sectionRef":"#","url":"/articles/embedded-database.html","content":"","keywords":"","version":"Next"},{"title":"What is an Embedded Database?","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#what-is-an-embedded-database","content":" An embedded database refers to a client-side database system that is integrated directly within an application. It is designed to operate within the client environment, such as a web browser or a mobile app. This approach eliminates the need for a separate database server and allows the database to run locally on the client device. ","version":"Next","tagName":"h2"},{"title":"Embedded Database in UI Applications","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#embedded-database-in-ui-applications","content":" In the context of UI applications, an embedded database serves as a local data storage solution. It enables applications to efficiently manage data, facilitate real-time updates, and enhance performance. Let's explore some of the benefits of using an embedded database compared to a traditional server database: Replicating database state becomes easier: Implementing real-time data synchronization and replication is simpler with an embedded database compared to complex REST routes. The embedded nature allows for efficient replication of the database state across multiple instances of the application.Use the database for caching: An embedded database can be utilized for caching frequently accessed data. This caching mechanism enhances performance and reduces the need for repeated network requests, resulting in faster data retrieval.Building real-time applications is easier with local data: By leveraging local data storage, real-time applications can easily update the user interface in response to data changes. This approach simplifies the development of real-time features and enhances the responsiveness of the application.Store local data with encryption: Embedded databases, like RxDB, offer the ability to store local data with encryption. This ensures that sensitive information remains protected even when stored locally on the client device.Data is offline accessible: With an embedded database, data remains accessible even when the application is offline. Users can continue to interact with the application and access their data seamlessly, irrespective of their internet connectivity.Faster initial application start time: Since the data is already stored locally, there is no need for initial data fetching from a remote server. This significantly reduces the application's startup time and allows users to engage with the application more quickly.Improved scalability with local queries: Embedded databases, such as RxDB, perform queries locally on the client device instead of relying on server round-trips. 
This reduces latency and enhances scalability, particularly when dealing with large datasets or high query volumes.Seamless integration with JavaScript frameworks: Embedded databases, including RxDB, integrate seamlessly with popular JavaScript frameworks like Angular, React.js, Vue.js, and Svelte. This compatibility allows developers to leverage the capabilities of these frameworks while benefiting from embedded database functionality.Running queries locally has low latency: With an embedded database, queries are executed locally on the client device, resulting in minimal latency. This improves the overall performance and responsiveness of the application.Data is portable and always accessible by the user: Embedded databases enable data portability, allowing users to seamlessly transition between devices while maintaining their data and application state. This ensures that data is always accessible and available to the user.Using a local database for state management: Instead of relying on additional state management libraries like Redux or NgRx, an embedded database can be used for local state management. This simplifies state management and ensures data consistency within the application. ","version":"Next","tagName":"h2"},{"title":"Why RxDB as an Embedded Database for Real-time Applications","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#why-rxdb-as-an-embedded-database-for-real-time-applications","content":" RxDB is a JavaScript-based embedded database that offers numerous advantages for building real-time applications. Let's explore why RxDB is a compelling choice: Observable Queries (RxJS): RxDB leverages the power of Observables through RxJS, enabling developers to create queries that automatically update the user interface on data changes. This reactive approach simplifies UI updates and ensures real-time synchronization of data.NoSQL JSON Documents for UIs: RxDB utilizes NoSQL (JSON) documents as its data model, aligning seamlessly with the requirements of modern UI development. JavaScript's native support for JSON objects makes NoSQL documents a natural fit for UI-driven applications.Better TypeScript Support Compared to SQL: RxDB's NoSQL approach provides excellent TypeScript support. The flexibility of working with JSON objects enables robust typing and enhanced development experiences, ensuring type safety and reducing runtime errors.Observable Document Fields: RxDB allows developers to observe individual fields within documents. This granularity enables efficient tracking of specific data changes and facilitates targeted UI updates, enhancing performance and responsiveness.Made in JavaScript, Optimized for JavaScript Applications: Being built entirely in JavaScript, RxDB is optimized for JavaScript applications. It leverages JavaScript's capabilities and integrates seamlessly with JavaScript frameworks and libraries, making it a natural choice for JavaScript developers.Optimized Observed Queries with the EventReduce Algorithm: RxDB incorporates the EventReduce algorithm to optimize observed queries. This algorithm reduces the number of emitted events during query execution, resulting in enhanced query performance and reduced overhead.Built-in Multi-tab Support: RxDB provides built-in multi-tab support, allowing multiple instances of an application to share and synchronize data seamlessly. 
This feature enables collaborative and real-time scenarios across multiple browser tabs or windows.Handling of Schema Changes across Multiple Client Devices: With RxDB, handling schema changes across multiple client devices becomes straightforward. RxDB's schema migration capabilities ensure that applications can seamlessly adapt to evolving data structures, providing a consistent experience across different devices.Storing Documents Compressed: RxDB offers the ability to store documents in a compressed format. This reduces the storage footprint and improves performance, especially when dealing with large datasets.Flexible Storage Layer and Cross-platform Compatibility: RxDB provides a flexible storage layer that can be reused across various platforms, including Electron.js, React Native, hybrid apps (via Capacitor.js), and browsers. This cross-platform compatibility simplifies development and enables code reuse across different environments.Replication Algorithm for Backend Compatibility: RxDB's replication algorithm is open-source and can be made compatible with various backend solutions, such as self-hosted servers, Firebase, CouchDB, NATS, WebSockets, and more. This flexibility allows developers to choose their preferred backend infrastructure while benefiting from RxDB's embedded database capabilities. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#follow-up","content":" To further explore RxDB and leverage its capabilities as an embedded database, the following resources can be helpful: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which offers step-by-step instructions for setting up and using RxDB in your projects. By utilizing RxDB as an embedded database in UI applications, developers can harness the power of efficient data management, real-time updates, and enhanced user experiences. RxDB's features and benefits make it a compelling choice for building modern, responsive, and scalable applications. ","version":"Next","tagName":"h2"},{"title":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","type":0,"sectionRef":"#","url":"/articles/firebase-realtime-database-alternative.html","content":"","keywords":"","version":"Next"},{"title":"Why RxDB Is an Excellent Firebase Realtime Database Alternative","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#why-rxdb-is-an-excellent-firebase-realtime-database-alternative","content":" ","version":"Next","tagName":"h2"},{"title":"1. Complete Offline-First Experience","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#1-complete-offline-first-experience","content":" Unlike Firebase Realtime Database, which relies on central infrastructure to process data, RxDB is fully embedded within your client application (including browsers, Node.js, Electron, and React Native). This design means your app stays completely functional offline, since all data reads and writes happen locally. When connectivity is restored, RxDB's syncing framework automatically reconciles local changes with your remote backend. 
","version":"Next","tagName":"h3"},{"title":"2. Freedom to Use Any Server or Cloud","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#2-freedom-to-use-any-server-or-cloud","content":" While Firebase Realtime Database ties you into Google's ecosystem, RxDB allows you to choose any hosting environment. You can: Host your data on your own servers or private cloud.Integrate with relational databases like PostgreSQL or other NoSQL options such as CouchDB.Build custom endpoints using REST, GraphQL, or any other protocol. This flexibility ensures you're not locked into a single vendor and can adapt your backend strategy as your project evolves. ","version":"Next","tagName":"h3"},{"title":"3. Advanced Conflict Handling","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#3-advanced-conflict-handling","content":" Firebase Realtime Database typically updates data with a simple last-in-wins approach. RxDB, on the other hand, lets you implement more sophisticated conflict resolution logic. Using revisions and conflict handlers, RxDB can merge concurrent edits or preserve multiple versions—ensuring your application remains consistent even when multiple clients modify the same data at the same time. ","version":"Next","tagName":"h3"},{"title":"4. Lower Cloud Costs for Read-Heavy Apps","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#4-lower-cloud-costs-for-read-heavy-apps","content":" When you rely on Firebase Realtime Database, each query or listener can translate into ongoing reads, potentially running up your monthly bill. With RxDB, all queries are performed locally. Your app only communicates with the backend to sync document changes, significantly reducing bandwidth and hosting expenses for applications that frequently read data. ","version":"Next","tagName":"h3"},{"title":"5. Powerful Local Queries","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#5-powerful-local-queries","content":" If you've hit Firebase Realtime Database's querying limits, RxDB offers a far more robust approach to data retrieval. You can: Define custom indexes for faster local lookups.Perform sophisticated filters, joins, or full-text searches right on the client.Subscribe to real-time data updates through RxDB's reactive query engine. Because these operations happen locally, your UI updates instantly, providing a snappy user experience. ","version":"Next","tagName":"h3"},{"title":"6. True Offline Initialization","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#6-true-offline-initialization","content":" While Firebase offers some offline caching, it often requires an initial connection for authentication or to seed local data. RxDB, however, is built to handle an offline-start scenario. Users can begin working with the application immediately, regardless of connectivity, and any modifications they make will sync once the network is available again. ","version":"Next","tagName":"h3"},{"title":"7. 
Works Everywhere JavaScript Runs","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#7-works-everywhere-javascript-runs","content":" One of RxDB's core strengths is its ability to run in any JavaScript environment. Whether you're building a web app that uses IndexedDB in the browser, an Electron desktop program, or a React Native mobile application, RxDB's swappable storage adapts to your runtime of choice. This consistency makes code-sharing and cross-platform development far simpler than being tied to a single backend system. ","version":"Next","tagName":"h3"},{"title":"How RxDB's Syncing Mechanism Operates","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#how-rxdbs-syncing-mechanism-operates","content":" RxDB employs its own Replication Protocol to manage data flow between your client and remote servers. Replication revolves around: Pull: Retrieving updated or newly created documents from the server.Push: Sending local changes to the backend for persistence.Live Updates: Continuously streaming changes to and from the backend for real-time synchronization. ","version":"Next","tagName":"h2"},{"title":"Sample Code: Sync RxDB With a Custom Endpoint","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#sample-code-sync-rxdb-with-a-custom-endpoint","content":" import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { replicateRxCollection } from 'rxdb/plugins/replication'; async function initDB() { const db = await createRxDatabase({ name: 'localdb', storage: getRxStorageDexie(), multiInstance: true, eventReduce: true }); await db.addCollections({ tasks: { schema: { title: 'task schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, title: { type: 'string' }, complete: { type: 'boolean' } } } } }); // Start a custom replication replicateRxCollection({ collection: db.tasks, replicationIdentifier: 'custom-tasks-api', push: { handler: async (docs) => { // post local changes to your server const resp = await fetch('https://yourapi.com/tasks/push', { method: 'POST', body: JSON.stringify({ changes: docs }) }); return await resp.json(); // return conflicting documents if any } }, pull: { handler: async (lastCheckpoint, batchSize) => { // fetch new/updated items from your server const response = await fetch( `https://yourapi.com/tasks/pull?checkpoint=${JSON.stringify( lastCheckpoint )}&limit=${batchSize}` ); return await response.json(); } }, live: true }); return db; } ","version":"Next","tagName":"h2"},{"title":"Setting Up P2P Replication Over WebRTC","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#setting-up-p2p-replication-over-webrtc","content":" In addition to using a centralized backend, RxDB supports peer-to-peer synchronization through WebRTC, enabling devices to share data directly. 
import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const webrtcPool = await replicateWebRTC({ collection: db.tasks, topic: 'p2p-topic-123', connectionHandlerCreator: getConnectionHandlerSimplePeer({ signalingServerUrl: 'wss://signaling.rxdb.info/', wrtc: require('node-datachannel/polyfill'), webSocketConstructor: require('ws').WebSocket }) }); webrtcPool.error$.subscribe((error) => { console.error('P2P error:', error); }); Here, any client that joins the same topic communicates changes to other peers, all without requiring a traditional client-server model. ","version":"Next","tagName":"h3"},{"title":"Quick Steps to Get Started","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#quick-steps-to-get-started","content":" Install RxDB npm install rxdb rxjs Create a Local Database import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'myLocalDB', storage: getRxStorageDexie() }); Add a Collection await db.addCollections({ notes: { schema: { title: 'notes schema', version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, content: { type: 'string' } } } } }); Synchronize Use one of the Replication Plugins to connect with your preferred backend. ","version":"Next","tagName":"h2"},{"title":"Is RxDB the Right Solution for You?","type":1,"pageTitle":"RxDB - The Firebase Realtime Database Alternative That Can Sync With Your Own Backend","url":"/articles/firebase-realtime-database-alternative.html#is-rxdb-the-right-solution-for-you","content":" Long Offline Use: If your users need to work without an internet connection, RxDB's built-in offline-first design stands out compared to Firebase Realtime Database's partial offline approach.Custom or Complex Queries: RxDB lets you perform your queries locally, define indexing, and handle even complex transformations locally - no extra call to an external API.Avoid Vendor Lock-In: If you anticipate needing to move or adapt your backend later, you can do so without rewriting how your client manages its data.Peer-to-Peer Collaboration: Whether you need quick demos or real production use, WebRTC replication can link your users directly without central coordination of data storage. ","version":"Next","tagName":"h3"},{"title":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","type":0,"sectionRef":"#","url":"/articles/frontend-database.html","content":"","keywords":"","version":"Next"},{"title":"Why you might want to store data in the frontend","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#why-you-might-want-to-store-data-in-the-frontend","content":" ","version":"Next","tagName":"h2"},{"title":"Offline accessibility","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#offline-accessibility","content":" One compelling reason to store data in the frontend is to enable offline accessibility. By leveraging a frontend database, applications can cache essential data locally, allowing users to continue using the application even when an internet connection is unavailable. 
This feature is particularly useful for mobile applications or web apps with limited or intermittent connectivity. ","version":"Next","tagName":"h3"},{"title":"Caching","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#caching","content":" Frontend databases also serve as efficient caching mechanisms. By storing frequently accessed data locally, applications can minimize network requests and reduce latency, resulting in faster and more responsive user experiences. Caching is particularly beneficial for applications that heavily rely on remote data or perform computationally intensive operations. ","version":"Next","tagName":"h3"},{"title":"Decreased initial application start time","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#decreased-initial-application-start-time","content":" Storing data in the frontend decreases the initial application start time because the data is already present locally. By eliminating the need to fetch data from a server during startup, applications can quickly render the UI and provide users with an immediate interactive experience. This is especially advantageous for applications with large datasets or complex data retrieval processes. ","version":"Next","tagName":"h3"},{"title":"Password encryption for local data","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#password-encryption-for-local-data","content":" Security is a crucial aspect of data storage. With a front end database, developers can encrypt sensitive local data, such as user credentials or personal information, using encryption algorithms. This ensures that even if the device is compromised, the data remains securely stored and protected. ","version":"Next","tagName":"h3"},{"title":"Local database for state management","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#local-database-for-state-management","content":" Frontend databases provide an alternative to traditional state management libraries like Redux or NgRx. By utilizing a local database, developers can store and manage application state directly in the frontend, eliminating the need for additional libraries. This approach simplifies the codebase, reduces complexity, and provides a more straightforward data flow within the application. ","version":"Next","tagName":"h3"},{"title":"Low-latency local queries","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#low-latency-local-queries","content":" Frontend databases enable low-latency queries that run entirely on the client's device. Instead of relying on server round-trips for each query, the database executes queries locally, resulting in faster response times. This is particularly beneficial for applications that require real-time updates or frequent data retrieval. 
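As a small sketch (assuming a products collection has already been defined), such a query runs in a single call against the local storage and never touches the network:
// executed entirely on the client's device
const cheapProducts = await db.products.find({
    selector: {
        price: { $lt: 10 }
    }
}).exec();
console.log('found ' + cheapProducts.length + ' documents without a server round-trip');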
","version":"Next","tagName":"h3"},{"title":"Building realtime applications with local data","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#building-realtime-applications-with-local-data","content":" Realtime applications often require immediate updates based on data changes. By storing data locally and utilizing a frontend database, developers can build realtime applications more easily. The database can observe data changes and automatically update the UI, providing a seamless and responsive user experience. ","version":"Next","tagName":"h3"},{"title":"Easier integration with JavaScript frameworks","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#easier-integration-with-javascript-frameworks","content":" Frontend databases, including RxDB, are designed to integrate seamlessly with popular JavaScript frameworks such as Angular, React.js, Vue.js, and Svelte. These databases offer well-defined APIs and support that align with the specific requirements of these frameworks, enabling developers to leverage the full potential of the frontend database within their preferred development environment. ","version":"Next","tagName":"h3"},{"title":"Simplified replication of database state","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#simplified-replication-of-database-state","content":" Replicating database state between the frontend and backend can be challenging, especially when dealing with complex REST routes. Frontend databases, however, provide simple mechanisms for replicating database state. They offer intuitive replication algorithms that facilitate data synchronization between the frontend and backend, reducing the complexity and potential pitfalls associated with complex REST-based replication. ","version":"Next","tagName":"h3"},{"title":"Improved scalability","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#improved-scalability","content":" Frontend databases offer improved scalability compared to traditional SQL databases. By leveraging the computational capabilities of client devices, the burden on server resources is reduced. Queries and operations are performed locally, minimizing the need for server round-trips and enabling applications to scale more efficiently. ","version":"Next","tagName":"h3"},{"title":"Why SQL databases are not a good fit for the front end of an application","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#why-sql-databases-are-not-a-good-fit-for-the-front-end-of-an-application","content":" While SQL databases excel in server-side scenarios, they pose limitations when used on the frontend. Here are some reasons why SQL databases are not well-suited for frontend applications: ","version":"Next","tagName":"h2"},{"title":"Push/Pull based vs. reactive","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#pushpull-based-vs-reactive","content":" SQL databases typically rely on a push/pull model, where the server pushes data to the client upon request. 
This approach is not inherently reactive, as it requires explicit requests for data updates. In contrast, frontend applications often require reactive data flows, where changes in data trigger automatic updates in the UI. Frontend databases, like RxDB, provide reactive capabilities that seamlessly integrate with the dynamic nature of frontend development. ","version":"Next","tagName":"h3"},{"title":"Initialization time and performance","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#initialization-time-and-performance","content":" SQL databases designed for server-side usage tend to have larger build sizes and initialization times, making them less efficient for browser-based applications. Frontend databases, on the other hand, directly leverage browser APIs like IndexedDB, OPFS, and WebWorker, resulting in leaner builds and faster initialization times. Often the queries are so fast that it is not even necessary to implement a loading spinner. ","version":"Next","tagName":"h3"},{"title":"Build size considerations","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#build-size-considerations","content":" Server-side SQL databases typically come with a significant build size, which can be impractical for browser applications where code size optimization is crucial. Frontend databases, on the other hand, are specifically designed to operate within the constraints of browser environments, ensuring efficient resource utilization and smaller build sizes. For example, the SQLite WebAssembly file alone has a size of over 0.8 megabytes, with an additional 0.2 megabytes of JavaScript code for the connection. ","version":"Next","tagName":"h3"},{"title":"Why RxDB is a good fit for the frontend","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#why-rxdb-is-a-good-fit-for-the-frontend","content":" RxDB is a powerful frontend JavaScript database that addresses the limitations of SQL databases and provides an optimal solution for frontend data storage. Let's explore why RxDB is an excellent fit for frontend applications: ","version":"Next","tagName":"h2"},{"title":"Made in JavaScript, optimized for JavaScript applications","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#made-in-javascript-optimized-for-javascript-applications","content":" RxDB is designed and optimized for JavaScript applications. Built using JavaScript itself, RxDB offers seamless integration with JavaScript frameworks and libraries, allowing developers to leverage their existing JavaScript knowledge and skills. ","version":"Next","tagName":"h3"},{"title":"NoSQL (JSON) documents for UIs","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#nosql-json-documents-for-uis","content":" RxDB adopts a NoSQL approach, using JSON documents as its primary data structure. This aligns well with the JavaScript ecosystem, as JavaScript natively works with JSON objects. By using NoSQL documents, RxDB provides a more natural and intuitive data model for UI-centric applications. 
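For illustration (assuming a users collection whose schema allows a nested settings object), a document is written and read back as plain JSON, with no mapping layer in between:
const aliceDoc = await db.users.insert({
    id: 'user-alice',
    name: 'Alice',
    settings: { theme: 'dark', notifications: true } // nested JSON is stored as-is
});
console.log(aliceDoc.toJSON()); // logs the stored JSON document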
","version":"Next","tagName":"h3"},{"title":"Better TypeScript support compared to SQL","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#better-typescript-support-compared-to-sql","content":" TypeScript has become increasingly popular for building frontend applications. RxDB provides excellent TypeScript support, allowing developers to leverage static typing and benefit from enhanced code quality and tooling. This is particularly advantageous when compared to SQL databases, which often have limited TypeScript support. ","version":"Next","tagName":"h3"},{"title":"Observable Queries for automatic UI updates","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#observable-queries-for-automatic-ui-updates","content":" RxDB introduces the concept of observable queries, powered by RxJS. Observable queries automatically update the UI whenever there are changes in the underlying data. This reactive approach eliminates the need for manual UI updates and ensures that the frontend remains synchronized with the database state. ","version":"Next","tagName":"h3"},{"title":"Optimized observed queries with the EventReduce Algorithm","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#optimized-observed-queries-with-the-eventreduce-algorithm","content":" RxDB optimizes observed queries with its EventReduce Algorithm. This algorithm intelligently reduces redundant events and ensures that UI updates are performed efficiently. By minimizing unnecessary re-renders, RxDB significantly improves performance and responsiveness in frontend applications. const query = myCollection.find({ selector: { age: { $gt: 21 } } }); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); ","version":"Next","tagName":"h3"},{"title":"Observable document fields","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#observable-document-fields","content":" RxDB supports observable document fields, enabling developers to track changes at a granular level within documents. By observing specific fields, developers can reactively update the UI when those fields change, ensuring a responsive and synchronized frontend interface. myDocument.firstName$.subscribe(newName => console.log('name is: ' + newName)); ","version":"Next","tagName":"h3"},{"title":"Storing Documents Compressed","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#storing-documents-compressed","content":" RxDB provides the option to store documents in a compressed format, reducing storage requirements and improving overall database performance. Compressed storage offers benefits such as reduced disk space usage, faster data read/write operations, and improved network transfer speeds, making it an essential feature for efficient frontend data storage. 
","version":"Next","tagName":"h3"},{"title":"Built-in Multi-tab support","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#built-in-multi-tab-support","content":" RxDB offers built-in multi-tab support, allowing data synchronization and state management across multiple browser tabs. This feature ensures consistent data access and synchronization, enabling users to work seamlessly across different tabs without conflicts or data inconsistencies. ","version":"Next","tagName":"h3"},{"title":"Replication Algorithm can be made compatible with any backend","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#replication-algorithm-can-be-made-compatible-with-any-backend","content":" RxDB's realtime replication algorithm is designed to be flexible and compatible with various backend systems. Whether you're using your own servers, Firebase, CouchDB, NATS, WebSocket, or any other backend, RxDB can be seamlessly integrated and synchronized with the backend system of your choice. ","version":"Next","tagName":"h3"},{"title":"Flexible storage layer for code reuse","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#flexible-storage-layer-for-code-reuse","content":" RxDB provides a flexible storage layer that enables code reuse across different platforms. Whether you're building applications with Electron.js, React Native, hybrid apps using Capacitor.js, or traditional web browsers, RxDB allows you to reuse the same codebase and leverage the power of a frontend database across different environments. ","version":"Next","tagName":"h3"},{"title":"Handling schema changes in distributed environments","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#handling-schema-changes-in-distributed-environments","content":" In distributed environments where data is stored on multiple client devices, handling schema changes can be challenging. RxDB tackles this challenge by providing robust mechanisms for handling schema changes. It ensures that schema updates propagate smoothly across devices, maintaining data integrity and enabling seamless schema evolution. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#follow-up","content":" To further explore RxDB and get started with using it in your frontend applications, consider the following resources: RxDB Quickstart: A step-by-step guide to quickly set up RxDB in your project and start leveraging its features.RxDB GitHub Repository: The official repository for RxDB, where you can find the code, examples, and community support. By adopting RxDB as your frontend database, you can unlock the full potential of frontend data storage and empower your applications with offline accessibility, caching, improved performance, and seamless data synchronization. RxDB's JavaScript-centric approach and powerful features make it an ideal choice for frontend developers seeking efficient and scalable data storage solutions. 
","version":"Next","tagName":"h2"},{"title":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","type":0,"sectionRef":"#","url":"/articles/firestore-alternative.html","content":"","keywords":"","version":"Next"},{"title":"What Makes RxDB a Great Firestore Alternative?","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#what-makes-rxdb-a-great-firestore-alternative","content":" Firestore is convenient for many projects, but it does lock you into Google's ecosystem. Below are some of the key advantages you gain by choosing RxDB: ","version":"Next","tagName":"h2"},{"title":"1. Fully Offline-First","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#1-fully-offline-first","content":" RxDB runs directly in your client application (browser, Node.js, Electron, React Native, etc.). Data is stored locally, so your application remains fully functional even when offline. When the device returns online, RxDB's flexible replication protocol synchronizes your local changes with any remote endpoint. ","version":"Next","tagName":"h3"},{"title":"2. Freedom to Use Any Backend","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#2-freedom-to-use-any-backend","content":" Unlike Firestore, RxDB doesn't require a proprietary hosting service. You can: Host your data on your own server (Node.js, Go, Python, etc.).Use existing databases like PostgreSQL, CouchDB, or MongoDB with custom endpoints.Implement a custom GraphQL or REST-based API for syncing. This backend-agnostic approach protects you from vendor lock-in. Your application's client-side data storage remains consistent; only your replication logic (or plugin) changes if you switch servers. ","version":"Next","tagName":"h3"},{"title":"3. Advanced Conflict Resolution","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#3-advanced-conflict-resolution","content":" Firestore enforces a last-write-wins conflict resolution strategy. This might cause issues if multiple users or devices update the same data in complex ways. RxDB lets you: Implement custom conflict resolution via revisions.Store partial merges, track versions, or preserve multiple user edits.Fine-tune how your data merges to ensure consistency across distributed systems. ","version":"Next","tagName":"h3"},{"title":"4. Reduced Cloud Costs","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#4-reduced-cloud-costs","content":" Firestore queries often count as billable reads. With RxDB, queries run locally against your local state - no repeated network calls or extra charges. You pay only for the data actually synced, not every read. For read-heavy apps, using RxDB as a Firestore alternative can significantly reduce costs. ","version":"Next","tagName":"h3"},{"title":"5. No Limits on Query Features","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#5-no-limits-on-query-features","content":" Firestore's query engine is limited by certain constraints (e.g., no advanced joins, limited indexing). 
With RxDB: NoSQL data is stored locally, and you can define any indexes you need.Perform complex queries, run full-text search, or do aggregated transformations or even vector search.Use RxDB's reactivity to subscribe to query results in real time. ","version":"Next","tagName":"h3"},{"title":"6. True Offline-Start Support","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#6-true-offline-start-support","content":" While Firestore does have offline caching, it often requires an online check at app initialization for authentication. RxDB is truly offline-first; you can launch the app and write data even if the device never goes online initially. It's ready whenever the user is. ","version":"Next","tagName":"h3"},{"title":"7. Cross-Platform: Any JavaScript Runtime","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#7-cross-platform-any-javascript-runtime","content":" RxDB is designed to run in any environment that can execute JavaScript. Whether you’re building a web app in the browser, an Electron desktop application, a React Native mobile app, or a command-line tool with Node.js, RxDB’s storage layer is swappable to fit your runtime’s capabilities. In the browser, store data in IndexedDB or OPFS.In Node.js, use LevelDB or other supported storages.In React Native, pick from a range of adapters suited for mobile devices.In Electron, rely on fast local storage with zero changes to your application code. ","version":"Next","tagName":"h3"},{"title":"How Does RxDB's Sync Work?","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#how-does-rxdbs-sync-work","content":" RxDB replication is powered by its Replication Protocol. This simple yet robust protocol enables: Pull: Fetch new or updated documents from the server.Push: Send local changes back to the server.Live Real-Time: Once you're caught up, you can opt for event-based streaming instead of continuous polling. 
Code Example: Sync RxDB with a Custom Backend import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { replicateRxCollection } from 'rxdb/plugins/replication'; async function initDB() { const db = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), multiInstance: true, eventReduce: true }); await db.addCollections({ tasks: { schema: { title: 'task schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, title: { type: 'string' }, done: { type: 'boolean' } } } } }); // Start a custom REST-based replication replicateRxCollection({ collection: db.tasks, replicationIdentifier: 'my-tasks-rest-api', push: { handler: async (documents) => { // Send docs to your REST endpoint const res = await fetch('https://myapi.com/push', { method: 'POST', body: JSON.stringify({ docs: documents }) }); // Return conflicts if any return await res.json(); } }, pull: { handler: async (lastCheckpoint, batchSize) => { // Fetch from your REST endpoint const res = await fetch(`https://myapi.com/pull?checkpoint=${JSON.stringify(lastCheckpoint)}&limit=${batchSize}`); return await res.json(); } }, live: true // keep watching for changes }); return db; } By swapping out the handler implementations or using an official plugin (e.g., GraphQL, CouchDB, Firestore replication, etc.), you can adapt to any backend or data source. RxDB thus becomes a flexible alternative to Firestore while maintaining real-time capabilities. ","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB as a Firestore Alternative","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#getting-started-with-rxdb-as-a-firestore-alternative","content":" ","version":"Next","tagName":"h2"},{"title":"Install RxDB:","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#install-rxdb","content":" npm install rxdb rxjs ","version":"Next","tagName":"h3"},{"title":"Create a Database:","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#create-a-database","content":" import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie() }); ","version":"Next","tagName":"h3"},{"title":"Define Collections:","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#define-collections","content":" await db.addCollections({ items: { schema: { title: 'items schema', version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, text: { type: 'string' } } } } }); ","version":"Next","tagName":"h3"},{"title":"Sync","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#sync","content":" Use a Replication Plugin to connect with a custom backend or existing database. For a Firestore-specific approach, RxDB Firestore Replication also exists if you want to combine local indexing and advanced queries with a Cloud Firestore backend. But if you really want to replace Firestore entirely - just point RxDB to your new backend. 
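As an illustration, a sketch of what wiring up the Firestore replication plugin can look like (the Firebase project details are placeholders, and the exact option names may differ slightly between RxDB versions, so check the plugin documentation):
import { initializeApp } from 'firebase/app';
import { getFirestore, collection } from 'firebase/firestore';
import { replicateFirestore } from 'rxdb/plugins/replication-firestore';

const firebaseApp = initializeApp({ projectId: 'my-project' /* ...remaining Firebase config... */ });
const firestoreDatabase = getFirestore(firebaseApp);
const firestoreCollection = collection(firestoreDatabase, 'items');

const replicationState = replicateFirestore({
    replicationIdentifier: 'firestore-items-replication',
    collection: db.items, // the RxDB collection defined above
    firestore: {
        projectId: 'my-project',
        database: firestoreDatabase,
        collection: firestoreCollection
    },
    pull: {},
    push: {},
    live: true
});
replicationState.error$.subscribe(err => console.error('replication error:', err));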
","version":"Next","tagName":"h3"},{"title":"Example: Start a WebRTC P2P Replication","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#example-start-a-webrtc-p2p-replication","content":" In addition to syncing with a central server, RxDB also supports pure peer-to-peer replication using WebRTC. This can be invaluable for scenarios where clients need to sync data directly without a master server. import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const replicationPool = await replicateWebRTC({ collection: db.tasks, topic: 'my-p2p-room', // Clients with the same topic will sync with each other. connectionHandlerCreator: getConnectionHandlerSimplePeer({ // Use your own or the official RxDB signaling server signalingServerUrl: 'wss://signaling.rxdb.info/', // Node.js requires a polyfill for WebRTC & WebSocket wrtc: require('node-datachannel/polyfill'), webSocketConstructor: require('ws').WebSocket }), pull: {}, // optional pull config push: {} // optional push config }); // The replicationPool manages all connected peers replicationPool.error$.subscribe(err => { console.error('P2P Sync Error:', err); }); This example sets up a live P2P replication where any new peers joining the same topic automatically sync local data with each other, eliminating the need for a dedicated central server for the actual data exchange. ","version":"Next","tagName":"h3"},{"title":"Is RxDB Right for Your Project?","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#is-rxdb-right-for-your-project","content":" You want offline-first: If you need an offline-first app that starts offline, RxDB's local database approach and sync protocol excel at this.Your project is read-heavy: Reading from Firestore for every query can get expensive. With RxDB, reads are free and local; you only pay for writes or sync overhead.You need advanced queries: Firestore's query constraints may not suit complex data. With RxDB, you can define your own indexing logic or run arbitrary queries locally.You want no vendor lock-in: Easily transition from Firestore to your own server or another vendor - just change the replication layer. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB - The Firestore Alternative That Can Sync with Your Own Backend","url":"/articles/firestore-alternative.html#follow-up","content":" If you've been searching for a Firestore alternative that gives you the freedom to sync your data with any backend, offers robust offline-first capabilities, and supports truly customizable conflict resolution and queries, RxDB is worth exploring. You can adopt it seamlessly, ensure local reads, reduce costs, and stay in complete control of your data layer. Ready to dive in? Check out the RxDB Quickstart Guide, join our Discord community, and experience how RxDB can be the perfect local-first, real-time database solution for your next project. 
More resources: Replication ProtocolFirestore Replication PluginCustom Conflict ResolutionRxDB GitHub Repository ","version":"Next","tagName":"h2"},{"title":"ideas for articles","type":0,"sectionRef":"#","url":"/articles/ideas","content":"","keywords":"","version":"Next"},{"title":"Seo keywords:","type":1,"pageTitle":"ideas for articles","url":"/articles/ideas#seo-keywords","content":" X- "optimistic ui" X- "local database" (rddt done) X- "react-native encryption" X- "vue database" (rddt done) X- "jquery database" X- "vue indexeddb" X- "firebase realtime database alternative" (rddt done) X- "firestore alternative" (rddt done) X- "ionic storage" (rddt done) X- "local database" X- "offline database" X- "zero local first" X- "webrtc p2p" - 390 http://localhost:3000/replication-webrtc.html "supabase alternative" "reactjs storage" "store local storage" "react localstorage" "react-native storage" "supabase offline" - 260 "store array in localstorage", "localStorage array of objects" "real time web apps" - 170 "reactive database" - 210 "electron sqlite" "in browser database" - 90 "offline first app" - 260 "json based database" "react native sql" - 110 "sqlite electron" "localstorage vs indexeddb" "react native nosql database" - 30 "indexeddb library" - 260 "indexeddb encryption" - 90 "client side database" - 140 "webtransport vs websocket" "local first development" - 210 "local storage examples" "local vector database" - 590 "nosql json database" - 140 "mobile app database" - 590 "web based database" "json vs database" "livequery" - 210 "indexeddb storage limit" - 590 "indexeddb size limit" - 260 "indexeddb max size" - 590 "indexeddb limits" - 170 ","version":"Next","tagName":"h2"},{"title":"RxDB as a Database in a Flutter Application","type":0,"sectionRef":"#","url":"/articles/flutter-database.html","content":"","keywords":"","version":"Next"},{"title":"Overview of Flutter Mobile Applications","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#overview-of-flutter-mobile-applications","content":" Flutter is an open-source UI software development kit created by Google that allows developers to build high-performance mobile applications for iOS and Android platforms using a single codebase. Flutter's framework provides a wide range of widgets and tools that enable developers to create visually appealing and responsive applications. ","version":"Next","tagName":"h3"},{"title":"Importance of Databases in Flutter Applications","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#importance-of-databases-in-flutter-applications","content":" Databases play a vital role in Flutter applications by providing a persistent and reliable storage solution for storing and retrieving data. Whether it's user profiles, app settings, or complex data structures, a database helps in efficiently managing and organizing the application's data. Choosing the right database for a Flutter application can significantly impact the performance, scalability, and user experience of the app. ","version":"Next","tagName":"h3"},{"title":"Introducing RxDB as a Database Solution","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#introducing-rxdb-as-a-database-solution","content":" RxDB is a powerful NoSQL database solution that is designed to work seamlessly with JavaScript-based frameworks, such as Flutter. 
It stands for Reactive Database and offers a variety of features that make it an excellent choice for building Flutter applications. RxDB combines the simplicity of JavaScript's document-based database model with the reactive programming paradigm, enabling developers to build real-time and offline-first applications with ease. ","version":"Next","tagName":"h3"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#getting-started-with-rxdb","content":" To understand how RxDB can be utilized in a Flutter application, let's explore its core features and advantages. ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#what-is-rxdb","content":" RxDB is a client-side database built on top of IndexedDB, which is a low-level browser-based database API. It provides a simple and intuitive API for performing CRUD operations (Create, Read, Update, Delete) on documents. RxDB's underlying architecture allows for efficient handling of data synchronization between multiple clients and servers. ","version":"Next","tagName":"h3"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#reactive-data-handling","content":" One of the key strengths of RxDB is its reactive data handling. It leverages the power of Observables, a concept from reactive programming, to automatically update the UI in response to data changes. With RxDB, developers can define queries and subscribe to their results, ensuring that the UI is always in sync with the database. ","version":"Next","tagName":"h3"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#offline-first-approach","content":" RxDB follows an offline-first approach, making it ideal for building Flutter applications that need to function even without an internet connection. It allows data to be stored locally and seamlessly synchronizes it with the server when a connection is available. This ensures that users can access and interact with their data regardless of network availability. ","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#data-replication","content":" Data replication is a critical aspect of building distributed applications. RxDB provides robust replication capabilities that enable synchronization of data between different clients and servers. With its replication plugins, RxDB simplifies the process of setting up real-time data synchronization, ensuring consistency across all connected devices. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#observable-queries","content":" RxDB introduces the concept of observable queries, which are queries that automatically update when the underlying data changes. This feature is particularly useful for keeping the UI up to date with the latest data. By subscribing to an observable query, developers can receive real-time updates and reflect them in the user interface without manual intervention. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. 
Other Flutter Database Options","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#rxdb-vs-other-flutter-database-options","content":" When considering database options for Flutter applications, developers often come across alternatives such as SQLite or LokiJS. While these databases have their merits, RxDB offers several advantages over them. RxDB's seamless integration with Flutter, its offline-first approach, reactive data handling, and built-in data replication make it a compelling choice for building feature-rich and scalable Flutter applications. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in a Flutter Application","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#using-rxdb-in-a-flutter-application","content":" Now that we understand the core features of RxDB, let's explore how to integrate it into a Flutter application. ","version":"Next","tagName":"h2"},{"title":"How RxDB can run in Flutter","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#how-rxdb-can-run-in-flutter","content":" RxDB is written in TypeScript and compiled to JavaScript. To run it in a Flutter application, the flutter_qjs library is used to spawn a QuickJS JavaScript runtime. RxDB itself runs in that runtime and communicates with the Flutter Dart runtime. To store data persistently, the LokiJS RxStorage is used together with a custom storage adapter that persists the database inside of the shared_preferences data. To use RxDB, you have to create a compatible JavaScript file that creates your RxDatabase and starts some connectors which are used by Flutter to communicate with the JavaScript RxDB database via setFlutterRxDatabaseConnector(). import { createRxDatabase } from 'rxdb'; import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; import { setFlutterRxDatabaseConnector, getLokijsAdapterFlutter } from 'rxdb/plugins/flutter'; // do all database creation stuff in this method. async function createDB(databaseName) { // create the RxDatabase const db = await createRxDatabase({ // the database.name is variable so we can change it on the flutter side name: databaseName, storage: getRxStorageLoki({ adapter: getLokijsAdapterFlutter() }), multiInstance: false }); await db.addCollections({ heroes: { schema: { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, name: { type: 'string', maxLength: 100 }, color: { type: 'string', maxLength: 30 } }, indexes: ['name'], required: ['id', 'name', 'color'] } } }); return db; } // start the connector so that flutter can communicate with the JavaScript process setFlutterRxDatabaseConnector( createDB ); Before you can use the JavaScript code, you have to bundle it into a single .js file. In this example we do that with webpack in an npm script here, which bundles everything into the javascript/dist/index.js file. To allow Flutter to access that file during runtime, add it to the assets inside of your pubspec.yaml: flutter: assets: - javascript/dist/index.js Also, you need to install RxDB in the Flutter part of the application. First, you have to add the rxdb Dart package as a Flutter dependency. Currently, the package is not published on the Dart pub.dev registry. Instead, you have to install it from the local filesystem inside of your RxDB npm installation. 
# inside of pubspec.yaml dependencies: rxdb: path: path/to/your/node_modules/rxdb/src/plugins/flutter/dart Afterwards you can import the rxdb library in your dart code and connect to the JavaScript process from there. For reference, check out the lib/main.dart file. import 'package:rxdb/rxdb.dart'; // start the javascript process and connect to the database RxDatabase database = await getRxDatabase("javascript/dist/index.js", databaseName); // get a collection RxCollection collection = database.getCollection('heroes'); // insert a document RxDocument document = await collection.insert({ "id": "zflutter-${DateTime.now()}", "name": nameController.text, "color": colorController.text }); // create a query RxQuery<RxHeroDocType> query = RxDatabaseState.collection.find(); // create list to store query results List<RxDocument<RxHeroDocType>> documents = []; // subscribe to a query query.$().listen((results) { setState(() { documents = results; }); }); ","version":"Next","tagName":"h2"},{"title":"Different RxStorage layers for RxDB","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB offers multiple storage options, known as RxStorage layers, to store data locally. These options include: LokiJS RxStorage: LokiJS is an in-memory database that can be used as a storage layer for RxDB. It provides fast and efficient in-memory data management capabilities.SQLite RxStorage: SQLite is a popular and widely used embedded database that offers robust storage capabilities. RxDB utilizes SQLite as a storage layer to persist data on the device.Memory RxStorage: As the name suggests, Memory RxStorage stores data in memory. While this option does not provide persistence, it can be useful for temporary or cache-based data storage. By choosing the appropriate RxStorage layer based on the specific requirements of the application, developers can optimize performance and storage efficiency. ","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" One of the key strengths of RxDB is its ability to synchronize data between multiple clients and servers seamlessly. Let's explore how this synchronization can be achieved. ","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#offline-first-approach-1","content":" RxDB's offline-first approach ensures that data can be accessed and modified even when there is no internet connection. Changes made offline are automatically synchronized with the server once a connection is reestablished. This ensures data consistency across all devices, providing a seamless user experience. ","version":"Next","tagName":"h3"},{"title":"RxDB Replication Plugins","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#rxdb-replication-plugins","content":" RxDB provides replication plugins that simplify the process of setting up data synchronization between clients and servers. These plugins offer various synchronization strategies, such as one-way replication, two-way replication, and conflict resolution mechanisms. 
By configuring the appropriate replication plugin, developers can easily establish real-time data synchronization in their Flutter applications. ","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#advanced-rxdb-features-and-techniques","content":" RxDB offers a range of advanced features and techniques that enhance its functionality and performance. Let's explore a few of these features: ","version":"Next","tagName":"h2"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#indexing-and-performance-optimization","content":" Indexing is a technique used to optimize query performance by creating indexes on specific fields. RxDB allows developers to define indexes on document fields, improving the efficiency of queries and data retrieval. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#encryption-of-local-data","content":" To ensure data privacy and security, RxDB supports encryption of local data. By encrypting the data stored on the device, developers can protect sensitive information and prevent unauthorized access. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#change-streams-and-event-handling","content":" RxDB provides change streams, which emit events whenever data changes occur. By leveraging change streams, developers can implement custom event handling logic, such as updating the UI or triggering background processes, in response to specific data changes. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#json-key-compression","content":" To minimize storage requirements and optimize performance, RxDB offers JSON key compression. This feature reduces the size of keys used in the database, resulting in more efficient storage and improved query performance. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#conclusion","content":" RxDB offers a powerful and flexible database solution for Flutter applications. With its offline-first approach, real-time data synchronization, and reactive data handling capabilities, RxDB simplifies the development of feature-rich and scalable Flutter applications. By integrating RxDB into your Flutter projects, you can leverage its advanced features and techniques to build responsive and data-driven applications that provide an exceptional user experience. 
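As a small sketch on the JavaScript side of this setup (using the heroes collection from the example above), subscribing to a collection's change stream looks like this:
// emits one event for every insert, update or delete on the collection
const subscription = db.heroes.$.subscribe(changeEvent => {
    console.log('operation: ' + changeEvent.operation); // 'INSERT', 'UPDATE' or 'DELETE'
    console.log('document id: ' + changeEvent.documentId);
    // react here, e.g. refresh derived state or notify the Dart side
});

// stop listening when the events are no longer needed
subscription.unsubscribe();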
note You can find the source code for an example RxDB Flutter Application at the github repo ","version":"Next","tagName":"h2"},{"title":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","type":0,"sectionRef":"#","url":"/articles/in-memory-nosql-database.html","content":"","keywords":"","version":"Next"},{"title":"Speed and Performance Benefits","type":1,"pageTitle":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","url":"/articles/in-memory-nosql-database.html#speed-and-performance-benefits","content":" One of the key advantages of using RxDB as an in-memory NoSQL database is its ability to leverage in-memory storage for faster database operations. By storing data directly in memory, database operations can be performed significantly faster compared to traditional disk-based databases. This is especially important for real-time applications where every millisecond counts. With RxDB, developers can achieve near-instantaneous data access and manipulation, enabling highly responsive user experiences. Additionally, RxDB eliminates disk I/O bottlenecks that are typically associated with traditional databases. In traditional databases, disk reads and writes can become a bottleneck as the amount of data grows. In contrast, an in-memory database like RxDB keeps the entire dataset in RAM, eliminating disk access overhead. This makes it an excellent choice for applications dealing with real-time analytics, high-throughput data processing, and caching. ","version":"Next","tagName":"h2"},{"title":"Persistence Options","type":1,"pageTitle":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","url":"/articles/in-memory-nosql-database.html#persistence-options","content":" While RxDB offers an in-memory storage adapter, it also offers persistence storages. Adapters such as IndexedDB, SQLite, and OPFS enable developers to persist data locally in the browser, making applications accessible even when offline. This hybrid approach combines the benefits of in-memory performance with data durability, providing the best of both worlds. Developers can choose the adapter that best suits their needs, balancing the speed of in-memory storage with the long-term data persistence required for certain applications. import { createRxDatabase } from 'rxdb'; import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMemory() }); Also the memory mapped RxStorage exists as a wrapper around any other RxStorage. The wrapper creates an in-memory storage that is used for query and write operations. This memory instance is replicated with the underlying storage for persistence. The main reason to use this is to improve initial page load and query/write times. This is mostly useful in browser based applications. ","version":"Next","tagName":"h2"},{"title":"Use Cases for RxDB","type":1,"pageTitle":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","url":"/articles/in-memory-nosql-database.html#use-cases-for-rxdb","content":" RxDB's capabilities make it well-suited for various real-time applications. Some notable use cases include: Chat Applications and Real-Time Messaging: RxDB's in-memory performance and real-time synchronization capabilities make it an excellent choice for building chat applications and real-time messaging systems. 
Developers can ensure that messages are delivered and synchronized across multiple clients in real-time, providing a seamless and responsive chat experience. Collaborative Document Editors: RxDB's ability to handle data streams and propagate changes in real-time makes it ideal for collaborative document editing. Multiple users can simultaneously edit a document, and their changes are instantly synchronized, allowing for real-time collaboration and ensuring that everyone has the most up-to-date version of the document. Real-Time Analytics Dashboards: RxDB's speed and scalability make it a valuable tool for real-time analytics dashboards. It can handle high volumes of data and perform complex analytics operations in real-time, providing instant insights and visualizations to users. In conclusion, RxDB serves as a powerful in-memory NoSQL database that empowers developers to build real-time applications with exceptional speed, flexibility, and scalability. Its ability to leverage in-memory storage, eliminate disk I/O bottlenecks, and provide persistence options make it an attractive choice for a wide range of real-time use cases. Whether it's chat applications, collaborative document editors, or real-time analytics dashboards, RxDB provides the foundation for building responsive and interactive software that meets the demands of today's users. ","version":"Next","tagName":"h2"},{"title":"IndexedDB Max Storage Size Limit","type":0,"sectionRef":"#","url":"/articles/indexeddb-max-storage-limit.html","content":"","keywords":"","version":"Next"},{"title":"Why IndexedDB Has a Storage Limit","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#why-indexeddb-has-a-storage-limit","content":" Browsers need a way to curb runaway disk usage and safeguard user resources. This is accomplished through quota management policies, which can vary among Chrome, Firefox, Safari, Edge, and others. Some browsers use a percentage of your total disk space, while others rely on a fixed maximum or dynamic approach per origin. These policies are designed to prevent malicious or poorly optimized web pages from consuming an unreasonable amount of user storage. Chrome (and Chromium-based browsers) typically allow you to use a percentage of the user’s free disk space, whereas Firefox historically prompts users to allow more than 5 MB in mobile or 50 MB in desktop. Safari often sets tighter maximum caps, especially on iOS devices. Edge aligns closely with Chrome’s rules but can also include enterprise or corporate policy overrides. Understanding these default or dynamic limits prepares you to plan your app’s storage needs appropriately. ","version":"Next","tagName":"h2"},{"title":"Browser-Specific IndexedDB Limits","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#browser-specific-indexeddb-limits","content":" IndexedDB size quotas differ significantly across browsers and platforms. While there isn’t a universal rule, the following table summarizes approximate limits and any notes or caveats you should be aware of: Browser\tApprox. Limit\tNotesChrome/Chromium\tUp to ~80% of free disk, per origin cap\tOften cited as 60 GB on a 100 GB drive. Shared pool approach. Quota usage can prompt partial or extended user approvals. Firefox\t~2 GB (desktop) or ~5 MB initial for mobile\tOlder versions asked permission at 50 MB for desktop. Ephemeral/incognito sessions may require repeated user prompts. 
Safari (iOS)\t~1 GB per origin (variable)\tHistorically stricter. iOS devices limit quotas further. Behavior can differ between iOS Safari versions or iPadOS. Edge\tSimilar to Chrome’s 80% of free space\tCan be influenced by Windows enterprise policies. Generally aligned with Chromium approach. iOS Safari\tTypically 1 GB, can be less on older iOS\tEarly iOS versions were known for more aggressive quotas and data eviction on low space. Android Chrome\tSimilar to desktop Chrome\tMay exhibit warnings in especially low-storage devices. The same 80% free space logic generally applies. Historically, these limits have evolved. For instance, older Firefox versions included dom.indexedDB.warningQuota, showing a 50 MB prompt on desktop or a 5 MB prompt on mobile—many developers wrote about these notifications on Stack Overflow. Since around 2015, Firefox has changed its quota approach significantly. Likewise, Safari used to limit data more aggressively on older iOS versions. Some older tutorials suggest comparing IndexedDB to localStorage, but modern browsers allow far larger and more flexible storage with IndexedDB than the old localStorage or cookie-based setups. ","version":"Next","tagName":"h2"},{"title":"Checking Your Current IndexedDB Usage","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#checking-your-current-indexeddb-usage","content":" To assess where your app stands relative to these storage limits, you can use the Storage Estimation API. The snippet below shows how to estimate both your used storage and the total space allocated to your origin: const quota = await navigator.storage.estimate(); const totalSpace = quota.quota; const usedSpace = quota.usage; console.log('Approx total allocated space:', totalSpace); console.log('Approx used space:', usedSpace); Some browsers (all modern ones) also provide a navigator.storage.persist() method to request persistent storage, preventing the browser from automatically clearing your data if the user’s device runs low on space. Note that users might deny such requests, or the request might fail silently on stricter environments. Always handle these outcomes gracefully and design your app to degrade if persistent storage is unavailable. ","version":"Next","tagName":"h2"},{"title":"Testing Your App’s IndexedDB Quotas","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#testing-your-apps-indexeddb-quotas","content":" The best way to handle real-world usage is to test for low storage conditions and large data sets in different environments. You can fill up the space manually by writing repetitive test data or running scripts that bulk-insert documents until an error occurs. Real-time usage monitors or dashboards can keep track of your navigator.storage.estimate() results, letting you see how close you are to the max limit in production. Developer tools in Chrome or Firefox can simulate limited storage situations, which is crucial for QA: This short tutorial shows how you can artificially reduce available storage in Google Chrome’s dev tools to see how your app behaves when nearing or exceeding the quota. 
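As a rough sketch of such a usage monitor (the function name, threshold, and interval below are illustrative choices, not part of any browser API), you could periodically log the estimate and warn once usage gets close to the quota:
// Sketch: periodically log origin storage usage and warn near the quota.
async function logQuotaUsage(warnAtRatio = 0.8) {
  if (!navigator.storage || !navigator.storage.estimate) {
    console.warn('Storage Estimation API is not available in this browser');
    return;
  }
  const { usage, quota } = await navigator.storage.estimate();
  const ratio = usage / quota;
  console.log('Origin storage: ' + (ratio * 100).toFixed(1) + '% used (' + usage + ' of ' + quota + ' bytes)');
  if (ratio > warnAtRatio) {
    console.warn('Approaching the storage quota - consider cleaning up old data');
  }
}
// e.g. check once per minute while testing
setInterval(logQuotaUsage, 60 * 1000);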
","version":"Next","tagName":"h2"},{"title":"Handling Errors When Limits Are Reached","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#handling-errors-when-limits-are-reached","content":" When the user’s device is too full or your app exceeds the allotted quota, most browsers will throw a QuotaExceededError (or similarly named exception) when trying to store additional data. Often, the request to IndexedDB simply fails with an error event. Handling this gracefully is essential to avoid crashes or data corruption. A typical approach is to wrap your write operations in try/catch blocks or in onsuccess / onerror event callbacks. If you detect a quota error, you can prompt the user to clear out old items or reduce the scope of offline data. Some apps implement a fallback system that removes less critical documents to free space and then retries the write. try { const tx = db.transaction('largeStore', 'readwrite'); const store = tx.objectStore('largeStore'); await store.add(hugeData, someKey); await tx.done; } catch (error) { if (error.name === 'QuotaExceededError') { console.warn('IndexedDB quota exceeded. Cleanup or prompt user to free space.'); // Optionally remove older data or show a UI hint: // removeOldDocuments(); // displayStorageFullDialog(); } else { // handle other errors console.error('IndexedDB write error:', error); } } ","version":"Next","tagName":"h2"},{"title":"Tricks to Exceed the Storage Size Limitation","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#tricks-to-exceed-the-storage-size-limitation","content":" Even if you plan well, your app might need more storage than a single origin typically allows. There are a few advanced tactics you can use: If you store binary data such as images or videos, consider compressing them via the Compression Streams API. For textual or JSON data, a library like RxDB supports built-in key-compression to shorten field names or entire documents. This can be extremely helpful when storing large sets of objects: // Example: How key-compression can transform your documents internally const uncompressed = { "firstName": "Corrine", "lastName": "Ziemann", "shoppingCartItems": [ { "productNumber": 29857, "amount": 1 }, { "productNumber": 53409, "amount": 6 } ] }; const compressed = { "|e": "Corrine", "|g": "Ziemann", "|i": [ { "|h": 29857, "|b": 1 }, { "|h": 53409, "|b": 6 } ] }; Sharding data across multiple subdomains or iframes is another trick, though it complicates communication. When you need truly massive offline data, you might store part of the data under sub1.yoursite.com and another chunk under sub2.yoursite.com, using postMessage() to coordinate. This can circumvent single-origin limitations, but it introduces extra complexity. Another effective method is to let data expire automatically—perhaps older records are removed if they haven’t been accessed for a certain period. ","version":"Next","tagName":"h2"},{"title":"IndexedDB Max Size of a Single Object","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#indexeddb-max-size-of-a-single-object","content":" There is no explicit cap on how large an individual object or record in IndexedDB can be, other than the overall disk quota. If you attempt to store one extremely large object, you will eventually hit browser memory constraints or the global storage quota. 
In practice, you’ll encounter out-of-memory issues in JavaScript before IndexedDB itself refuses a single large write. A helpful test can be seen in this JSFiddle experiment, where you can watch browsers crash when creating massive in-memory objects. ","version":"Next","tagName":"h2"},{"title":"Is There a Time Limit for Data Stored in IndexedDB?","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#is-there-a-time-limit-for-data-stored-in-indexeddb","content":" IndexedDB data can remain indefinitely as long as the user does not clear the browser’s data or the origin does not run afoul of automated eviction policies (e.g., Safari or Android might remove large caches for sites unused over a long period when space is needed). Typically, there is no “time limit,” but ephemeral modes or incognito sessions have their own rules. If you rely on permanent offline data, request persistent storage and handle the possibility that the user or the OS could still remove your data under extreme conditions. Safari in particular is known to delete local data quickly. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"IndexedDB Max Storage Size Limit","url":"/articles/indexeddb-max-storage-limit.html#follow-up","content":" Learn more by checking the IndexedDB official docs, which detail store design, error handling, and quota usage. If you need a straightforward way to manage large offline data with compression and conflict resolution, explore the RxDB Quickstart. You can also join the community on GitHub to share tips on overcoming the IndexedDB max storage size limit in production environments. ","version":"Next","tagName":"h2"},{"title":"Ionic Storage - RxDB as database for hybrid apps","type":0,"sectionRef":"#","url":"/articles/ionic-database.html","content":"","keywords":"","version":"Next"},{"title":"What are Ionic Hybrid Apps?","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#what-are-ionic-hybrid-apps","content":" Ionic (aka Ionic 2) hybrid apps combine the strengths of web technologies (HTML, CSS, JavaScript) with native app development to deliver cross-platform applications. They are built using web technologies and then wrapped in a native container to be deployed on various platforms like iOS, Android, and the web. These apps provide a consistent user experience across devices while benefiting from the efficiency and familiarity of web development. ","version":"Next","tagName":"h2"},{"title":"Storing and Querying Data in an Ionic App","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#storing-and-querying-data-in-an-ionic-app","content":" Storing and querying data is a fundamental aspect of any application, including hybrid apps. These apps often need to operate offline, store user-generated content, and provide responsive user interfaces. Therefore, having a reliable and efficient way to manage data on the client's device is crucial. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Client-Side Database for Ionic Apps","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#introducing-rxdb-as-a-client-side-database-for-ionic-apps","content":" RxDB steps in as a powerful solution to address the data management needs of Ionic hybrid apps. 
It's a NoSQL client-side database that offers exceptional performance and features tailored to the unique requirements of client-side applications. Let's delve into the key features of RxDB that make it a great fit for these apps. ","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#getting-started-with-rxdb","content":" ","version":"Next","tagName":"h3"},{"title":"What is RxDB?","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#what-is-rxdb","content":" At its core, RxDB is a NoSQL database that operates with a local-first approach. This means that your app's data is stored and processed primarily on the client's device, reducing the dependency on constant network connectivity. By doing so, RxDB ensures your app remains responsive and functional, even when offline. ","version":"Next","tagName":"h3"},{"title":"Local-First Approach","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#local-first-approach","content":" The local-first approach adopted by RxDB is a game-changer for hybrid applications. Storing data locally allows your app to function seamlessly without an internet connection, providing users with uninterrupted access to their data. When connectivity is restored, RxDB handles the synchronization of data, ensuring that any changes made offline are appropriately propagated. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#observable-queries","content":" One of RxDB's standout features is its implementation of observable queries. This concept allows your app's user interface to be dynamically updated in real time as data changes within the database. RxDB's observables create a bridge between your database and user interface, keeping them in sync effortlessly. ","version":"Next","tagName":"h3"},{"title":"NoSQL Query Engine","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#nosql-query-engine","content":" RxDB's NoSQL query engine empowers you to perform powerful queries on your app's data, without the constraints imposed by traditional relational databases. This flexibility is particularly valuable when dealing with unstructured or semi-structured data. With the NoSQL query engine, you can retrieve, filter, and manipulate data according to your app's unique requirements. const foundDocuments = await myDatabase.todos.find({ selector: { done: { $eq: false } } }).exec(); ","version":"Next","tagName":"h3"},{"title":"Great Observe Performance with EventReduce","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#great-observe-performance-with-eventreduce","content":" RxDB introduces a concept called EventReduce, which optimizes the observation process. Instead of overwhelming your app's UI with every data change, EventReduce filters and batches these changes to provide a smooth and efficient experience. This leads to enhanced app performance, lower resource usage, and ultimately, happier users. 
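For illustration, here is a minimal sketch of such an observed query, reusing the todos collection from the query example above (the logging inside the callback is just a placeholder for your own UI update):
// Sketch: observe all open todos. EventReduce updates the result set
// from change events instead of re-running the full query each time.
const subscription = myDatabase.todos.find({
  selector: { done: { $eq: false } }
}).$.subscribe(openTodos => {
  // react to the fresh result set, e.g. re-render a list in the UI
  console.log('open todos:', openTodos.length);
});
// clean up when the component or page is destroyed:
// subscription.unsubscribe();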
","version":"Next","tagName":"h3"},{"title":"Why NoSQL is a Better Fit for Client-Side Applications Compared to relational databases like SQLite","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#why-nosql-is-a-better-fit-for-client-side-applications-compared-to-relational-databases-like-sqlite","content":" When it comes to choosing the right database solution for your client-side applications, NoSQL RxDB presents compelling advantages over traditional options like SQLite. Let's delve into the key reasons why NoSQL RxDB is a superior fit for your ionic hybrid app development. ","version":"Next","tagName":"h2"},{"title":"Easier Document-Based Replication","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#easier-document-based-replication","content":" NoSQL databases, like RxDB, inherently embrace a document-based approach to data storage. This design choice simplifies data replication between clients and servers. With documents representing discrete units of data, you can easily synchronize individual pieces of information without the complexity that can arise when dealing with rows and tables in a relational database like SQLite. This document-centric replication model streamlines the synchronization process and ensures that your app's data remains consistent across devices. ","version":"Next","tagName":"h3"},{"title":"Offline Capable","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#offline-capable","content":" One of the defining features of client-side applications is the ability to function even when offline. NoSQL RxDB excels in this area by supporting a local-first approach. Data is cached on the client's device, enabling the app to remain fully functional even without an internet connection. As connectivity is restored, RxDB handles data synchronization with the server seamlessly. This offline capability ensures a smooth user experience, critical for ionic hybrid apps catering to users in various network conditions. ","version":"Next","tagName":"h3"},{"title":"NoSQL Has Better TypeScript Support","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#nosql-has-better-typescript-support","content":" TypeScript, a popular superset of JavaScript, is renowned for its static typing and enhanced developer experience. NoSQL databases like RxDB are inherently flexible, making them well-suited for TypeScript integration. With well-defined data structures and clear typings, NoSQL RxDB offers improved type safety and easier development when compared to traditional SQL databases like SQLite. This results in reduced debugging time and increased code reliability. ","version":"Next","tagName":"h3"},{"title":"Easier Schema Migration with NoSQL Documents","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#easier-schema-migration-with-nosql-documents","content":" Schema changes are a common occurrence in application development, and dealing with them can be challenging. NoSQL databases, including RxDB, are more forgiving in this aspect. Since documents in NoSQL databases don't enforce a rigid structure like tables in relational databases, schema changes are often simpler to manage. 
This flexibility makes it easier to evolve your app's data structure over time without the need for complex migration scripts, a notable advantage when compared to SQLite. ","version":"Next","tagName":"h3"},{"title":"Great Performance","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#great-performance","content":" RxDB's excellent performance stems from its advanced indexing capabilities, which streamline data retrieval and ensure swift query execution. Additionally, the JSON key compression employed by RxDB minimizes storage overhead, enabling efficient data transfer and quicker loading times. The incorporation of real-time updates through change streams and the EventReduce mechanism further enhances RxDB's performance, delivering a responsive user experience even as data changes are propagated seamlessly. ","version":"Next","tagName":"h2"},{"title":"Using RxDB in an Ionic Hybrid App","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#using-rxdb-in-an-ionic-hybrid-app","content":" RxDB's integration into your ionic hybrid app opens up a world of possibilities for efficient data management. Let's explore how to set up RxDB, use it with popular JavaScript frameworks, and take advantage of its diverse storage options. ","version":"Next","tagName":"h2"},{"title":"Setup RxDB","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#setup-rxdb","content":" Getting started with RxDB is a straightforward process. By including the RxDB library in your project, you can quickly start harnessing its capabilities. Begin by installing the RxDB package from the npm registry. Then, configure your database instance to suit your app's needs. This setup process paves the way for seamless data management in your ionic hybrid app. For a full instruction, follow the RxDB Quickstart. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in Frameworks (React, Angular, Vue.js)","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#using-rxdb-in-frameworks-react-angular-vuejs","content":" RxDB seamlessly integrates with various JavaScript frameworks, ensuring compatibility with your preferred development environment. Whether you're building your ionic hybrid app with React, Angular, or Vue.js, RxDB offers bindings and tools that enable you to leverage its features effortlessly. This compatibility allows you to stay within the comfort zone of your chosen framework while benefiting from RxDB's powerful data management capabilities. ","version":"Next","tagName":"h3"},{"title":"Different RxStorage Layers for RxDB","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB doesn't limit you to a single storage solution. Instead, it provides a range of RxStorage layers to accommodate diverse use cases. These storage layers offer flexibility and customization, enabling you to tailor your data management strategy to match your app's requirements. Let's explore some of the available RxStorage options: Dexie.js RxStorage: Dexie.js is a popular JavaScript library for indexedDB, and RxDB offers a compatible RxStorage layer. 
This option leverages IndexedDB's capabilities to provide efficient data storage and retrieval. IndexedDB RxStorage: Leveraging the native browser storage, IndexedDB RxStorage offers reliable data persistence. This storage option is suitable for a wide range of scenarios and is supported by most modern browsers. OPFS RxStorage: Operating within the browser's file system, OPFS RxStorage is a unique choice that can handle larger data volumes efficiently. It's particularly useful for applications that require substantial data storage. Memory RxStorage: Memory RxStorage is perfect for temporary or cache-like data storage. It keeps data in memory, which can result in rapid data access but doesn't provide long-term persistence. SQLite RxStorage: SQLite is the go-to database for mobile applications. It is built into Android and iOS devices. The SQLite RxDB storage layer is built upon SQLite and offers the best performance in hybrid apps like Ionic. ","version":"Next","tagName":"h3"},{"title":"Replication of Data with RxDB between Clients and Servers","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#replication-of-data-with-rxdb-between-clients-and-servers","content":" Efficient data replication between clients and servers is the backbone of modern application development, ensuring that data remains consistent and up-to-date across various devices and platforms. RxDB provides a suite of replication methods that facilitate seamless communication between clients and servers, ensuring that your data is always in sync. ","version":"Next","tagName":"h2"},{"title":"RxDB Replication Algorithm","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#rxdb-replication-algorithm","content":" At the heart of RxDB's replication capabilities lies a sophisticated algorithm designed to manage data synchronization between clients and servers. This algorithm intelligently handles data changes, conflict resolution, and network connectivity fluctuations, resulting in reliable and efficient data replication. With the RxDB replication algorithm, your application can maintain data consistency across devices without unnecessary complexities. CouchDB Replication: RxDB's integration with CouchDB replication presents a powerful way to synchronize data between clients and servers. CouchDB, a well-established NoSQL database, excels at distributed and decentralized data scenarios. By utilizing RxDB's CouchDB replication, you can establish bidirectional synchronization between your RxDB-powered client and a CouchDB server. This synchronization ensures that data updates made on either end are seamlessly propagated to the other, facilitating collaboration and data sharing. Firestore Replication: Firestore, Google's cloud-hosted NoSQL database, offers another avenue for data replication in RxDB. With Firestore replication, you can establish a connection between your RxDB-powered app and Firestore's cloud infrastructure. This integration provides real-time updates to data across multiple instances of your application, ensuring that users always have access to the latest information. RxDB's support for Firestore replication empowers you to build dynamic and responsive applications that thrive in today's fast-paced digital landscape. WebRTC Replication: Peer-to-peer (P2P) replication via WebRTC introduces a cutting-edge approach to data synchronization in RxDB. 
P2P replication allows devices to communicate directly with each other, bypassing the need for a central server. This method proves invaluable in scenarios where network connectivity is limited or unreliable. With WebRTC replication, devices can exchange data directly, enabling collaboration and information sharing even in challenging network conditions. ","version":"Next","tagName":"h3"},{"title":"RxDB as an Alternative for Ionic Secure Storage","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#rxdb-as-an-alternative-for-ionic-secure-storage","content":" When it comes to securing sensitive data in your Ionic applications, RxDB emerges as a powerful alternative to traditional secure storage solutions. Let's delve into why RxDB is an exceptional choice for safeguarding your data while providing additional benefits. ","version":"Next","tagName":"h2"},{"title":"RxDB On-Device Encryption Plugin","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#rxdb-on-device-encryption-plugin","content":" RxDB offers an on-device encryption plugin, adding an extra layer of security to your app's data. This means that data stored within the RxDB database can be encrypted, ensuring that even if the device falls into the wrong hands, the sensitive information remains inaccessible without the proper decryption key. This level of data protection is crucial for applications that deal with personal or confidential information. Encryption runs either with AES on crypto-js or with the Web Crypto API which is faster and more secure. ","version":"Next","tagName":"h3"},{"title":"Works Offline","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#works-offline","content":" Security should never compromise functionality. RxDB excels in this area by allowing your application to operate seamlessly even when offline. The locally stored encrypted data remains accessible and functional, enabling users to interact with the app's features even without an active internet connection. This offline capability ensures that user data is secure, while the app continues to deliver a responsive and uninterrupted experience. ","version":"Next","tagName":"h3"},{"title":"Easy-to-Setup Replication with Your Backend","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#easy-to-setup-replication-with-your-backend","content":" Ensuring data consistency between your client-side application and backend is a key concern for developers. RxDB simplifies this process with its straightforward replication setup. You can effortlessly configure data synchronization between your local RxDB instance and your backend server. This replication capability ensures that encrypted data remains up-to-date and aligned with the central database, enhancing data integrity and security. ","version":"Next","tagName":"h3"},{"title":"Compression of Client-Side Stored Data","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#compression-of-client-side-stored-data","content":" In addition to security and offline capabilities, RxDB also offers data compression. This means that the data stored on the client's device is efficiently compressed, reducing storage requirements and improving overall app performance. 
This compression ensures that your app remains responsive and efficient, even as data volumes grow. ","version":"Next","tagName":"h3"},{"title":"Cost-Effective Solution","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#cost-effective-solution","content":" In addition to its security features, RxDB offers cost-effective benefits. RxDB is priced more affordably compared to some other secure storage solutions, making it an attractive option for developers seeking robust security without breaking the bank. For many users, the free version of RxDB provides ample features to meet their application's security and data management needs. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#follow-up","content":" Try out the RxDB ionic example projectTry out the RxDB QuickstartJoin the RxDB Chat ","version":"Next","tagName":"h2"},{"title":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","type":0,"sectionRef":"#","url":"/articles/ionic-storage.html","content":"","keywords":"","version":"Next"},{"title":"Why RxDB for Ionic Storage?","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#why-rxdb-for-ionic-storage","content":" ","version":"Next","tagName":"h2"},{"title":"1. Offline-Ready NoSQL Storage","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#1-offline-ready-nosql-storage","content":" Offline functionality is crucial for modern mobile applications, particularly when devices encounter unreliable or slow networks. RxDB stores all data locally so your Ionic app can run seamlessly without needing a continuous internet connection. When a network is available again, RxDB automatically synchronizes changes with your backend - no extra code required. ","version":"Next","tagName":"h3"},{"title":"2. Powerful Encryption","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#2-powerful-encryption","content":" Securing on-device data is paramount when handling sensitive information. RxDB includes encryption plugins that let you: Encrypt data fields at rest with AESInvalidate data access by simply withholding the passwordKeep your users' data confidential, even if the device is stolen This built-in encryption sets RxDB apart from many other Ionic storage options that lack integrated security. ","version":"Next","tagName":"h3"},{"title":"3. Built-In Data Compression","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#3-built-in-data-compression","content":" Large or repetitive data can significantly slow down devices with minimal memory. RxDB's key-compression feature decreases document size stored on the device, improving overall performance by: Reducing disk usageAccelerating queriesMinimizing network overhead when syncing ","version":"Next","tagName":"h3"},{"title":"4. Real-Time Sync & Conflict Handling","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#4-real-time-sync--conflict-handling","content":" In addition to functioning fully offline, RxDB supports advanced replication options. 
Your Ionic app can instantly sync updates with any backend (CouchDB, Firestore, GraphQL, or custom REST), maintaining a real-time user experience. Plus, RxDB handles conflicts gracefully - meaning less worry about clashing user edits. ","version":"Next","tagName":"h3"},{"title":"5. Easy to Adopt and Extend","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#5-easy-to-adopt-and-extend","content":" RxDB runs with a NoSQL approach and integrates seamlessly into Ionic Angular or other frameworks you might use with Ionic. You can extend or replace storage backends, add encryption, or build advanced offline-first features with minimal overhead. ","version":"Next","tagName":"h3"},{"title":"Quick Start: Implementing RxDB with Dexie Storage","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#quick-start-implementing-rxdb-with-dexie-storage","content":" For a simple proof-of-concept or testing environment in Ionic, you can use Dexie.js as your underlying storage. Later, if you need better native performance, you can switch to the SQLite storage offered by the RxDB Premium plugins. Install RxDB and Dexie-based Storage npm install rxdb rxjs dexie Initialize the Database import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; async function initDB() { const db = await createRxDatabase({ name: 'myionicdb', storage: getRxStorageDexie(), // Using Dexie for local storage multiInstance: false // or true if you plan multi-tab usage // Note: If you need encryption, set `password` here }); await db.addCollections({ notes: { schema: { title: 'notes schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, content: { type: 'string' }, timestamp: { type: 'number' } }, required: ['id'] } } }); return db; } Ready to Upgrade Later? When you need the best performance on mobile devices, purchase the RxDB Premium SQLite Storage and replace getRxStorageDexie() with getRxStorageSQLite() - your app logic remains largely the same. You only have to change the configuration. ","version":"Next","tagName":"h2"},{"title":"Encryption Example","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#encryption-example","content":" To secure local data, add the crypto-js encryption plugin (free version) or the premium web-crypto plugin. 
Below is an example using the free crypto-js plugin: import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { createRxDatabase } from 'rxdb/plugins/core'; async function initEncryptedDB() { const encryptedDexieStorage = wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: 'secureIonicDB', storage: encryptedDexieStorage, password: 'myS3cretP4ssw0rd' }); await db.addCollections({ secrets: { schema: { title: 'secret schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, text: { type: 'string' } }, required: ['id'], // all fields in this array will be stored encrypted: encrypted: ['text'] } } }); return db; } With encryption enabled: text is automatically encrypted at rest. Queries on encrypted fields are not directly possible (since data is encrypted), but once a document is loaded, RxDB decrypts it for normal usage. ","version":"Next","tagName":"h2"},{"title":"Compression Example","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#compression-example","content":" To minimize the storage footprint, RxDB offers a key-compression feature. You can enable it in your schema: await db.addCollections({ logs: { schema: { title: 'logs schema', version: 0, keyCompression: true, // enable compression type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, message: { type: 'string' }, createdAt: { type: 'string', format: 'date-time' } } } } }); With keyCompression: true, RxDB shortens field names internally, significantly reducing document size. This helps both stored data and network transport during replication. ","version":"Next","tagName":"h2"},{"title":"RxDB vs. Other Ionic Storage Options","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#rxdb-vs-other-ionic-storage-options","content":" Ionic Native Storage or Capacitor-based key-value stores may handle small amounts of data but lack advanced features like: complex queries, a full NoSQL document model, offline-first sync, and encryption & key compression out of the box. RxDB stands out by delivering all these capabilities in a unified library. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB - Local Ionic Storage with Encryption, Compression & Sync","url":"/articles/ionic-storage.html#follow-up","content":" For Ionic storage that supports offline-first operations, built-in encryption, optional data compression, and live syncing with any backend, RxDB provides a powerful solution. Start quickly with Dexie for local development and testing - then scale up to the premium SQLite storage for optimal performance on production mobile devices. Ready to learn more? Explore the RxDB Quickstart Guide. Check out RxDB Encryption to protect user data. Learn about SQLite Storage in RxDB Premium for top performance on mobile. Join our community on the RxDB Chat. RxDB - The ultimate toolkit for Ionic developers seeking offline-first, secure, and compressed local data, with real-time sync to any server. 
","version":"Next","tagName":"h2"},{"title":"RxDB as a Database in a jQuery Application","type":0,"sectionRef":"#","url":"/articles/jquery-database.html","content":"","keywords":"","version":"Next"},{"title":"jQuery Web Applications","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#jquery-web-applications","content":" jQuery provides a simple API for DOM manipulation, event handling, and AJAX calls. It has been widely adopted due to its ease of use and strong community support. Many projects continue to rely on jQuery for handling client-side functionality, UI interactions, and animations. As these applications evolve, the need for a robust database solution that can manage data locally (and offline) becomes increasingly important. ","version":"Next","tagName":"h2"},{"title":"Importance of Databases in jQuery Applications","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#importance-of-databases-in-jquery-applications","content":" Modern, data-driven jQuery applications often need to: Store and retrieve data locally for quick and responsive user experiences.Synchronize data between clients or with a central server.Handle offline scenarios seamlessly.Handle large or complex data structures without repeatedly hitting the server. Relying solely on server endpoints or basic browser storage (like localStorage) can quickly become unwieldy for larger or more complex use cases. Enter RxDB, a dedicated solution that manages data on the client side while offering real-time synchronization and offline-first capabilities. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Database Solution","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#introducing-rxdb-as-a-database-solution","content":" RxDB (short for Reactive Database) is built on top of IndexedDB and leverages RxJS to provide a modern, reactive approach to handling data in the browser. With RxDB, you can store documents locally, query them in real-time, and synchronize changes with a remote server whenever an internet connection is available. ","version":"Next","tagName":"h2"},{"title":"Key Features","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#key-features","content":" Reactive Data Handling: RxDB emits real-time updates whenever your data changes, allowing you to instantly reflect these changes in the DOM with jQuery.Offline-First Approach: Keep your application usable even when the user's network is unavailable. Data is automatically synchronized once connectivity is restored.Data Replication: Enable multi-device or multi-tab synchronization with minimal effort.Observable Queries: Reduce code complexity by subscribing to queries instead of constantly polling for changes.Multi-Tab Support: If a user opens your jQuery application in multiple tabs, RxDB keeps data in sync across all sessions. 
","version":"Next","tagName":"h3"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#getting-started-with-rxdb","content":" ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#what-is-rxdb","content":" RxDB is a client-side NoSQL database that stores data in the browser (or node.js) and synchronizes changes with other instances or servers. Its design embraces reactive programming principles, making it well-suited for real-time applications, offline scenarios, and multi-tab use cases. ","version":"Next","tagName":"h3"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#reactive-data-handling","content":" RxDB's use of observables enables an event-driven architecture where data mutations automatically trigger UI updates. In a jQuery application, you can subscribe to these changes and update DOM elements as soon as data changes occur - no need for manual refresh or complicated change detection logic. ","version":"Next","tagName":"h3"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#offline-first-approach","content":" One of RxDB's distinguishing traits is its emphasis on offline-first design. This means your jQuery application continues to function, display, and update data even when there's no network connection. When connectivity is restored, RxDB synchronizes updates with the server or other peers, ensuring consistency across all instances. ","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#data-replication","content":" RxDB supports real-time data replication with different backends. By enabling replication, you ensure that multiple clients - be they multiple browser tabs or separate devices - stay in sync. RxDB's conflict resolution strategies help keep the data consistent even when multiple users make changes simultaneously. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#observable-queries","content":" Instead of static queries, RxDB provides observable queries. Whenever data relevant to a query changes, RxDB re-emits the new result set. You can subscribe to these updates within your jQuery code and instantly reflect them in the UI. ","version":"Next","tagName":"h3"},{"title":"Multi-Tab Support","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#multi-tab-support","content":" Running your jQuery app in multiple tabs? RxDB automatically synchronizes changes between those tabs. Users can freely switch windows without missing real-time updates. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. Other jQuery Database Options","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#rxdb-vs-other-jquery-database-options","content":" Historically, jQuery developers might use localStorage or raw IndexedDB for storing data. However, these solutions can require significant boilerplate, lack reactivity, and offer no built-in sync or conflict resolution. 
RxDB fills these gaps with an out-of-the-box solution, abstracting away low-level database complexities and providing an event-driven, offline-capable approach. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in a jQuery Application","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#using-rxdb-in-a-jquery-application","content":" ","version":"Next","tagName":"h2"},{"title":"Installing RxDB","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#installing-rxdb","content":" Install RxDB (and rxjs) via npm or yarn: npm install rxdb rxjs If your project isn't set up with a build process, you can still use bundlers like Webpack or Rollup, or serve RxDB as a UMD bundle. Once included, you'll have access to RxDB globally or via import statements. ","version":"Next","tagName":"h3"},{"title":"Creating and Configuring a Database","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#creating-and-configuring-a-database","content":" Below is a minimal example of how to create an RxDB instance and collection. You can call this when your page initializes, then store the db object for later use: import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; async function initDatabase() { const db = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), // Dexie-based IndexedDB password: 'myPassword', // optional encryption password multiInstance: true, // multi-tab support eventReduce: true // optimizes event handling }); await db.addCollections({ hero: { schema: { title: 'hero schema', version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, name: { type: 'string' }, points: { type: 'number' } } } } }); return db; } ","version":"Next","tagName":"h2"},{"title":"Updating the DOM with jQuery","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#updating-the-dom-with-jquery","content":" Once you have your RxDB instance, you can query data reactively and use jQuery to manipulate the DOM: // Example: Displaying heroes using jQuery $(document).ready(async function () { const db = await initDatabase(); // Subscribing to all hero documents db.hero .find() .$ // the observable .subscribe((heroes) => { // Clear the list $('#heroList').empty(); // Append each hero to the DOM heroes.forEach((hero) => { $('#heroList').append(` <li> <strong>${hero.name}</strong> - Points: ${hero.points} </li> `); }); }); // Example of adding a new hero $('#addHeroBtn').on('click', async () => { const heroName = $('#heroName').val(); const heroPoints = parseInt($('#heroPoints').val(), 10); await db.hero.insert({ id: Date.now().toString(), name: heroName, points: heroPoints }); }); }); With this approach, any time data in the hero collection changes - like when a new hero is added - your jQuery code re-renders the list of heroes automatically. ","version":"Next","tagName":"h2"},{"title":"Different RxStorage layers for RxDB","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB supports multiple storage backends (RxStorage layers). 
Some popular ones: Dexie.js RxStorage: A friendly wrapper around IndexedDB, commonly used for improved dev experience. IndexedDB RxStorage: Direct IndexedDB usage, suitable for modern browsers. OPFS RxStorage: Uses the File System Access API for better performance in supported browsers. Memory RxStorage: Stores data in memory, handy for tests or ephemeral data. SQLite RxStorage: Uses SQLite (potentially via WebAssembly). In typical browser-based scenarios, Dexie or IndexedDB storage is usually more straightforward. ","version":"Next","tagName":"h2"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" ","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#offline-first-approach-1","content":" RxDB's offline-first approach allows your jQuery application to store and query data locally. Users can continue interacting, even offline. When connectivity returns, RxDB syncs to the server. ","version":"Next","tagName":"h3"},{"title":"Conflict Resolution","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#conflict-resolution","content":" Should multiple clients update the same document, RxDB offers conflict handling strategies. You decide how to resolve conflicts - like keeping the latest edit or merging changes - ensuring data integrity across distributed systems. ","version":"Next","tagName":"h3"},{"title":"Bidirectional Synchronization","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#bidirectional-synchronization","content":" With RxDB, data changes flow both ways: from client to server and from server to client. This real-time synchronization ensures that all users or tabs see consistent, up-to-date data. ","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#advanced-rxdb-features-and-techniques","content":" ","version":"Next","tagName":"h2"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#indexing-and-performance-optimization","content":" Create indexes on frequently queried fields to speed up performance. For large data sets, indexing can drastically improve query times, keeping your jQuery UI snappy. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#encryption-of-local-data","content":" RxDB supports encryption to secure data stored in the browser. This is crucial if your application handles sensitive user information. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#change-streams-and-event-handling","content":" Use change streams to listen for data modifications at the database or collection level. This can trigger real-time UI updates, notifications, or custom logic whenever the data changes. 
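As a minimal sketch (assuming the db instance and hero collection from the setup example above), you can subscribe to the collection's change stream like this:
// Sketch: react to every write on the hero collection.
db.hero.$.subscribe(changeEvent => {
  // changeEvent.operation is 'INSERT', 'UPDATE' or 'DELETE'
  console.log('hero collection changed:', changeEvent.operation);
  // e.g. show a notification or refresh a jQuery widget here
});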
","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#json-key-compression","content":" If your data model has large or repetitive field names, JSON key compression can minimize stored document size and potentially boost performance. ","version":"Next","tagName":"h3"},{"title":"Best Practices for Using RxDB in jQuery Applications","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#best-practices-for-using-rxdb-in-jquery-applications","content":" Centralize Your Database: Initialize and configure RxDB in one place. Expose the instance where needed or store it globally to avoid re-creating it on every script.Leverage Observables: Instead of polling or manually refreshing data, rely on RxDB's reactivity. Subscribe to queries and let RxDB inform you when data changes.Handle Subscriptions: If you create subscriptions in a single-page context, ensure you don't re-subscribe endlessly or create memory leaks. Clean them up if you're navigating away or removing DOM elements.Offline Testing: Thoroughly test how your jQuery app behaves without a network connection. Simulate offline states in your browser's dev tools or with flight mode to ensure the user experience remains smooth.Performance Profiling: For large data sets or frequent data updates, add indexes and carefully measure query performance. Optimize only where needed. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database in a jQuery Application","url":"/articles/jquery-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which offers step-by-step instructions for setting up and using RxDB in your projects.RxDB Examples: Browse official examples to see RxDB in action and learn best practices you can apply to your own project - even if jQuery isn't explicitly featured, the patterns are similar. ","version":"Next","tagName":"h2"},{"title":"RxDB - JSON Database for JavaScript","type":0,"sectionRef":"#","url":"/articles/json-database.html","content":"","keywords":"","version":"Next"},{"title":"Why Choose a JSON Database?","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#why-choose-a-json-database","content":" JavaScript Friendliness: JavaScript, a prevalent language for web development, naturally uses JSON for data representation. Using a JSON database aligns seamlessly with JavaScript's native data format. Compatibility: JSON is widely supported across different programming languages and platforms. Storing data in JSON format ensures compatibility with a broad range of tools and systems. All modern programming ecosystems have packages to parse, validate and process JSON data. Flexibility: JSON documents can accommodate complex and nested data structures, allowing developers to store data in a more intuitive and hierarchical manner compared to SQL table rows. Nested data can be just stored in-document instead of having related tables. Human-Readable: JSON is easy to read and understand, simplifying debugging and data inspection tasks. 
","version":"Next","tagName":"h2"},{"title":"Storage and Access Options for JSON Documents","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#storage-and-access-options-for-json-documents","content":" When incorporating JSON documents into your application, you have several storage and access options to consider: Local In-App Database with In-Memory Storage: Ideal for lightweight applications or temporary data storage, this option keeps data in memory, ensuring fast read and write operations. However, data is not persisted beyond the current application session, making it suitable for temporary data storage. With RxDB, the memory RxStorage can be utilized to create an in-memory database. Local In-App Database with Persistent Storage: Suitable for applications requiring data retention across sessions. Data is stored on the user's device or inside of the Node.js application, offering persistence between application sessions. It balances speed and data retention, making it versatile for various applications. With RxDB, a whole range of persistend storages is available. As example, for browser there is the IndexedDB storage. For server side applications, the Node.js Filesystem storage can be used. There are many more storages for React-Native, Flutter, Capacitors.js and others. Server Database Connected to the Application: For applications requiring data synchronization and accessibility from multiple processes, a server-based database is the preferred choice. Data is stored on a remote server, facilitating data sharing, synchronization, and accessibility across multiple processes. It's suitable for scenarios requiring centralized data management and enhanced security and backup capabilities on the server. RxDB supports the FoundationDB and MongoDB as a remote database server. ","version":"Next","tagName":"h2"},{"title":"Compression Storage for JSON Documents","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#compression-storage-for-json-documents","content":" Compression storage for JSON documents is made effortless with RxDB's key-compression plugin. This feature enables the efficient storage of compressed document data, reducing storage requirements while maintaining data integrity. Queries on compressed documents remain seamless, ensuring that your application benefits from both space-saving advantages and optimal query performance, making RxDB a compelling choice for managing JSON data efficiently. The compression happens inside of the RxDatabase and does not affect the API usage. The only limitation is that encrypted fields themself cannot be used inside a query. ","version":"Next","tagName":"h2"},{"title":"Schema Validation and Data Migration on Schema Changes","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#schema-validation-and-data-migration-on-schema-changes","content":" Storing JSON documents inside of a database in an application, can cause a problem when the format of the data changes. Instead of having a single server where the data must be migrated, many client devices are out there that have to run a migration. When your application's schema evolves, RxDB provides migration strategies to facilitate the transition, ensuring data consistency throughout schema updates. JSONSchema Validation Plugins: RxDB supports multiple JSONSchema validation plugins, guaranteeing that only valid data is stored in the database. 
RxDB uses the JsonSchema standardization that you might know from other technologies like OpenAPI (aka Swagger). // RxDB Schema example const mySchema = { version: 0, primaryKey: 'id', // <- define the primary key for your documents type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, name: { type: 'string', maxLength: 100 }, done: { type: 'boolean' }, timestamp: { type: 'string', format: 'date-time' } }, required: ['id', 'name', 'done', 'timestamp'] } ","version":"Next","tagName":"h2"},{"title":"Store JSON with RxDB in Browser Applications","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#store-json-with-rxdb-in-browser-applications","content":" RxDB offers versatile storage solutions for browser-based applications: Multiple Storage Plugins: RxDB supports various storage backends, including IndexedDB, Dexie.js and In-Memory, catering to a range of browser environments. Observable Queries: With RxDB, you can create observable queries that work seamlessly across multiple browser tabs, providing real-time updates and synchronization. ","version":"Next","tagName":"h2"},{"title":"RxDB JSON Database Performance","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#rxdb-json-database-performance","content":" Certainly! Let's delve deeper into the performance aspects of RxDB when it comes to working with JSON data. Efficient Querying: RxDB is engineered for rapid and efficient querying of JSON data. It employs a well-optimized indexing system that allows for lightning-fast retrieval of specific data points within your JSON documents. Whether you're fetching individual values or complex nested structures, RxDB's query performance is designed to keep your application responsive, even when dealing with large datasets. Scalability: As your application grows and your JSON dataset expands, RxDB scales gracefully. Its performance remains consistent, enabling you to handle increasingly larger volumes of data without compromising on speed or responsiveness. This scalability is essential for applications that need to accommodate growing user bases and evolving data needs. Reduced Latency: RxDB's streamlined data access mechanisms significantly reduce latency when working with JSON data. Whether you're reading from the database, making updates, or synchronizing data between clients and servers, RxDB's optimized operations help minimize the delays often associated with data access. Observed queries are optimized with the EventReduce algorithm to provide nearly-instand UI updates on data changes. RxStorage Layer: Because RxDB allows you to swap out the storage layer. A storage with the most optimal performance can be chosen for each runtime while not touching other database code. Depending on the access patterns, you can pick exactly the storage that is best: ","version":"Next","tagName":"h2"},{"title":"RxDB in Node.js","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#rxdb-in-nodejs","content":" Node.js developers can also benefit from RxDB's capabilities. By integrating RxDB into your Node.js applications, you can harness the power of a NoSQL JSON db to efficiently manage your data on the server-side. RxDB's flexibility, performance, and essential features are equally valuable in server-side development. Read more about RxDB+Node.js. 
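A minimal Node.js sketch, reusing the mySchema definition from above with the in-memory storage (swap in a persistent storage for real deployments):

import { createRxDatabase } from 'rxdb';
import { getRxStorageMemory } from 'rxdb/plugins/storage-memory';

const db = await createRxDatabase({
  name: 'serverdb',
  storage: getRxStorageMemory()
});
await db.addCollections({ todos: { schema: mySchema } });

await db.todos.insert({
  id: 'todo-1',
  name: 'write docs',
  done: false,
  timestamp: new Date().toISOString()
});
const openTodos = await db.todos.find({ selector: { done: false } }).exec();
console.log('open todos:', openTodos.length);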
","version":"Next","tagName":"h2"},{"title":"RxDB to store JSON documents in React Native","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#rxdb-to-store-json-documents-in-react-native","content":" For mobile app developers working with React Native, RxDB offers a convenient solution for handling JSON data. Whether you're building Android or iOS applications, RxDB's compatibility with JavaScript and its ability to work with JSON documents make it a natural choice for data management within your React Native apps. Read more about RxDB+React-Native. ","version":"Next","tagName":"h2"},{"title":"Using SQLite as a JSON Database","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#using-sqlite-as-a-json-database","content":" In some cases, you might want to use SQLite as a backend storage solution for your JSON data. RxDB can be configured to work with SQLite, providing the benefits of both a relational database system and JSON document storage. This hybrid approach can be advantageous when dealing with complex data relationships while retaining the flexibility of JSON data representation. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#follow-up","content":" To further explore RxDB and get started with using it in your frontend applications, consider the following resources: RxDB Quickstart: A step-by-step guide to quickly set up RxDB in your project and start leveraging its features.RxDB GitHub Repository: The official repository for RxDB, where you can find the code, examples, and community support. By embracing RxDB as your JSON database solution, you can tap into the extensive capabilities of JSON data storage. This empowers your applications with offline accessibility, caching, enhanced performance, and effortless data synchronization. RxDB's focus on JavaScript and its robust feature set render it the perfect selection for frontend developers in pursuit of efficient and scalable data storage solutions. ","version":"Next","tagName":"h2"},{"title":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","type":0,"sectionRef":"#","url":"/articles/local-database.html","content":"","keywords":"","version":"Next"},{"title":"Use Cases of Local Databases","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#use-cases-of-local-databases","content":" Local databases are particularly beneficial for: Offline Functionality: Essential for apps that must remain usable without a consistent internet connection, such as note-taking apps or offline-first CRMs. Users can continue adding and editing data, then sync changes once they reconnect.Low Latency: By reducing round-trips to remote servers, local databases enable real-time responsiveness. This feature is critical for interactive applications such as gaming platforms, data dashboards, or analytics tools that need near-instant feedback.Data Synchronization: Many modern applications - like chat systems or collaborative editing tools - require continuous data exchange between multiple users or devices. Local databases can handle intermittent connectivity gracefully, queuing updates locally and syncing them when possible. 
In addition, local databases are increasingly integral to Progressive Web Apps (PWAs), offering a native app-like user experience that is fast and available, even when offline. ","version":"Next","tagName":"h3"},{"title":"Performance Optimization","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#performance-optimization","content":" The primary performance benefit of a local database is its proximity to the application: queries and updates happen directly on the user's device, eliminating the overhead of multiple network hops. Common optimizations include: Caching: Storing frequently accessed data in memory or on disk to minimize expensive operations.Batching Writes: Grouping database operations into a single write transaction to reduce overhead and lock contention.Efficient Indexing: Using appropriate indexes to speed up queries, especially important for applications that handle large data sets or frequent lookups. These techniques ensure that local databases run smoothly, even on lower-powered or mobile devices. ","version":"Next","tagName":"h3"},{"title":"Security and Encryption","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#security-and-encryption","content":" Storing data on user devices introduces unique security considerations, such as the risk of physical theft or unauthorized access. Consequently, many local databases support encryption options to safeguard sensitive information. Developers can implement additional security measures like device-level encryption, secure storage plugins, and user authentication to further protect data from prying eyes. ","version":"Next","tagName":"h3"},{"title":"Why RxDB is Optimized for JavaScript Applications","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#why-rxdb-is-optimized-for-javascript-applications","content":" RxDB (Reactive Database) is an offline-first, NoSQL database designed to meet the needs of modern JavaScript applications. Built with a focus on reactivity and real-time data handling, RxDB excels in scenarios where low-latency, offline availability, and scalability are essential. ","version":"Next","tagName":"h2"},{"title":"Real-Time Reactivity","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#real-time-reactivity","content":" At the core of RxDB is reactive programming, allowing you to subscribe to changes in your data collections and receive immediate UI updates when records change - no manual polling or refetching required. For instance, a chat application can display incoming messages as soon as they arrive, maintaining a smooth and responsive experience. ","version":"Next","tagName":"h3"},{"title":"Offline-First Support","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#offline-first-support","content":" RxDB's primary design goal is to work seamlessly in offline environments. Even if your device loses internet connectivity, RxDB enables you to continue reading and writing data. Once the connection is restored, all pending changes are automatically synchronized with your backend. 
This offline-first approach is ideal for productivity apps, field service tools, and other scenarios where reliability and user autonomy are paramount. ","version":"Next","tagName":"h3"},{"title":"Flexible Data Replication","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#flexible-data-replication","content":" A standout feature of RxDB is its bi-directional replication. It supports synchronization with a variety of backends, such as: CouchDB: Via the CouchDB replication protocol, facilitating easy integration with any Couch-compatible server.GraphQL Endpoints: Through community plugins, developers can replicate JSON documents to and from GraphQL servers.Custom Backends: RxDB provides hooks to build custom replication strategies for proprietary or specialized server APIs. This flexibility ensures that RxDB fits into diverse architectures without locking you into a single vendor or technology stack. ","version":"Next","tagName":"h3"},{"title":"Schema Validation and Versioning","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#schema-validation-and-versioning","content":" Rather than relying on implicit data models, RxDB leverages JSON schema to define document structures. This approach promotes data consistency by enforcing constraints such as required fields and acceptable data formats. As your application grows and changes, RxDB's built-in schema versioning and migration tools help you evolve your database schema safely, minimizing risks of data corruption or loss. ","version":"Next","tagName":"h3"},{"title":"Rich Plugin Ecosystem","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#rich-plugin-ecosystem","content":" One of RxDB's greatest strengths is its pluggable architecture, allowing you to add functionality as needed: Encryption: Secure your data at rest using advanced encryption plugins.Full-Text Search: Integrate powerful text search capabilities for applications that require quick and flexible query options.Storage Adapters: Swap out the underlying storage layer (e.g., IndexedDB in the browser, SQLite in React Native, or a custom engine) without rewriting your application logic. You can fine-tune RxDB to your exact needs, avoiding the performance overhead of unnecessary features. ","version":"Next","tagName":"h3"},{"title":"Multi-Platform Compatibility","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#multi-platform-compatibility","content":" RxDB is a perfect fit for cross-platform development, as it supports numerous environments: Browsers (IndexedDB): For web and PWA projects.Node.js: Ideal for server-side rendering or background services.React Native: Leverage SQLite or other adapters for mobile app development.Electron: Create offline-capable desktop apps with a unified codebase. This versatility empowers teams to reuse application logic across multiple platforms while maintaining a consistent data model. 
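For the custom-backend replication mentioned above, a rough sketch with the generic replication plugin could look like this (the REST endpoints and checkpoint shape are hypothetical; the exact handler contracts are described in the RxDB replication docs):

import { replicateRxCollection } from 'rxdb/plugins/replication';

const replicationState = replicateRxCollection({
  collection: db.todos,
  replicationIdentifier: 'my-rest-api-replication',
  live: true,
  pull: {
    // fetch documents changed on the server since the last checkpoint
    handler: async (lastCheckpoint, batchSize) => {
      const since = lastCheckpoint ? lastCheckpoint.updatedAt : 0;
      const response = await fetch('https://example.com/api/pull?since=' + since + '&limit=' + batchSize);
      const { documents, checkpoint } = await response.json();
      return { documents, checkpoint };
    }
  },
  push: {
    // send local writes to the server; must return the conflicting documents (empty array if none)
    handler: async (changedRows) => {
      const response = await fetch('https://example.com/api/push', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(changedRows)
      });
      return await response.json();
    }
  }
});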
","version":"Next","tagName":"h3"},{"title":"Performance Optimization","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#performance-optimization-1","content":" With lazy loading of data and the ability to utilize efficient storage engines, RxDB delivers high-speed operations and quick response times. By minimizing disk I/O and leveraging indexes effectively, RxDB ensures that even large-scale applications remain performant. Its reactive nature also helps avoid unnecessary re-renders, improving the end-user experience. ","version":"Next","tagName":"h3"},{"title":"Proven Reliability","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#proven-reliability","content":" RxDB is battle-tested in production environments, handling use cases from small single-user applications to large-scale enterprise solutions. Its robust replication mechanism resolves conflicts, manages concurrent writes, and ensures data integrity. The active open-source community provides ongoing support, documentation updates, and feature improvements. ","version":"Next","tagName":"h3"},{"title":"Developer-Friendly Features","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#developer-friendly-features","content":" For developers, RxDB offers: Straightforward APIs: Built on top of familiar JavaScript paradigms like promises and observables.Excellent Documentation: Detailed guides, tutorials, and references for every major feature.Rich Community Support: Benefit from an active ecosystem of contributors creating plugins, answering questions, and maintaining core libraries. These qualities streamline development, making RxDB an appealing choice for teams of all sizes. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"What is a Local Database and Why RxDB is the Best Local Database for JavaScript Applications","url":"/articles/local-database.html#follow-up","content":" Ready to get started? Here are some next steps: Try the Quickstart Tutorial and build a basic project to see RxDB in action.Compare RxDB with other local database solutions to determine the best fit for your unique requirements. Ultimately, RxDB is more than just a database - it's a robust, reactive toolkit that empowers you to build fast, resilient, and user-centric applications. Whether you're creating an offline-first note-taking app or a real-time collaborative platform, RxDB can handle your local storage needs with ease and flexibility. ","version":"Next","tagName":"h2"},{"title":"Mobile Database - RxDB as Database for Mobile Applications","type":0,"sectionRef":"#","url":"/articles/mobile-database.html","content":"","keywords":"","version":"Next"},{"title":"Understanding Mobile Databases","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#understanding-mobile-databases","content":" Mobile databases are specialized software systems designed to handle data storage and management for mobile applications. These databases are optimized for the unique requirements of mobile environments, which often include limited device resources, fluctuations in network connectivity, and the need for offline functionality. 
There are various types of mobile databases available, each with its own strengths and use cases. Local databases, such as SQLite and Realm, reside directly on the user's device, providing offline capabilities and faster data access. Cloud-based databases, like Firebase Realtime Database and Amazon DynamoDB, rely on remote servers to store and retrieve data, enabling synchronization across multiple devices. Hybrid databases, as the name suggests, combine the benefits of both local and cloud-based approaches, offering a balance between offline functionality and data synchronization. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB: A Paradigm Shift in Mobile Database Solutions","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#introducing-rxdb-a-paradigm-shift-in-mobile-database-solutions","content":" RxDB, also known as Reactive Database, has emerged as a game-changer in the realm of mobile databases. Built on top of popular web technologies like JavaScript, TypeScript, and RxJS (Reactive Extensions for JavaScript), RxDB provides an elegant solution for seamless offline-first capabilities and real-time data synchronization in mobile applications. Benefits of RxDB for Hybrid App Development Offline-First Approach: One of the major advantages of RxDB is its ability to work in an offline mode. It allows mobile applications to store and access data locally, ensuring uninterrupted functionality even when the network connection is weak or unavailable. The database automatically syncs the data with the server once the connection is reestablished, guaranteeing data consistency. Real-Time Data Synchronization: RxDB leverages the power of real-time data synchronization, making it an excellent choice for applications that require collaborative features or live updates. It uses the concept of change streams to detect modifications made to the database and instantly propagates those changes across connected devices. This real-time synchronization enables seamless collaboration and enhances user experience. Reactive Programming Paradigm: RxDB embraces the principles of reactive programming, which simplifies the development process by handling asynchronous events and data streams. By leveraging RxJS observables, developers can write concise, declarative code that reacts to changes in data, ensuring a highly responsive user experience. The reactive programming paradigm enhances code maintainability, scalability, and testability. Easy Integration with Hybrid App Frameworks: RxDB seamlessly integrates with popular hybrid app development frameworks like React Native and Capacitor. This compatibility allows developers to leverage the existing ecosystem and tools of these frameworks, making the transition to RxDB smoother and more efficient. By utilizing RxDB within these frameworks, developers can harness the power of a robust database solution without sacrificing the advantages of hybrid app development. Cross-Platform Support: RxDB enables developers to build cross-platform mobile applications that run seamlessly on both iOS and Android devices. This versatility eliminates the need for separate database implementations for different platforms, saving development time and effort. With RxDB, developers can focus on building a unified codebase and delivering a consistent user experience across platforms. 
","version":"Next","tagName":"h2"},{"title":"Use Cases for RxDB in Hybrid App Development","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#use-cases-for-rxdb-in-hybrid-app-development","content":" Offline-First Applications: RxDB is an ideal choice for applications that heavily rely on offline functionality. Whether it's a note-taking app, a task manager, or a survey application, RxDB ensures that users can continue working even when connectivity is compromised. The seamless synchronization capabilities of RxDB ensure that changes made offline are automatically propagated once the device reconnects to the internet. Real-Time Collaboration: Applications that require real-time collaboration, such as messaging platforms or collaborative editing tools, can greatly benefit from RxDB. The real-time synchronization capabilities enable multiple users to work on the same data simultaneously, ensuring that everyone sees the latest updates in real-time. Data-Intensive Applications: RxDB's performance and scalability make it suitable for data-intensive applications that handle large datasets or complex data structures. Whether it's a media-rich app, a data visualization tool, or an analytics platform, RxDB can handle the heavy lifting and provide a smooth user experience. Cross-Platform Applications: Hybrid app frameworks like React Native and Capacitor have gained popularity due to their ability to build cross-platform applications. By utilizing RxDB within these frameworks, developers can create a unified codebase that runs seamlessly on both iOS and Android, significantly reducing development time and effort. ","version":"Next","tagName":"h2"},{"title":"Conclusion","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#conclusion","content":" Mobile databases play a vital role in the performance and functionality of mobile applications. RxDB, with its offline-first approach, real-time data synchronization, and seamless integration with hybrid app development frameworks like React Native and Capacitor, offers a robust solution for managing data in mobile apps. By leveraging the power of reactive programming, RxDB empowers developers to build highly responsive, scalable, and cross-platform applications that deliver an exceptional user experience. With its versatility and ease of use, RxDB is undoubtedly a database solution worth considering for hybrid app development. Embrace the power of RxDB and unlock the full potential of your mobile applications. ","version":"Next","tagName":"h2"},{"title":"Local Vector Database with RxDB and transformers.js","type":0,"sectionRef":"#","url":"/articles/javascript-vector-database.html","content":"","keywords":"","version":"Next"},{"title":"What is a Vector Database?","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#what-is-a-vector-database","content":" A vector database is a specialized database optimized for storing and querying data in the form of high-dimensional vectors, often referred to as embeddings. These embeddings are numerical representations of data, such as text, images, or audio, created by machine learning models like MiniLM. Unlike traditional databases that work with exact matches on predefined fields, vector databases focus on semantic similarity, allowing you to query data based on meaning rather than exact values. 
A vector, or embedding, is essentially an array of numbers, like [0.56, 0.12, -0.34, -0.90]. For example, instead of asking "Which document has the word 'database'?", you can query "Which documents discuss similar topics to this one?" The vector database compares embeddings and returns results based on how similar the vectors are to each other. Vector databases handle multiple types of data beyond text, including images, videos, and audio files, all transformed into embeddings for efficient querying. Mostly you would not train a model by yourself and instead use one of the public available transformer models. Vector databases are highly effective in various types of applications: Similarity Search: Finds the closest matches to a query, even when the query doesn't contain the exact terms.Clustering: Groups similar items based on the proximity of their vector representations.Recommendations: Suggests items based on shared characteristics.Anomaly Detection: Identifies outliers that differ from the norm.Classification: Assigns categories to data based on its vector's nearest neighbors. In this tutorial, we will build a vector database designed as a Similarity Search for text. For other use cases, the setup can be adapted accordingly. This flexibility is why RxDB doesn't provide a dedicated vector-database plugin, but rather offers utility functions to help you build your own vector search system. ","version":"Next","tagName":"h2"},{"title":"Generating Embeddings Locally in a Browser","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#generating-embeddings-locally-in-a-browser","content":" For the first step to build a local-first vector database we need to compute embeddings directly on the user's device. This is where transformers.js from huggingface comes in, allowing us to run machine learning models in the browser with WebAssembly. Below is an implementation of a getEmbeddingFromText() function, which takes a piece of text and transforms it into an embedding using the Xenova/all-MiniLM-L6-v2 model: import { pipeline } from "@xenova/transformers"; const pipePromise = pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2'); async function getEmbeddingFromText(text) { const pipe = await pipePromise; const output = await pipe(text, { pooling: "mean", normalize: true, }); return Array.from(output.data); } This function creates an embedding by running the text through a pre-trained model and returning it in the form of an array of numbers, which can then be stored and further processed locally. note Vector embeddings from different machine learning models or versions are not compatible with each other. When you change your model, you have to recreate all embeddings for your data. ","version":"Next","tagName":"h2"},{"title":"Storing the Embeddings in RxDB","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#storing-the-embeddings-in-rxdb","content":" To store the embeddings, first we have to create our RxDB Database with the Dexie.js storage that stores data in IndexedDB. import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageDexie() }); Then we add a items collection that stores our documents with the text field that stores the content. 
await db.addCollections({ items: { schema: { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 20 }, text: { type: 'string' } }, required: ['id', 'text'] } } }); const itemsCollection = db.items; In our example repo, we use the Wiki Embeddings dataset from supabase which was transformed and used to fill up the items collection with test data. const imported = await itemsCollection.count().exec(); const response = await fetch('./files/items.json'); const items = await response.json(); const insertResult = await itemsCollection.bulkInsert( items ); Also we need a vector collection that stores our embeddings. RxDB, as a NoSQL database, allows for the storage of flexible data structures, such as embeddings, within documents. To achieve this, we need to define a schema that specifies how the embeddings will be stored alongside each document. The schema includes fields for an id and the embedding array itself. await db.addCollections({ vector: { schema: { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 20 }, embedding: { type: 'array', items: { type: 'string' } } }, required: ['id', 'embedding'] } } }); const vectorCollection = db.vector; When storing documents in the database, we need to ensure that the embeddings for these documents are generated and stored automatically. This requires a handler that runs during every document write, calling the machine learning model to generate the embeddings and storing them in a separate vector collection. Since our app runs in a browser, it's essential to avoid duplicate work when multiple browser tabs are open and ensure efficient use of resources. Furthermore, we want the app to resume processing documents from where it left off if it's closed or interrupted. To achieve this, RxDB provides a pipeline plugin, which allows us to set up a workflow that processes items and stores their embeddings. In our example, a pipeline takes batches of 10 documents, generates embeddings, and stores them in a separate vector collection. const pipeline = await itemsCollection.addPipeline({ identifier: 'my-embeddings-pipeline', destination: vectorCollection, batchSize: 10, handler: async (docs) => { await Promise.all(docs.map(async(doc) => { const embedding = await getVectorFromText(doc.text); await vectorCollection.upsert({ id: doc.primary, embedding }); })); } }); However, processing data locally presents performance challenges. Running the handler with a batch size of 10 takes around 2-4 seconds per batch, meaning processing 10k documents would take up to an hour. To improve performance, we can do parallel processing using WebWorkers. A WebWorker runs on a different JavaScript process and we can start and run many of them in parallel. Our worker listens for messages and performance the embedding generation on each request. It then sends the result embedding back to the main thread. // worker.js import { getVectorFromText } from './vector.js'; onmessage = async (e) => { const embedding = await getVectorFromText(e.data.text); postMessage({ id: e.data.id, embedding }); }; On the main thread we spawn one worker per core and send the tasks to the worker instead of processing them on the main thread. 
// create one WebWorker per core const workers = new Array(navigator.hardwareConcurrency) .fill(0) .map(() => new Worker(new URL("worker.js", import.meta.url))); let lastWorkerId = 0; let lastId = 0; export async function getVectorFromTextWithWorker(text: string): Promise<number[]> { let worker = workers[lastWorkerId++]; if(!worker) { lastWorkerId = 0; worker = workers[lastWorkerId++]; } const id = (lastId++) + ''; return new Promise<number[]>(res => { const listener = (ev: any) => { if (ev.data.id === id) { res(ev.data.embedding); worker.removeEventListener('message', listener); } }; worker.addEventListener('message', listener); worker.postMessage({ id, text }); }); } const pipeline = await itemsCollection.addPipeline({ identifier: 'my-embeddings-pipeline', destination: vectorCollection, batchSize: navigator.hardwareConcurrency, // one per CPU core handler: async (docs) => { await Promise.all(docs.map(async (doc, i) => { const embedding = await getVectorFromTextWithWorker(doc.body); /* ... */ }); } }); This setup allows us to utilize the full hardware capacity of the client's machine. By setting the batch size to match the number of logical processors available (using the navigator.hardwareConcurrency API) and running one worker per processor, we can reduce the processing time for 10k embeddings to about 5 minutes on my developer laptop with 32 CPU cores. ","version":"Next","tagName":"h2"},{"title":"Comparing Vectors by calculating the distance","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#comparing-vectors-by-calculating-the-distance","content":" Now that we have stored our embeddings in the database, the next step is to compare these vectors to each other. Various methods are available to measure the similarity or difference between two vectors, such as Euclidean distance, Manhattan distance, Cosine similarity, and Jaccard similarity (and more). RxDB provides utility functions for each of these methods, making it easy to choose the most suitable method for your application. In this tutorial, we will use Euclidean distance to compare vectors. However, the ideal algorithm may vary depending on your data's distribution and the specific type of query you are performing. To find the optimal method for your app, it is up to you to try out all of these and compare the results. Each method gets two vectors as input and returns a single number. Here's how to calculate the Euclidean distance between two embeddings with the vector utilities from RxDB: import { euclideanDistance } from 'rxdb/plugins/vector'; const distance = euclideanDistance(embedding1, embedding2); console.log(distance); // 25.20443 With this we can sort multiple embeddings by how good they match our search query vector. ","version":"Next","tagName":"h2"},{"title":"Searching the Vector database with a full table scan","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#searching-the-vector-database-with-a-full-table-scan","content":" To find out if our embeddings have been stored correctly and that our vector comparison works as should, let's run a basic query to ensure everything functions as expected. In this query, we aim to find documents similar to a given user input text. 
The process involves calculating the embedding from the input text, fetching all documents, calculating the distance between their embeddings and the query embedding, and then sorting them based on their similarity. import { euclideanDistance } from 'rxdb/plugins/vector'; import { sortByObjectNumberProperty } from 'rxdb/plugins/core'; const userInput = 'new york people'; const queryVector = await getEmbeddingFromText(userInput); const candidates = await vectorCollection.find().exec(); const withDistance = candidates.map(doc => ({ doc, distance: euclideanDistance(queryVector, doc.embedding) })); const queryResult = withDistance.sort(sortByObjectNumberProperty('distance')).reverse(); console.dir(queryResult); note For distance-based comparisons, sorting should be in ascending order (smallest first), while for similarity-based algorithms, the sorting should be in descending order (largest first). If we inspect the results, we can see that the documents returned are ordered by relevance, with the most similar document at the top: note This demo page can be run online here. However our full-scan method presents a significant challenge: it does not scale well. As the number of stored documents increases, the time taken to fetch and compare embeddings grows proportionally. For example, retrieving embeddings from our test dataset of 10k documents takes around 700 milliseconds. If we scale up to 100k documents, this delay would rise to approximately 7 seconds, making the search process inefficient for larger datasets. ","version":"Next","tagName":"h2"},{"title":"Indexing the Embeddings for Better Performance","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#indexing-the-embeddings-for-better-performance","content":" To address the scalability issue, we need to store embeddings in a way that allows us to avoid fetching all of them from storage during a query. In traditional databases, you can sort documents by an index field, allowing efficient queries that retrieve only the necessary documents. An index organizes data in a structured, sortable manner, much like a phone book. However, with vector embeddings we are not dealing with simple, single values. Instead, we have large lists of numbers, which makes indexing more complex because we have more than one dimension. ","version":"Next","tagName":"h2"},{"title":"Vector Indexing Methods","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#vector-indexing-methods","content":" Various methods exist for indexing these vectors to improve query efficiency and performance: Locality Sensitive Hashing (LSH): LSH hashes data so that similar items are likely to fall into the same bucket, optimizing approximate nearest neighbor searches in high-dimensional spaces by reducing the number of comparisons.Hierarchical Small World: HSW is a graph structure designed for efficient navigation, allowing quick jumps across the graph while maintaining short paths between nodes, forming the basis for HNSW's optimization.Hierarchical Navigable Small Worlds (HNSW): HNSW builds a hierarchical graph for fast approximate nearest neighbor search. It uses multiple layers where higher layers represent fewer, more connected nodes, improving search efficiency in large datasets.Distance to samples: While testing different indexing strategies, I found out that using the distance to a sample set of items is a good way to index embeddings. 
You pick, say, 5 random items of your data and get the embeddings for them out of the model. These are your 5 index vectors. For each embedding stored in the vector database, we calculate the distance to our 5 index vectors and store that number as an index value. This seems to work well because similar things have similar distances to other things. For example the words "shoe" and "socks" have a similar distance to "boat" and therefore should have roughly the same index value. When building local-first applications, performance is often a challenge, especially in JavaScript. With IndexedDB, certain operations, like many sequential get by id calls, are slow, while bulk operations, such as get by index range, are fast. Therefore, it's essential to use an indexing method that allows embeddings to be stored in a sortable way, like Locality Sensitive Hashing or Distance to Samples. In this article, we'll use Distance to Samples, because for me it provides the best default behavior for the sample dataset. ","version":"Next","tagName":"h3"},{"title":"Storing indexed embeddings in RxDB","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#storing-indexed-embeddings-in-rxdb","content":" The optimal way to store index values alongside embeddings in RxDB is to place them within the same RxCollection. To ensure that the index values are both sortable and precise, we convert them into strings with a fixed length of 10 characters. This standardization helps in managing values with many decimals and ensures proper sorting in the database. Here is our example schema, where each document contains an embedding and the corresponding index fields: const indexSchema = { type: 'string', maxLength: 10 }; const schema = { "version": 0, "primaryKey": "id", "type": "object", "properties": { "id": { "type": "string", "maxLength": 100 }, "embedding": { "type": "array", "items": { "type": "number" } }, // index fields "idx0": indexSchema, "idx1": indexSchema, "idx2": indexSchema, "idx3": indexSchema, "idx4": indexSchema }, "required": [ "id", "embedding", "idx0", "idx1", "idx2", "idx3", "idx4" ], "indexes": [ "idx0", "idx1", "idx2", "idx3", "idx4" ] } To populate these index fields, we modify the RxPipeline handler according to the Distance to Samples method. We calculate the distance between the document's embedding and our set of 5 index vectors. 
The calculated distances are converted to string and stored in the appropriate index fields: import { euclideanDistance } from 'rxdb/plugins/vector'; const sampleVectors: number[][] = [/* the index vectors */]; const pipeline = await itemsCollection.addPipeline({ handler: async (docs) => { await Promise.all(docs.map(async(doc) => { const embedding = await getEmbedding(doc.text); const docData = { id: doc.primary, embedding }; // calculate the distance to all samples and store them in the index fields new Array(5).fill(0).map((_, idx) => { const indexValue = euclideanDistance(sampleVectors[idx], embedding); docData['idx' + idx] = indexNrToString(indexValue); }); await vectorCollection.upsert(docData); })); } }); ","version":"Next","tagName":"h3"},{"title":"Searching the Vector database with utilization of the indexes","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#searching-the-vector-database-with-utilization-of-the-indexes","content":" Once our embeddings are stored in an indexed format, we can perform searches much more efficiently than through a full table scan. While this indexing method boosts performance, it comes with a tradeoff: a slight loss in precision, meaning that the result set may not always be the optimal one. However, this is generally acceptable for similarity search use cases. There are multiple ways to leverage indexes for faster queries. Here are two effective methods: Query for Index Similarity in Both Directions: For each index vector, calculate the distance to the search embedding and fetch all relevant embeddings in both directions (sorted before and after) from that value. async function vectorSearchIndexSimilarity(searchEmbedding: number[]) { const docsPerIndexSide = 100; const candidates = new Set<RxDocument>(); await Promise.all( new Array(5).fill(0).map(async (_, i) => { const distanceToIndex = euclideanDistance(sampleVectors[i], searchEmbedding); const [docsBefore, docsAfter] = await Promise.all([ vectorCollection.find({ selector: { ['idx' + i]: { $lt: indexNrToString(distanceToIndex) } }, sort: [{ ['idx' + i]: 'desc' }], limit: docsPerIndexSide }).exec(), vectorCollection.find({ selector: { ['idx' + i]: { $gt: indexNrToString(distanceToIndex) } }, sort: [{ ['idx' + i]: 'asc' }], limit: docsPerIndexSide }).exec() ]); docsBefore.map(d => candidates.add(d)); docsAfter.map(d => candidates.add(d)); }) ); const docsWithDistance = Array.from(candidates).map(doc => { const distance = euclideanDistance((doc as any).embedding, searchEmbedding); return { distance, doc }; }); const sorted = docsWithDistance.sort(sortByObjectNumberProperty('distance')).reverse(); return { result: sorted.slice(0, 10), docReads }; } Query for an Index Range with a Defined Distance: Set an indexDistance and retrieve all embeddings within a specified range from the index vector to the search embedding. 
async function vectorSearchIndexRange(searchEmbedding: number[]) { await pipeline.awaitIdle(); const indexDistance = 0.003; const candidates = new Set<RxDocument>(); let docReads = 0; await Promise.all( new Array(5).fill(0).map(async (_, i) => { const distanceToIndex = euclideanDistance(sampleVectors[i], searchEmbedding); const range = distanceToIndex * indexDistance; const docs = await vectorCollection.find({ selector: { ['idx' + i]: { $gt: indexNrToString(distanceToIndex - range), $lt: indexNrToString(distanceToIndex + range) } }, sort: [{ ['idx' + i]: 'asc' }], }).exec(); docs.map(d => candidates.add(d)); docReads = docReads + docs.length; }) ); const docsWithDistance = Array.from(candidates).map(doc => { const distance = euclideanDistance((doc as any).embedding, searchEmbedding); return { distance, doc }; }); const sorted = docsWithDistance.sort(sortByObjectNumberProperty('distance')).reverse(); return { result: sorted.slice(0, 10), docReads }; }; Both methods allow you to limit the number of embeddings fetched from storage while still ensuring a reasonably precise search result. However, they differ in how many embeddings are read and how precise the results are, with trade-offs between performance and accuracy. The first method reads a known amount of embeddings: docsPerIndexSide * 2 * [number of indexes]. The second method reads out an unknown amount of embeddings, depending on the sparsity of the dataset and the value of indexDistance. And that's it for the implementation. We now have a local-first vector database that is able to store and query vector data. ","version":"Next","tagName":"h2"},{"title":"Performance benchmarks","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#performance-benchmarks","content":" In server-side databases, performance can be improved by scaling hardware or adding more servers. However, local-first apps face the unique challenge that the hardware is determined by the end user, making performance unpredictable. Some users may have high-end gaming PCs, while others might be using outdated smartphones in power-saving mode. Therefore, when building a local-first app that processes more than a few documents, performance becomes a critical factor and should be thoroughly tested upfront. Let's run performance benchmarks on my high-end gaming PC to give you a sense of how long different operations take and what's achievable. ","version":"Next","tagName":"h2"},{"title":"Performance of the Query Methods","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#performance-of-the-query-methods","content":" Query Method\tTime in milliseconds\tDocs read from storage Full Scan\t765\t10000 Index Similarity\t1647\t934 Index Range\t88\t2187 As shown, the index similarity query method takes significantly longer compared to others. This is due to the need for descending sort orders in some queries: sort: [{ ['idx' + i]: 'desc' }]. While RxDB supports descending sorts, performance suffers because IndexedDB does not efficiently handle reverse indexed bulk operations. As a result, the index range method performs much better for this use case and should be used instead. With its query time of only 88 milliseconds, it is fast enough for almost anything and likely so fast that you do not even need to show a loading spinner. It is also faster than fetching the query result from a server-side vector database over the internet. 
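If you want to reproduce such numbers for your own data and devices, a simple timing wrapper around the query functions from above is enough (a rough sketch, not the exact benchmark code used for this table):

const queryVector = await getEmbeddingFromText('new york people');

const start = performance.now();
const { result, docReads } = await vectorSearchIndexRange(queryVector);
const elapsed = performance.now() - start;

console.log('query time (ms):', elapsed);
console.log('docs read from storage:', docReads);
console.log('best match:', result[0]);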
","version":"Next","tagName":"h3"},{"title":"Performance of the Models","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#performance-of-the-models","content":" Let's also look at the time taken to calculate a single embedding across various models from the huggingface transformers list: Model Name\tTime per Embedding in (ms)\tVector Size\tModel Size (MB)Xenova/all-MiniLM-L6-v2\t173\t384\t23 Supabase/gte-small\t341\t384\t34 Xenova/paraphrase-multilingual-mpnet-base-v2\t1000\t768\t279 jinaai/jina-embeddings-v2-base-de\t1291\t768\t162 jinaai/jina-embeddings-v2-base-zh\t1437\t768\t162 jinaai/jina-embeddings-v2-base-code\t1769\t768\t162 mixedbread-ai/mxbai-embed-large-v1\t3359\t1024\t337 WhereIsAI/UAE-Large-V1\t3499\t1024\t337 Xenova/multilingual-e5-large\t4215\t1024\t562 From these benchmarks, it's evident that models with larger vector outputs take longer to process. Additionally, the model size significantly affects performance, with larger models requiring more time to compute embeddings. This trade-off between model complexity and performance must be considered when choosing the right model for your use case. ","version":"Next","tagName":"h3"},{"title":"Potential Performance Optimizations","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#potential-performance-optimizations","content":" There are multiple other techniques to improve the performance of your local vector database: Shorten embeddings: The storing and retrieval of embeddings can be improved by "shortening" the embedding. To do that, you just strip away numbers from your vector. For example [0.56, 0.12, -0.34, 0.78, -0.90] becomes [0.56, 0.12]. That's it, you now have a smaller embedding that is faster to read out of the storage and calculating distances is faster because it has to process less numbers. The downside is that you loose precision in your search results. Sometimes shortening the embeddings makes more sense as a pre-query step where you first compare the shortened vectors and later fetch the "real" vectors for the 10 most matching documents to improve their sort order. Optimize the variables in our Setup: In this examples we picked our variables in a non-optimal way. You can get huge performance improvements by setting different values: We picked 5 indexes for the embeddings. Using less indexes improves your query performance with the cost of less good results.For queries that search by fetching a specific embedding distance we used the indexDistance value of 0.003. Using a lower value means we read less document from the storage. This is faster but reduces the precision of the results which means we will get a less optimal result compared to a full table scan.For queries that search by fetching a given amount of documents per index side, we set the value docsPerIndexSide to 100. Increasing this value means you fetch more data from the storage but also get a better precision in the search results. Decreasing it can improve query performance with worse precision. Use faster models: There are many ways to improve performance of machine learning models. If your embedding calculation is too slow, try other models. Smaller mostly means faster. The model Xenova/all-MiniLM-L6-v2 which is used in this tutorial is about 1 year old. There exist better, more modern models to use. Huggingface makes these convenient to use. 
You only have to switch out the model name with any other model from that site. Narrow down the search space: By adding other "normal" filter operators to your query, you can narrow down the search space and optimize performance. For example in an email search you could additionally use an operator that limits the results to all emails that are not older than one year. Dimensionality Reduction with an autoencoder: An autoencoder encodes vector data with minimal loss, which can improve performance by having to store and compare fewer numbers per embedding. Different RxDB Plugins: RxDB has different storages and plugins that can improve the performance, like the IndexedDB RxStorage, the OPFS RxStorage, the sharding plugin and the Worker and SharedWorker storages. ","version":"Next","tagName":"h2"},{"title":"Migrating Data on Model/Index Changes","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#migrating-data-on-modelindex-changes","content":" When you change the index parameter or even update the whole model which was used to create the embeddings, you have to migrate the data that is already stored on your users' devices. RxDB offers the Schema Migration Plugin for that. When the app is reloaded and the updated source code is started, RxDB detects changes in your schema version and runs the migration strategy accordingly. So to update the stored data, increase the schema version and define a handler: const schemaV1 = { "version": 1, // <- increase schema version by 1 "primaryKey": "id", "properties": { /* ... */ }, /* ... */ }; In the migration handler we recreate the new embeddings and index values. Notice that the strategy must be an async function because it awaits the embedding model: await myDatabase.addCollections({ vectors: { schema: schemaV1, migrationStrategies: { 1: async function(docData){ const embedding = await getEmbedding(docData.body); new Array(5).fill(0).map((_, idx) => { docData['idx' + idx] = euclideanDistance(mySampleVectors[idx], embedding); }); return docData; }, } } }); ","version":"Next","tagName":"h2"},{"title":"Possible Future Improvements to Local-First Vector Databases","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#possible-future-improvements-to-local-first-vector-databases","content":" For now our vector database works and we are good to go. However there are some things to consider for the future: WebGPU is not fully supported yet. When this changes, creating embeddings in the browser has the potential to become faster. You can check if your current Chrome supports WebGPU by opening chrome://gpu/. Notice that WebGPU has been reported to sometimes be even slower compared to WASM but likely it will be faster in the long term.Cross-Modal AI Models: While progress is being made, AI models that can understand and integrate multiple modalities are still in development. For example you could query for an image together with a text prompt to get a more detailed output.Multi-Step queries: In this article we only talked about having a single query as input and an ordered list of outputs. But there is big potential in chaining models or queries together where you take the results of one query and input them into a different model with different embeddings or outputs. 
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"Local Vector Database with RxDB and transformers.js","url":"/articles/javascript-vector-database.html#follow-up","content":" Shared/Like my announcement tweetRead the source code that belongs to this article at githubLearn how to use RxDB with the RxDB QuickstartCheck out the RxDB github repo and leave a star ⭐ ","version":"Next","tagName":"h2"},{"title":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","type":0,"sectionRef":"#","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html","content":"","keywords":"","version":"Next"},{"title":"The available Storage APIs in a modern Browser","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#the-available-storage-apis-in-a-modern-browser","content":" First lets have a brief overview of the different APIs, their intentional use case and history: ","version":"Next","tagName":"h2"},{"title":"What are Cookies","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#what-are-cookies","content":" Cookies were first introduced by netscape in 1994. Cookies store small pieces of key-value data that are mainly used for session management, personalization, and tracking. Cookies can have several security settings like a time-to-live or the domain attribute to share the cookies between several subdomains. Cookies values are not only stored at the client but also sent with every http request to the server. This means we cannot store much data in a cookie but it is still interesting how good cookie access performance compared to the other methods. Especially because cookies are such an important base feature of the web, many performance optimizations have been done and even these days there is still progress being made like the Shared Memory Versioning by chromium or the asynchronous CookieStore API. ","version":"Next","tagName":"h3"},{"title":"What is LocalStorage","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#what-is-localstorage","content":" The localStorage API was first proposed as part of the WebStorage specification in 2009. LocalStorage provides a simple API to store key-value pairs inside of a web browser. It has the methods setItem, getItem, removeItem and clear which is all you need from a key-value store. LocalStorage is only suitable for storing small amounts of data that need to persist across sessions and it is limited by a 5MB storage cap. Storing complex data is only possible by transforming it into a string for example with JSON.stringify(). The API is not asynchronous which means if fully blocks your JavaScript process while doing stuff. Therefore running heavy operations on it might block your UI from rendering. There is also the SessionStorage API. The key difference is that localStorage data persists indefinitely until explicitly cleared, while sessionStorage data is cleared when the browser tab or window is closed. ","version":"Next","tagName":"h3"},{"title":"What is IndexedDB","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. 
WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#what-is-indexeddb","content":" IndexedDB was first introduced as "Indexed Database API" in 2015. IndexedDB is a low-level API for storing large amounts of structured JSON data. While the API is a bit hard to use, IndexedDB can utilize indexes and asynchronous operations. It lacks support for complex queries and only allows to iterate over the indexes which makes it more like a base layer for other libraries then a fully fledged database. In 2018, IndexedDB version 2.0 was introduced. This added some major improvements. Most noticeable the getAll() method which improves performance dramatically when fetching bulks of JSON documents. IndexedDB version 3.0 is in the workings which contains many improvements. Most important the addition of Promise based calls that makes modern JS features like async/await more useful. ","version":"Next","tagName":"h3"},{"title":"What is OPFS","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#what-is-opfs","content":" The Origin Private File System (OPFS) is a relatively new API that allows web applications to store large files directly in the browser. It is designed for data-intensive applications that want to write and read binary data in a simulated file system. OPFS can be used in two modes: Either asynchronous on the main threadOr in a WebWorker with the faster, asynchronous access with the createSyncAccessHandle() method. Because only binary data can be processed, OPFS is made to be a base filesystem for library developers. You will unlikely directly want to use the OPFS in your code when you build a "normal" application because it is too complex. That would only make sense for storing plain files like images, not to store and query JSON data efficiently. I have build a OPFS based storage for RxDB with proper indexing and querying and it took me several months. ","version":"Next","tagName":"h3"},{"title":"What is WASM SQLite","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#what-is-wasm-sqlite","content":" WebAssembly (Wasm) is a binary format that allows high-performance code execution on the web. Wasm was added to major browsers over the course of 2017 which opened a wide range of opportunities on what to run inside of a browser. You can compile native libraries to WebAssembly and just run them on the client with just a few adjustments. WASM code can be shipped to browser apps and generally runs much faster compared to JavaScript, but still about 10% slower then native. Many people started to use compiled SQLite as a database inside of the browser which is why it makes sense to also compare this setup to the native APIs. The compiled byte code of SQLite has a size of about 938.9 kB which must be downloaded and parsed by the users on the first page load. WASM cannot directly access any persistent storage API in the browser. Instead it requires data to flow from WASM to the main-thread and then can be put into one of the browser APIs. This is done with so called VFS (virtual file system) adapters that handle data access from SQLite to anything else. ","version":"Next","tagName":"h3"},{"title":"What was WebSQL","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. 
WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#what-was-websql","content":" WebSQL was a web API introduced in 2009 that allowed browsers to use SQL databases for client-side storage, based on SQLite. The idea was to give developers a way to store and query data using SQL on the client side, similar to server-side databases. WebSQL has been removed from browsers in the current years for multiple good reasons: WebSQL was not standardized and having an API based on a single specific implementation in form of the SQLite source code is hard to ever make it to a standard.WebSQL required browsers to use a specific version of SQLite (version 3.6.19) which means whenever there would be any update or bugfix to SQLite, it would not be possible to add that to WebSQL without possible breaking the web.Major browsers like firefox never supported WebSQL. Therefore in the following we will just ignore WebSQL even if it would be possible to run tests on in by setting specific browser flags or using old versions of chromium. ","version":"Next","tagName":"h3"},{"title":"Feature Comparison","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#feature-comparison","content":" Now that you know the basic concepts of the APIs, lets compare some specific features that have shown to be important for people using RxDB and browser based storages in general. ","version":"Next","tagName":"h2"},{"title":"Storing complex JSON Documents","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#storing-complex-json-documents","content":" When you store data in a web application, most often you want to store complex JSON documents and not only "normal" values like the integers and strings you store in a server side database. Only IndexedDB works with JSON objects natively.With SQLite WASM you can store JSON in a text column since version 3.38.0 (2022-02-22) and even run deep queries on it and use single attributes as indexes. Every of the other APIs can only store strings or binary data. Of course you can transform any JSON object to a string with JSON.stringify() but not having the JSON support in the API can make things complex when running queries and running JSON.stringify() many times can cause performance problems. ","version":"Next","tagName":"h3"},{"title":"Multi-Tab Support","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#multi-tab-support","content":" A big difference when building a Web App compared to Electron or React-Native, is that the user will open and close the app in multiple browser tabs at the same time. Therefore you have not only one JavaScript process running, but many of them can exist and might have to share state changes between each other to not show outdated data to the user. If your users' muscle memory puts the left hand on the F5 key while using your website, you did something wrong! Not all storage APIs support a way to automatically share write events between tabs. Only localstorage has a way to automatically share write events between tabs by the API itself with the storage-event which can be used to observe changes. // localStorage can observe changes with the storage event. 
// This feature is missing in IndexedDB and others addEventListener("storage", (event) => {}); There was the experimental IndexedDB observers API for Chrome, but the proposal repository has been archived. To work around this problem, there are two solutions: The first option is to use the BroadcastChannel API which can send messages across browser tabs. So whenever you do a write to the storage, you also send a notification to other tabs to inform them about these changes. This is the most common workaround, which is also used by RxDB. Notice that there is also the WebLocks API which can be used to have mutexes across browser tabs.The other solution is to use a SharedWorker and do all writes inside of the worker. All browser tabs can then subscribe to messages from that single SharedWorker and know about changes. ","version":"Next","tagName":"h3"},{"title":"Indexing Support","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#indexing-support","content":" The big difference between a database and storing data in a plain file is that a database writes data in a format that allows running operations over indexes to facilitate fast, performant queries. From our list of technologies only IndexedDB and WASM SQLite support indexing out of the box. In theory you can build indexes on top of any storage like localStorage or OPFS, but you likely do not want to do that yourself. In IndexedDB for example, we can fetch a bulk of documents by a given index range: // find all products with a price between 10 and 50 const keyRange = IDBKeyRange.bound(10, 50); const transaction = db.transaction('products', 'readonly'); const objectStore = transaction.objectStore('products'); const index = objectStore.index('priceIndex'); const request = index.getAll(keyRange); const result = await new Promise((res, rej) => { request.onsuccess = (event) => res(event.target.result); request.onerror = (event) => rej(event); }); Notice that IndexedDB has the limitation of not having indexes on boolean values. You can only index strings and numbers. To work around that, you have to transform booleans to numbers and back when storing the data. ","version":"Next","tagName":"h3"},{"title":"WebWorker Support","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#webworker-support","content":" When running heavy data operations, you might want to move the processing away from the JavaScript main thread. This ensures that your app stays responsive and fast while the processing runs in parallel in the background. In a browser you can either use the WebWorker, SharedWorker or the ServiceWorker API to do that. In RxDB you can use the WebWorker or SharedWorker plugins to move your storage inside of a worker. The most common API for that use case is spawning a WebWorker and doing most work on that second JavaScript process. The worker is spawned from a separate JavaScript file (or base64 string) and communicates with the main thread by sending data with postMessage(). Unfortunately, LocalStorage and Cookies cannot be used in a WebWorker or SharedWorker because of design and security constraints. WebWorkers run in a separate global context from the main browser thread and therefore cannot do things that might impact the main thread. 
They have no direct access to certain web APIs, like the DOM, localStorage, or cookies. Everything else can be used from inside a WebWorker. The fast version of OPFS with the createSyncAccessHandle method can only be used in a WebWorker, and not on the main thread. This is because all the operations of the returned AccessHandle are not async and therefore block the JavaScript process, so you do not want to do that on the main thread and block everything. ","version":"Next","tagName":"h3"},{"title":"Storage Size Limits","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#storage-size-limits","content":" Cookies are limited to about 4 KB of data in RFC-6265. Because the stored cookies are sent to the server with every HTTP request, this limitation is reasonable. You can test your browser's cookie limits here. Notice that you should never fill up the full 4 KB of your cookies because your webserver will not accept overly long headers and will reject the requests with HTTP ERROR 431 - Request header fields too large. Once you have reached that point you cannot even serve updated JavaScript to your user to clean up the cookies, and you will have locked out that user until the cookies get cleaned up manually. LocalStorage has a storage size limitation that varies depending on the browser, but generally ranges from 4 MB to 10 MB per origin. You can test your localStorage size limit here. Chrome/Chromium/Edge: 5 MB per domainFirefox: 10 MB per domainSafari: 4-5 MB per domain (varies slightly between versions) IndexedDB does not have a specific fixed size limitation like localStorage. The maximum storage size for IndexedDB depends on the browser implementation. The upper limit is typically based on the available disc space on the user's device. In Chromium browsers it can use up to 80% of total disk space. You can get an estimate of the storage size limit by calling await navigator.storage.estimate(). Typically you can store gigabytes of data, which can be tried out here. Notice that we have a full article about storage max size limits of IndexedDB that covers this topic. OPFS has the same storage size limitation as IndexedDB. Its limit depends on the available disc space. This can also be tested here. ","version":"Next","tagName":"h2"},{"title":"Performance Comparison","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#performance-comparison","content":" Now that we've reviewed the features of each storage method, let's dive into performance comparisons, focusing on initialization times, read/write latencies, and bulk operations. Notice that we only run simple tests, and the results might differ for your specific use case in your application. Also we only compare performance in Google Chrome (version 128.0.6613.137). Firefox and Safari have similar but not equal performance patterns. You can run the tests yourself on your own machine from this github repository. For all tests we throttle the network to behave like the average German internet speed (download: 135,900 kbit/s, upload: 28,400 kbit/s, latency: 125ms). Also all tests store an "average" JSON object that might be required to be stringified depending on the storage. 
We also only test the performance of storing documents by id, because some of the technologies (cookies, OPFS and localStorage) do not support indexed range operations, so it makes no sense to compare the performance of these. ","version":"Next","tagName":"h2"},{"title":"Initialization Time","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#initialization-time","content":" Before you can store any data, many APIs require a setup process like creating databases, spawning WebAssembly processes or downloading additional assets. To ensure your app starts fast, the initialization time is important. The APIs of localStorage and Cookies do not have any setup process and can be used directly. IndexedDB requires opening a database and a store inside of it. WASM SQLite needs to download a WASM file and process it. OPFS needs to download and start a worker file and initialize the virtual file system directory. Here are the time measurements of how long it takes until the first bit of data can be stored: Technology\tTime in MillisecondsIndexedDB\t46 OPFS Main Thread\t23 OPFS WebWorker\t26.8 WASM SQLite (memory)\t504 WASM SQLite (IndexedDB)\t535 Here we can notice a few things: Opening a new IndexedDB database with a single store takes surprisingly longThe latency overhead of sending data from the main thread to a WebWorker OPFS is about 4 milliseconds. Here we only send minimal data to init the OPFS file handler. It will be interesting if that latency increases when more data is processed.Downloading and parsing WASM SQLite and creating a single table takes about half a second. Also using the IndexedDB VFS to store data persistently adds an additional 31 milliseconds. Reloading the page with caching enabled and already prepared tables is a bit faster at 420 milliseconds (memory). ","version":"Next","tagName":"h3"},{"title":"Latency of small Writes","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#latency-of-small-writes","content":" Next let's test the latency of small writes. This is important when you do many small data changes that happen independently from each other, like when you stream data from a websocket or persist pseudo-randomly occurring events like mouse movements. Technology\tTime in MillisecondsCookies\t0.058 LocalStorage\t0.017 IndexedDB\t0.17 OPFS Main Thread\t1.46 OPFS WebWorker\t1.54 WASM SQLite (memory)\t0.17 WASM SQLite (IndexedDB)\t3.17 Here we can notice a few things: LocalStorage has the lowest write latency with only 0.017 milliseconds per write.IndexedDB writes are about 10 times slower compared to localStorage.Sending the data to the WASM SQLite process and letting it persist via IndexedDB is slow with over 3 milliseconds per write. The OPFS operations take about 1.5 milliseconds to write the JSON data into one document per file. We can see that sending the data to a WebWorker first is a bit slower, which comes from the overhead of serializing and deserializing the data on both sides. If we did not create one OPFS file per document but instead appended everything to a single file, the performance pattern would change significantly. Then the faster file handle from createSyncAccessHandle() only takes about 1 millisecond per write. But this would require us to somehow remember at which position each document is stored. 
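As a rough sketch of that single-file alternative (this is not the benchmark code of this article; the file name and the offset bookkeeping are assumptions), appending one document from inside a WebWorker could look like this:
// inside a WebWorker: append one JSON document to a single shared OPFS file
const doc = { id: 'doc-1', body: 'hello' }; // example document
const root = await navigator.storage.getDirectory();
const fileHandle = await root.getFileHandle('documents.bin', { create: true }); // assumed file name
const access = await fileHandle.createSyncAccessHandle();
const bytes = new TextEncoder().encode(JSON.stringify(doc));
const offset = access.getSize(); // current end of the file
access.write(bytes, { at: offset }); // append the document bytes
access.flush();
access.close();
// the offset and byte length would have to be tracked somewhere else to read the document back later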
Therefore in our tests we will continue using one file per document. ","version":"Next","tagName":"h3"},{"title":"Latency of small Reads","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#latency-of-small-reads","content":" Now that we have stored some documents, let's measure how long it takes to read single documents by their id. Technology\tTime in MillisecondsCookies\t0.132 LocalStorage\t0.0052 IndexedDB\t0.1 OPFS Main Thread\t1.28 OPFS WebWorker\t1.41 WASM SQLite (memory)\t0.45 WASM SQLite (IndexedDB)\t2.93 Here we can notice a few things: LocalStorage reads are extremely fast with only 0.0052 milliseconds per read.The other technologies perform reads at a speed similar to their write latency. ","version":"Next","tagName":"h3"},{"title":"Big Bulk Writes","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#big-bulk-writes","content":" As a next step, let's do some big bulk operations with 200 documents at once. Technology\tTime in MillisecondsCookies\t20.6 LocalStorage\t5.79 IndexedDB\t13.41 OPFS Main Thread\t280 OPFS WebWorker\t104 WASM SQLite (memory)\t19.1 WASM SQLite (IndexedDB)\t37.12 Here we can notice a few things: Sending the data to a WebWorker and running it via the faster OPFS API is about twice as fast.WASM SQLite performs better on bulk operations compared to its single write latency. This is because sending the data to WASM and back is faster if it is done all at once instead of once per document. ","version":"Next","tagName":"h3"},{"title":"Big Bulk Reads","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#big-bulk-reads","content":" Now let's read 100 documents in a bulk request. Technology\tTime in MillisecondsCookies\t6.34 LocalStorage\t0.39 IndexedDB\t4.99 OPFS Main Thread\t54.79 OPFS WebWorker\t25.61 WASM SQLite (memory)\t3.59 WASM SQLite (IndexedDB)\t5.84 (35ms without cache) Here we can notice a few things: Reading many files in the OPFS WebWorker is about twice as fast compared to the slower main thread mode.WASM SQLite is surprisingly fast. Further inspection has shown that the WASM SQLite process keeps the documents cached in memory, which improves the latency when we do reads directly after writes on the same data. When the browser tab is reloaded between the writes and the reads, finding the 100 documents takes about 35 milliseconds instead. ","version":"Next","tagName":"h3"},{"title":"Performance Conclusions","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#performance-conclusions","content":" LocalStorage is really fast, but remember that it has some downsides: It blocks the main JavaScript process and therefore should not be used for big bulk operations.Only key-value assignments are possible; you cannot use it efficiently when you need to do index-based range queries on your data. OPFS is way faster when used in the WebWorker with the createSyncAccessHandle() method compared to using it directly in the main thread.SQLite WASM can be fast, but you have to initially download the full binary and start it up, which takes about half a second. 
This might not be relevant at all if your app is started up once and then used for a very long time. But for web-apps that are opened and closed many times in many browser tabs, this might be a problem. ","version":"Next","tagName":"h2"},{"title":"Possible Improvements","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#possible-improvements","content":" There is a wide range of possible improvements and performance hacks to speed up the operations. For IndexedDB I have made a list of performance hacks here. For example you can do sharding between multiple databases and WebWorkers or use a custom index strategy.OPFS is slow at writing one file per document. But you do not have to do that; instead you can store everything in a single file like a normal database would do. This improves performance dramatically, as was done with the RxDB OPFS RxStorage.You can mix up the technologies to optimize for multiple scenarios at once. For example in RxDB there is the localstorage meta optimizer which stores initial metadata in localStorage and "normal" documents inside of IndexedDB. This improves the initial startup time while still having the documents stored in a way that allows querying them efficiently.There is the memory-mapped storage plugin in RxDB which maps data directly to memory. Using this in combination with a shared worker can improve page loads and query time significantly.Compressing data before storing it might improve the performance for some of the storages.Splitting work up between multiple WebWorkers via sharding can improve performance by utilizing the whole capacity of your user's device. Here you can see the performance comparison of various RxDB storage implementations which gives a better view of real world performance: ","version":"Next","tagName":"h2"},{"title":"Future Improvements","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#future-improvements","content":" You are reading this in 2024, but the web does not stand still. There is a good chance that browsers get enhanced to allow faster and better data operations. Currently there is no way to directly access persistent storage from inside a WebAssembly process. If this changes in the future, running SQLite (or a similar database) in a browser might be the best option.Sending data between the main thread and a WebWorker is slow but might be improved in the future. There is a good article about why postMessage() is slow.IndexedDB lately got support for storage buckets (Chrome only) which might improve performance. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. 
WASM-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm.html#follow-up","content":" Share my announcement tweet -->Reproduce the benchmarks at the github repoLearn how to use RxDB with the RxDB QuickstartCheck out the RxDB github repo and leave a star ⭐ ","version":"Next","tagName":"h2"},{"title":"Using localStorage in Modern Applications: A Comprehensive Guide","type":0,"sectionRef":"#","url":"/articles/localstorage.html","content":"","keywords":"","version":"Next"},{"title":"What is the localStorage API?","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#what-is-the-localstorage-api","content":" The localStorage API is a built-in feature of web browsers that enables web developers to store small amounts of data persistently on a user's device. It operates on a simple key-value basis, allowing developers to save strings, numbers, and other simple data types. This data remains available even after the user closes the browser or navigates away from the page. The API provides a convenient way to maintain state and store user preferences without relying on server-side storage. ","version":"Next","tagName":"h2"},{"title":"Exploring local storage Methods: A Practical Example","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#exploring-local-storage-methods-a-practical-example","content":" Let's dive into some hands-on code examples to better understand how to leverage the power of localStorage. The API offers several methods for interaction, including setItem, getItem, removeItem, and clear. Consider the following code snippet: // Storing data using setItem localStorage.setItem('username', 'john_doe'); // Retrieving data using getItem const storedUsername = localStorage.getItem('username'); // Removing data using removeItem localStorage.removeItem('username'); // Clearing all data localStorage.clear(); ","version":"Next","tagName":"h2"},{"title":"Storing Complex Data in JavaScript with JSON Serialization","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#storing-complex-data-in-javascript-with-json-serialization","content":" While js localStorage excels at handling simple key-value pairs, it also supports more intricate data storage through JSON serialization. By utilizing JSON.stringify and JSON.parse, you can store and retrieve structured data like objects and arrays. Here's an example of storing a document: const user = { name: 'Alice', age: 30, email: '[email protected]' }; // Storing a user object localStorage.setItem('user', JSON.stringify(user)); // Retrieving and parsing the user object const storedUser = JSON.parse(localStorage.getItem('user')); ","version":"Next","tagName":"h2"},{"title":"Understanding the Limitations of local storage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#understanding-the-limitations-of-local-storage","content":" Despite its convenience, localStorage does come with a set of limitations that developers should be aware of: Non-Async Blocking API: One significant drawback is that js localStorage operates as a non-async blocking API. 
This means that any operations performed on localStorage can potentially block the main thread, leading to slower application performance and a less responsive user experience.Limited Data Structure: Unlike more advanced databases, localStorage is limited to a simple key-value store. This restriction makes it unsuitable for storing complex data structures or managing relationships between data elements.Stringification Overhead: Storing JSON data in localStorage requires stringifying the data before storage and parsing it when retrieved. This process introduces performance overhead, potentially slowing down operations by up to 10 times.Lack of Indexing: localStorage lacks indexing capabilities, making it challenging to perform efficient searches or iterate over data based on specific criteria. This limitation can hinder applications that rely on complex data retrieval.Tab Blocking: In a multi-tab environment, one tab's localStorage operations can impact the performance of other tabs by monopolizing CPU resources. You can reproduce this behavior by opening this test file in two browser windows and triggering localStorage inserts in one of them. You will observe that the loading spinner gets stuck in both windows.Storage Limit: Browsers typically impose a storage limit of around 5 MiB for each origin's localStorage. ","version":"Next","tagName":"h2"},{"title":"Reasons to Still Use localStorage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#reasons-to-still-use-localstorage","content":" ","version":"Next","tagName":"h2"},{"title":"Is localStorage Slow?","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#is-localstorage-slow","content":" Contrary to concerns about performance, the localStorage API in JavaScript is surprisingly fast when compared to alternative storage solutions like IndexedDB or OPFS. It excels in handling small key-value assignments efficiently. Due to its simplicity and direct integration with browsers, accessing and modifying localStorage data incurs minimal overhead. For scenarios where quick and straightforward data storage is required, localStorage remains a viable option. For example RxDB uses localStorage in the localStorage meta optimizer to manage simple key-value pairs while storing the "normal" documents inside of another storage like IndexedDB. ","version":"Next","tagName":"h3"},{"title":"When Not to Use localStorage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#when-not-to-use-localstorage","content":" While localStorage offers convenience, it may not be suitable for every use case. Consider the following situations where alternatives might be more appropriate: Data Must Be Queryable: If your application relies heavily on querying data based on specific criteria, localStorage might not provide the necessary querying capabilities. Complex data retrieval might lead to inefficient code and slow performance.Big JSON Documents: Storing large JSON documents in localStorage can consume a significant amount of memory and degrade performance. It's essential to assess the size of the data you intend to store and consider more robust solutions for handling substantial datasets.Many Read/Write Operations: Excessive read and write operations on localStorage can lead to performance bottlenecks. 
Other storage solutions might offer better performance and scalability for applications that require frequent data manipulation.Lack of Persistence: If your application can function without persistent data across sessions, consider using in-memory data structures like new Map() or new Set(). These options offer speed and efficiency for transient data. ","version":"Next","tagName":"h2"},{"title":"What to use instead of the localStorage API in JavaScript","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#what-to-use-instead-of-the-localstorage-api-in-javascript","content":" ","version":"Next","tagName":"h2"},{"title":"localStorage vs IndexedDB","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-indexeddb","content":" While localStorage serves as a reliable storage solution for simpler data needs, it's essential to explore alternatives like IndexedDB when dealing with more complex requirements. IndexedDB is designed to store not only key-value pairs but also JSON documents. Unlike localStorage, which usually has a storage limit of around 5-10MB per domain, IndexedDB can handle significantly larger datasets. IndexDB with its support for indexing facilitates efficient querying, making range queries possible. However, it's worth noting that IndexedDB lacks observability, which is a feature unique to localStorage through the storage event. Also, complex queries can pose a challenge with IndexedDB, and while its performance is acceptable, IndexedDB can be too slow for some use cases. // localStorage can observe changes with the storage event. // This feature is missing in IndexedDB addEventListener("storage", (event) => {}); For those looking to harness the full power of IndexedDB with added capabilities, using wrapper libraries like RxDB is recommended. These libraries augment IndexedDB with features such as complex queries and observability, enhancing its usability for modern applications by providing a real database instead of only a key-value store. In summary when you compare IndexedDB vs localStorage, IndexedDB will win at any case where much data is handled while localStorage has better performance on small key-value datasets. ","version":"Next","tagName":"h3"},{"title":"File System API (OPFS)","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#file-system-api-opfs","content":" Another intriguing option is the OPFS (File System API). This API provides direct access to an origin-based, sandboxed filesystem which is highly optimized for performance and offers in-place write access to its content. OPFS offers impressive performance benefits. However, working with the OPFS API can be complex, and it's only accessible within a WebWorker. To simplify its usage and extend its capabilities, consider using a wrapper library like RxDB's OPFS RxStorage, which builds a comprehensive database on top of the OPFS API. This abstraction allows you to harness the power of the OPFS API without the intricacies of direct usage. 
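To give a feel for the raw API (independent of RxDB), here is a minimal sketch that writes and then reads back a file in the origin private file system on the main thread; the file name and content are just examples:
const root = await navigator.storage.getDirectory();
const fileHandle = await root.getFileHandle('example.txt', { create: true }); // example file name
const writable = await fileHandle.createWritable();
await writable.write('hello OPFS');
await writable.close();
const file = await fileHandle.getFile();
console.log(await file.text()); // 'hello OPFS'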
","version":"Next","tagName":"h3"},{"title":"localStorage vs Cookies","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-cookies","content":" Cookies, once a primary method of client-side data storage, have fallen out of favor in modern web development due to their limitations. While they can store data, they are about 100 times slower when compared to the localStorage API. Additionally, cookies are included in the HTTP header, which can impact network performance. As a result, cookies are not recommended for data storage purposes in contemporary web applications. ","version":"Next","tagName":"h3"},{"title":"localStorage vs WebSQL","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-websql","content":" WebSQL, despite offering a SQL-based interface for client-side data storage, is a deprecated technology and should be avoided. Its API has been phased out of modern browsers, and it lacks the robustness of alternatives like IndexedDB. Moreover, WebSQL tends to be around 10 times slower than IndexedDB, making it a suboptimal choice for applications that demand efficient data manipulation and retrieval. ","version":"Next","tagName":"h3"},{"title":"localStorage vs sessionStorage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-sessionstorage","content":" In scenarios where data persistence beyond a session is unnecessary, developers often turn to sessionStorage. This storage mechanism retains data only for the duration of a tab or browser session. It survives page reloads and restores, providing a handy solution for temporary data needs. However, it's important to note that sessionStorage is limited in scope and may not suit all use cases. ","version":"Next","tagName":"h3"},{"title":"AsyncStorage for React Native","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#asyncstorage-for-react-native","content":" For React Native developers, the AsyncStorage API is the go-to solution, mirroring the behavior of localStorage but with asynchronous support. Since not all JavaScript runtimes support localStorage, AsyncStorage offers a seamless alternative for data persistence in React Native applications. ","version":"Next","tagName":"h3"},{"title":"node-localstorage for Node.js","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#node-localstorage-for-nodejs","content":" Because native localStorage is absent in the Node.js JavaScript runtime, you will get the error ReferenceError: localStorage is not defined in Node.js or node based runtimes like Next.js. The node-localstorage npm package bridges the gap. This package replicates the browser's localStorage API within the Node.js environment, ensuring consistent and compatible data storage capabilities. ","version":"Next","tagName":"h3"},{"title":"localStorage in browser extensions","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-in-browser-extensions","content":" While browser extensions for chrome and firefox support the localStorage API, it is not recommended to use it in that context to store extension-related data. 
The browser will clear the data in many scenarios like when the users clear their browsing history. Instead the Extension Storage API should be used for browser extensions. In contrast to localStorage, the storage API works async and all operations return a Promise. Also it provides automatic sync to replicate data between all instances of that browser that the user is logged into. The storage API is even able to storage JSON-ifiable objects instead of plain strings. // Using the storage API in chrome await chrome.storage.local.set({ foobar: {nr: 1} }); const result = await chrome.storage.local.get('foobar'); console.log(result.foobar); // {nr: 1} ","version":"Next","tagName":"h2"},{"title":"localStorage in Deno and Bun","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-in-deno-and-bun","content":" The Deno JavaScript runtime has a working localStorage API so running localStorage.setItem() and the other methods, will just work and the locally stored data is persisted across multiple runs. Bun does not support the localStorage JavaScript API. Trying to use localStorage will error with ReferenceError: Can't find variable: localStorage. To store data locally in Bun, you could use the bun:sqlite module instead or directly use a in-JavaScript database with Bun support like RxDB. ","version":"Next","tagName":"h2"},{"title":"Conclusion: Choosing the Right Storage Solution","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#conclusion-choosing-the-right-storage-solution","content":" In the world of modern web development, localStorage serves as a valuable tool for lightweight data storage. Its simplicity and speed make it an excellent choice for small key-value assignments. However, as application complexity grows, developers must assess their storage needs carefully. For scenarios that demand advanced querying, complex data structures, or high-volume operations, alternatives like IndexedDB, wrapper libraries with additional features like RxDB, or platform-specific APIs offer more robust solutions. By understanding the strengths and limitations of various storage options, developers can make informed decisions that pave the way for efficient and scalable applications. ","version":"Next","tagName":"h2"},{"title":"Follow up","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#follow-up","content":" Learn how to store and query data with RxDB in the RxDB QuickstartWhy IndexedDB is slow and how to fix itRxStorage performance comparison ","version":"Next","tagName":"h2"},{"title":"RxDB – The Ultimate Offline Database with Sync and Encryption","type":0,"sectionRef":"#","url":"/articles/offline-database.html","content":"","keywords":"","version":"Next"},{"title":"Why Choose an Offline Database?","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#why-choose-an-offline-database","content":" Offline-first or local-first software stores data directly on the client device. This strategy isn’t just about surviving network outages; it also makes your application faster, more user-friendly, and better at handling multiple usage scenarios. ","version":"Next","tagName":"h2"},{"title":"1. 
Zero Loading Spinners","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#1-zero-loading-spinners","content":" Applications that call remote servers for every request inevitably show loading spinners. With an offline database, read and write operations happen locally—providing near-instant feedback. Users no longer stare at progress indicators or wait for server responses, resulting in a smoother and more fluid experience. ","version":"Next","tagName":"h3"},{"title":"2. Multi-Tab Consistency","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#2-multi-tab-consistency","content":" Many websites mishandle data across multiple browser tabs. In an offline database, all tabs share the same local datastore. If the user updates data in one tab (like completing a to-do item), changes instantly reflect in every other tab. This removes complex multi-window synchronization problems. ","version":"Next","tagName":"h3"},{"title":"3. Real-Time Data Feeds","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#3-real-time-data-feeds","content":" Apps that rely on a purely server-driven approach often show stale data unless they add a separate real-time push system (like websockets). Local-first solutions with built-in replication essentially get real-time updates “for free.” Once the server sends any changes, your local offline database updates—keeping your UI live and accurate. ","version":"Next","tagName":"h3"},{"title":"4. Reduced Server Load","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#4-reduced-server-load","content":" In a traditional app, every interaction triggers a request to the server, scaling up resource usage quickly as traffic grows. Offline-first setups replicate data to the client once, and subsequent local reads or writes do not stress the backend. Your server usage grows with the amount of data—rather than every user action—leading to more efficient scaling. ","version":"Next","tagName":"h3"},{"title":"5. Simpler Development: Fewer Endpoints, No Extra State Library","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#5-simpler-development-fewer-endpoints-no-extra-state-library","content":" Typical apps require numerous REST endpoints and possibly a client-side state manager (like Redux) to handle data flow. If you adopt an offline database, you can replicate nearly everything to the client. The local DB becomes your single source of truth, and you may skip advanced state libraries altogether. ","version":"Next","tagName":"h3"},{"title":"Introducing RxDB – A Powerful Offline Database Solution","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#introducing-rxdb--a-powerful-offline-database-solution","content":" RxDB (Reactive Database) is a NoSQL JavaScript database that lives entirely in your client environment. 
It’s optimized for: Offline-first usageReactive queries (your UI updates in real time)Flexible replication with various backendsField-level encryption to protect sensitive data You can run RxDB in: Browsers (IndexedDB, OPFS)Mobile hybrid apps (Ionic, Capacitor)Native modules (React Native)Desktop environments (Electron)Node.jsServers or Scripts Wherever your JavaScript executes, RxDB can serve as a robust offline database. ","version":"Next","tagName":"h2"},{"title":"Quick Setup Example","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#quick-setup-example","content":" Below is a short demo of how to create an RxDB database, add a collection, and observe a query. You can expand upon this to enable encryption or full sync. import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; async function initDB() { // Create a local offline database const db = await createRxDatabase({ name: 'myOfflineDB', storage: getRxStorageDexie() }); // Add collections await db.addCollections({ tasks: { schema: { title: 'tasks schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string' }, title: { type: 'string' }, done: { type: 'boolean' } } } } }); // Observe changes in real time db.tasks .find({ selector: { done: false } }) .$ // returns an observable that emits whenever the result set changes .subscribe(undoneTasks => { console.log('Currently undone tasks:', undoneTasks); }); return db; } Now the tasks collection is ready to store data offline. You could also replicate it to a backend, encrypt certain fields, or utilize more advanced features like conflict resolution. ","version":"Next","tagName":"h2"},{"title":"How Offline Sync Works in RxDB","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#how-offline-sync-works-in-rxdb","content":" RxDB uses a replication protocol that pushes local changes to the server and pulls remote updates back down. This ensures local data is always fresh and that the server has the latest offline edits once the device reconnects. Multiple Plugins exist to handle various backends or replication methods: CouchDB or PouchDBGoogle FirestoreGraphQL endpointsREST / HTTPWebSocket or WebRTC (for peer-to-peer sync) You pick the plugin that fits your stack, and RxDB handles everything from conflict detection to event emission, allowing you to focus on building your user-facing features. import { replicateRxCollection } from 'rxdb/plugins/replication'; replicateRxCollection({ collection: db.tasks, replicationIdentifier: 'tasks-sync', pull: { /* fetch updates from server */ }, push: { /* send local writes to server */ }, live: true // keep them in sync constantly }); ","version":"Next","tagName":"h2"},{"title":"Securing Your Offline Database with Encryption","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#securing-your-offline-database-with-encryption","content":" Local data can be a risk if it’s sensitive or personal. RxDB offers encryption plugins to keep specific document fields secure at rest. 
Encryption Example import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; async function initSecureDB() { // Wrap the Dexie storage with crypto-js encryption const encryptedStorage = wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageDexie() }); // Create database with a password const db = await createRxDatabase({ name: 'secureOfflineDB', storage: encryptedStorage, password: 'myTopSecretPassword' }); // Define an encrypted collection await db.addCollections({ userSecrets: { schema: { title: 'encrypted user data', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string' }, secretData: { type: 'string' } }, required: ['id'], encrypted: ['secretData'] // field is encrypted at rest } } }); return db; } When the device is off or the database file is extracted, secretData remains unreadable without the specified password. This ensures only authorized parties can access sensitive fields, even in offline scenarios. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB – The Ultimate Offline Database with Sync and Encryption","url":"/articles/offline-database.html#follow-up","content":" Integrating an offline database approach into your app delivers near-instant interactions, true multi-tab data consistency, automatic real-time updates, and reduced server dependencies. By choosing RxDB, you gain: Offline-first local storageFlexible replication to various backendsEncryption of sensitive fieldsReactive queries for real-time UI updates RxDB transforms how you build and scale apps—no more loading spinners, no more stale data, no more complicated offline handling. Everything is local, synced, and secured. Continue your learning path: Explore the RxDB EcosystemDive into additional features like Compression or advanced Conflict Handling to optimize your offline database. Learn More About Offline-FirstRead our Offline First documentation for a deeper understanding of why local-first architectures improve user experience and reduce server load. Join the CommunityHave questions or feedback? Connect with us on the RxDB Chat or open an issue on GitHub. Upgrade to PremiumIf you need high-performance features—like SQLite storage for mobile or the Web Crypto-based encryption plugin—consider our premium offerings. By adopting an offline database approach with RxDB, you unlock speed, reliability, and security for your applications—leading to a truly seamless user experience. ","version":"Next","tagName":"h2"},{"title":"Building an Optimistic UI with RxDB","type":0,"sectionRef":"#","url":"/articles/optimistic-ui.html","content":"","keywords":"","version":"Next"},{"title":"Benefits of an Optimistic UI","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#benefits-of-an-optimistic-ui","content":" Optimistic UIs offer a host of advantages, from improving the user experience to streamlining the underlying infrastructure. Below are some key reasons why an optimistic approach can revolutionize your application's performance and scalability. ","version":"Next","tagName":"h2"},{"title":"Better User Experience with Optimistic UI","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#better-user-experience-with-optimistic-ui","content":" No loading spinners, near-zero latency: Users perceive their actions as instant. 
Any actual network delays or slow server operations can be handled behind the scenes.Offline capability: Optimistic UI pairs perfectly with offline-first apps. Users can continue to interact with the application even when offline, and changes will be synced automatically once the network is available again. ","version":"Next","tagName":"h3"},{"title":"Better Scaling and Easier to Implement","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#better-scaling-and-easier-to-implement","content":" Fewer server endpoints: Instead of sending a separate HTTP request for every single user interaction, you can batch updates and sync them in bulk.Less server load: By handling changes locally and syncing in batches, you reduce the volume of server round-trips.Automated error handling: If a request fails or a document is in conflict, RxDB's replication mechanism can seamlessly retry and resolve conflicts in the background, without requiring a separate endpoint or manual user intervention. ","version":"Next","tagName":"h3"},{"title":"Building Optimistic UI Apps with RxDB","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#building-optimistic-ui-apps-with-rxdb","content":" Now that we know what an optimistic UI is, lets build one with RxDB. ","version":"Next","tagName":"h2"},{"title":"Local Database: The Backbone of an Optimistic UI","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#local-database-the-backbone-of-an-optimistic-ui","content":" A local database is the heart of an Optimistic UI. With RxDB, all application state is stored locally, ensuring seamless and instant updates. You can choose from multiple storage backends based on your runtime - check out RxDB Storage Options to see which engines (IndexedDB, SQLite, or custom) suit your environment best. Instant Writes: When users perform an action (like clicking a button or submitting a form), the changes are written directly to the local RxDB database. This immediate local write makes the UI feel snappy and removes the dependency on instantaneous server responses. Offline-First: Because data is managed locally, your app continues to operate smoothly even without an internet connection. Users can view, create, and update data at any time, assured that changes will sync automatically once they're back online. ","version":"Next","tagName":"h3"},{"title":"Real-Time UI Changes on Updates","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#real-time-ui-changes-on-updates","content":" RxDB's core is built around observables that react to any state changes - whether from local writes or incoming replication from the server. Automatic UI refresh: Any query or document subscription in RxDB automatically notifies your UI layer when data changes. There's no need to manually poll or refetch.Cross-tab updates: If you have the same RxDB database open in multiple browser tabs, changes in one tab instantly propagate to the others. Event-Reduce Algorithm: Under the hood, RxDB uses the event-reduce algorithm to minimize overhead. Instead of re-running expensive queries, RxDB calculates the smallest possible updates needed to keep query results accurate - further boosting real-time performance. 
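As a small sketch of what this looks like in code (db.tasks and renderTaskList are hypothetical names used only for illustration): 
// subscribe to a live query; the callback fires on local writes,
// on changes made in other browser tabs and on changes replicated from the server
const subscription = db.tasks
  .find({ selector: { done: false } })
  .$ // observable of the query results
  .subscribe(tasks => {
    renderTaskList(tasks); // hypothetical render function
  });
// call subscription.unsubscribe() when the component is destroyed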
","version":"Next","tagName":"h3"},{"title":"Replication with a Server","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#replication-with-a-server","content":" While local storage is key to an Optimistic UI, most applications ultimately need to sync with a remote back end. RxDB offers a powerful replication system that can sync your local data with virtually any server/database in the background: Incremental and real-time: RxDB continuously pushes local changes to the server when a network is available and fetches server updates as they happen.Conflict resolution: If changes happen offline or multiple clients update the same data, RxDB detects conflicts and makes it straightforward to resolve them.Flexible transport: Beyond simple HTTP polling, you can incorporate WebSockets, Server-Sent Events (SSE), or other protocols for instant, server-confirmed changes broadcast to all connected clients. See this guide to learn more. By combining local-first data handling with real-time synchronization, RxDB delivers most of what an Optimistic UI needs - right out of the box. The result is a seamless user experience where interactions never feel blocked by slow networks, and any conflicts or final validations are quietly handled in the background. Handling Offline Changes and Conflicts Offline-first approach: All writes are initially stored in the local database. When connectivity returns, RxDB's replication automatically pushes changes to the server.Conflict resolution: If multiple clients edit the same documents while offline, conflicts are automatically detected and can be resolved gracefully (more on conflicts below). WebSockets, SSE, or Beyond For truly real-time communication - where server-confirmed changes instantly reach all clients - you can go beyond simple HTTP polling. Use WebSockets, Server-Sent Events (SSE), or other streaming protocols to broadcast updates the moment they occur. This pattern excels in scenarios like chats, collaborative editors, or dynamic dashboards. To learn more about these protocols and their integration with RxDB, check out this guide. ","version":"Next","tagName":"h3"},{"title":"Optimistic UI in Various Frameworks","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#optimistic-ui-in-various-frameworks","content":" ","version":"Next","tagName":"h2"},{"title":"Angular Example","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#angular-example","content":" Angular's async pipe works smoothly with RxDB's observables. Suppose you have a myCollection of documents, you can directly subscribe in the template: <ul *ngIf="(myCollection.find().$ | async) as docs"> <li *ngFor="let doc of docs"> {{ doc.name }} </li> </ul> This snippet: Subscribes to myCollection.find().$, which emits live updates whenever documents in the collection change.Passes the emitted array of documents into docs.Renders each document in a list item, instantly reflecting any changes. ","version":"Next","tagName":"h3"},{"title":"React Example","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#react-example","content":" In React, you can utilize signals or other state management tools. 
For instance, if we have an RxDB extension that exposes a signal: import React from 'react'; function MyComponent({ myCollection }) { // .find().$$ provides a signal that updates whenever data changes const docsSignal = myCollection.find().$$; return ( <ul> {docsSignal.value.map((doc) => ( <li key={doc.id}>{doc.name}</li> ))} </ul> ); } export default MyComponent; When you call docsSignal.value or use a hook like useSignal, it pulls the latest value from the RxDB query. Whenever the collection updates, the signal emits the new data, and React re-renders the component instantly. ","version":"Next","tagName":"h3"},{"title":"Downsides of Optimistic UI Apps","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#downsides-of-optimistic-ui-apps","content":" While Optimistic UIs feel snappy, there are some caveats: Conflict Resolution: With an optimistic approach, multiple offline devices might edit the same data. When syncing back, conflicts occur that must be merged. RxDB uses revisions to detect and handle these conflicts. User Confusion: Users may see changes that haven't yet been confirmed by the server. If a subsequent server validation fails, the UI must revert to a previous state. Clear visual feedback or user notifications help reduce confusion. Server Compatibility: The server must be capable of storing and returning revision metadata (for instance, a timestamp or versioning system). Check out RxDB's replication docs for details on how to structure your back end. Storage Limits: Storing data in the client has practical size limits. IndexedDB or other client-side storages have constraints (though usually quite large). See storage comparisons. ","version":"Next","tagName":"h2"},{"title":"Conflict Resolution Strategies","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#conflict-resolution-strategies","content":" Last Write to Server Wins: The simplest possible method: whatever update reaches the server last overrides previous data. Good for non-critical data like “like” counts or ephemeral states.Revision-Based Merges: Use revision numbers or timestamps to track concurrent edits. Merge them intelligently by combining fields or choosing the latest sub-document changes. This is ideal for collaborative apps where you don't want to overwrite entire records.User Prompts: In certain workflows (e.g., shipping forms, e-commerce checkout), you may need to notify the user about conflicts and let them choose which version to keep.First Write to Server Wins (RxDB Default): RxDB's default approach is to let the first successful push define the latest version. Any incoming push with an outdated revision triggers a conflict that must be resolved on the client side. Learn more here. ","version":"Next","tagName":"h2"},{"title":"When (and When Not) to Use Optimistic UI","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#when-and-when-not-to-use-optimistic-ui","content":" When to Use Real-time interactions like chat apps, social feeds, or “Likes”. Situations where high success rates of operations are expected (most writes don't fail).Apps that need an offline-first approach or handle intermittent connectivity gracefully.
When Not to Use Large, complex transactions with high failure rates.Scenarios requiring heavy server validations or approvals (for example, financial transactions with complex rules).Workflows where immediate feedback could mislead users about an operation's success probability. Assessing Risk Consider the likelihood that a user's action might fail. If it's very low, optimistic UI is often best.If frequent failures or complex validations occur, consider a hybrid approach: partial optimistic updates for some actions, while more critical operations rely on immediate server confirmation. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"Building an Optimistic UI with RxDB","url":"/articles/optimistic-ui.html#follow-up","content":" Ready to start building your own Optimistic UI with RxDB? Here are some next steps: Do the RxDB QuickstartIf you're brand new to RxDB, the quickstart guide will walk you through installation and setting up your first project. Check Out the Demo AppA live RxDB Quickstart Demo showcases optimistic updates and real-time syncing. Explore the code to see how it works. Star the GitHub RepoShow your support for RxDB by starring the RxDB GitHub Repository. By combining RxDB's powerful offline-first capabilities with the principles of an Optimistic UI, you can deliver snappy, near-instant user interactions that keep your users engaged - no matter the network conditions. Get started today and give your users the experience they deserve! ","version":"Next","tagName":"h2"},{"title":"RxDB as a Database for Progressive Web Apps (PWA)","type":0,"sectionRef":"#","url":"/articles/progressive-web-app-database.html","content":"","keywords":"","version":"Next"},{"title":"What is a Progressive Web App","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#what-is-a-progressive-web-app","content":" Progressive Web Apps are the future of web development, seamlessly combining the best of both web and mobile app worlds. They can be easily installed on the user's home screen, function offline, and load at lightning speed. Unlike hybrid apps, PWAs offer a consistent user experience across platforms, making them a versatile choice for modern applications. PWAs bring a plethora of advantages to the table. They eliminate the hassle of app store installations and updates, reduce dependency on network connectivity, and prioritize fast loading times. By harnessing the power of service workers and intelligent caching mechanisms, PWAs ensure users can access content even in offline mode. Furthermore, PWAs are device-agnostic, seamlessly adapting to various devices, from desktops to smartphones. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Client-Side Database for PWAs","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#introducing-rxdb-as-a-client-side-database-for-pwas","content":" At the heart of PWAs lies efficient data management, and RxDB steps in as a reliable ally. As a client-side NoSQL database, RxDB seamlessly integrates into web applications, offering real-time data synchronization and manipulation capabilities. This article sheds light on the transformative potential of RxDB as it collaborates harmoniously with PWAs, enabling local-first strategies and elevating user interactions to a whole new level. 
","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#getting-started-with-rxdb","content":" RxDB emerges as a reactive, schema-based NoSQL database crafted explicitly for client-side applications. Its real-time data synchronization and responsiveness align seamlessly with the dynamic demands of modern PWAs. Local-First Approach The cornerstone of RxDB's philosophy is the local-first approach, empowering PWAs to prioritize data storage and manipulation on the client side. This paradigm ensures that PWAs remain functional even when offline, allowing users to access and interact with data seamlessly. RxDB bridges any gaps in data synchronization once network connectivity is restored. Observable Queries Observable queries (aka Live-Queries) serve as the engine of RxDB's dynamic capabilities. By leveraging these queries, PWAs can monitor and respond to data changes in real time. The result is an engaging user interface with instantaneous updates that captivate users and keep them engaged. await db.heroes.find({ selector: { healthpoints: { $gt: 0 } } }) .$ // the $ returns an observable that emits each time the result set of the query changes .subscribe(aliveHeroes => console.dir(aliveHeroes)); Multi-Tab Support RxDB extends its prowess to multi-tab scenarios, guaranteeing data consistency across different tabs or windows of the same PWA. This feature promotes a seamless transition between various sections of the application, while minimizing data conflicts. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in a Progressive Web App","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#using-rxdb-in-a-progressive-web-app","content":" Integrating RxDB into a Progressive Web App, driven by technologies like React, is a straightforward process. By configuring RxDB and installing the necessary packages, developers establish a solid foundation for robust data management within their PWA. ","version":"Next","tagName":"h3"},{"title":"Exploring Different RxStorage Layers","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#exploring-different-rxstorage-layers","content":" RxDB caters to diverse needs through its various RxStorage layers: Dexie.js RxStorage: Leveraging the capabilities of the Dexie.js library for storage.IndexedDB RxStorage: Tapping into the browser's IndexedDB for efficient data storage.OPFS RxStorage: Interfacing with the Offline-First Persistence System for seamless persistence.Memory RxStorage: Storing data in memory, ideal for temporary data requirements. This flexibility empowers developers to optimize data storage based on the unique needs of their PWA. Synchronizing Data with RxDB between PWA Clients and Servers To facilitate seamless data synchronization between PWA clients and servers, RxDB offers a range of replication options: RxDB Replication Algorithm: RxDB introduces its own replication algorithm, enabling efficient and reliable data synchronization between clients and servers. CouchDB Replication: Leveraging its roots in CouchDB, RxDB facilitates smooth data replication between clients and CouchDB servers, ensuring data consistency and synchronization across devices. Firestore Replication: RxDB synchronizes data with Google Firestore, a real-time cloud-hosted NoSQL database. 
This integration guarantees up-to-date data across different instances of the PWA. Peer-to-Peer (P2P) via WebRTC Replication: RxDB supports P2P replication, facilitating direct data synchronization between clients without intermediaries. This decentralized approach is invaluable in scenarios where server infrastructure is limited. ","version":"Next","tagName":"h2"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#advanced-rxdb-features-and-techniques","content":" ","version":"Next","tagName":"h2"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#encryption-of-local-data","content":" RxDB empowers PWAs with the ability to encrypt local data, enhancing data security and safeguarding sensitive information. This feature is indispensable for applications handling user credentials, financial transactions, and other confidential data. ","version":"Next","tagName":"h3"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#indexing-and-performance-optimization","content":" Performance optimization is a top priority for PWAs. RxDB addresses this concern by offering indexing options that expedite data retrieval, resulting in a snappier user interface and heightened responsiveness. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#json-key-compression","content":" RxDB introduces JSON key compression, a feature that reduces storage requirements. This optimization is particularly beneficial for PWAs dealing with substantial data volumes, enhancing overall efficiency and resource utilization. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#change-streams-and-event-handling","content":" RxDB introduces change streams, enabling PWAs to react to data changes in real time. This capability empowers dynamic updates to the user interface, promoting interactivity and engagement. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#conclusion","content":" In the ever-evolving landscape of web application development, Progressive Web Apps continue to redefine user experiences. RxDB emerges as a pivotal player, seamlessly integrating with PWAs and enhancing their capabilities. With features like the local-first approach, observable queries, replication mechanisms, and advanced encryption, RxDB empowers developers to create responsive, offline-capable, and data-driven PWAs. As the demand for sophisticated PWAs continues to surge, RxDB remains an indispensable tool for developers aiming to push the boundaries of innovation and redefine the standards of user engagement. By embracing RxDB, developers ensure their PWAs remain at the forefront of the digital revolution, offering seamless and immersive experiences to users around the world. 
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects.RxDB Progressive Web App in Angular Example ","version":"Next","tagName":"h2"},{"title":"RxDB as a Database for React Applications","type":0,"sectionRef":"#","url":"/articles/react-database.html","content":"","keywords":"","version":"Next"},{"title":"Introducing RxDB as a JavaScript Database","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#introducing-rxdb-as-a-javascript-database","content":" RxDB, a powerful JavaScript database, has garnered attention as an optimal solution for managing data in React applications. Built on top of the IndexedDB standard, RxDB combines the principles of reactive programming with database management. Its core features include reactive data handling, offline-first capabilities, and robust data replication. ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#what-is-rxdb","content":" RxDB, short for Reactive Database, is an open-source JavaScript database that seamlessly integrates reactive programming with database operations. It offers a comprehensive API for performing database actions and synchronizing data across clients and servers. RxDB's underlying philosophy revolves around observables, allowing developers to reactively manage data changes and create dynamic user interfaces. ","version":"Next","tagName":"h2"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#reactive-data-handling","content":" One of RxDB's standout features is its support for reactive data handling. Traditional databases often require manual intervention for data fetching and updating, leading to complex and error-prone code. RxDB, however, automatically notifies subscribers whenever data changes occur, eliminating the need for explicit data manipulation. This reactive approach simplifies code and enhances the responsiveness of React components. ","version":"Next","tagName":"h3"},{"title":"Local-First Approach","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#local-first-approach","content":" RxDB embraces a local-first methodology, enabling applications to function seamlessly even in offline scenarios. By storing data locally, RxDB ensures that users can interact with the application and make updates regardless of internet connectivity. Once the connection is reestablished, RxDB synchronizes the local changes with the remote database, maintaining data consistency across devices. 
","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#data-replication","content":" Data replication is a cornerstone of modern applications that require synchronization between multiple clients and servers. RxDB provides robust data replication mechanisms that facilitate real-time synchronization between different instances of the database. This ensures that changes made on one client are promptly propagated to others, contributing to a cohesive and unified user experience. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#observable-queries","content":" RxDB extends the concept of observables beyond data changes. It introduces observable queries, allowing developers to observe the results of database queries. This feature enables automatic updates to query results whenever relevant data changes occur. Observable queries simplify state management by eliminating the need to manually trigger updates in response to changing data. await db.heroes.find({ selector: { healthpoints: { $gt: 0 } } }) .$ // the $ returns an observable that emits each time the result set of the query changes .subscribe(aliveHeroes => console.dir(aliveHeroes)); ","version":"Next","tagName":"h3"},{"title":"Multi-Tab Support","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#multi-tab-support","content":" Web applications often operate in multiple browser tabs or windows. RxDB accommodates this scenario by offering built-in multi-tab support. It ensures that data changes made in one tab are efficiently propagated to other tabs, maintaining data consistency and providing a seamless experience for users interacting with the application across different tabs. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. Other React Database Options","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#rxdb-vs-other-react-database-options","content":" While considering database options for React applications, RxDB stands out due to its unique combination of reactive programming and database capabilities. Unlike traditional solutions such as IndexedDB or Web Storage, which provide basic data storage, RxDB offers a dedicated database solution with advanced features. Additionally, while state management libraries like Redux and MobX can be adapted for database use, RxDB provides an integrated solution specifically designed for handling data. ","version":"Next","tagName":"h3"},{"title":"IndexedDB in React and the Advantage of RxDB","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#indexeddb-in-react-and-the-advantage-of-rxdb","content":" Using IndexedDB directly in React can be challenging due to its low-level, callback-based API which doesn't align neatly with modern React's Promise and async/await patterns. This intricacy often leads to bulky and complex implementations for developers. Also, when used wrong, IndexedDB can have a worse performance profile then it could have. In contrast, RxDB, with the IndexedDB RxStorage and the Dexie.js RxStorage, abstracts these complexities, integrating reactive programming and providing a more streamlined experience for data management in React applications. 
Thus, RxDB offers a more intuitive approach, eliminating much of the manual overhead required with IndexedDB. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in a React Application","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#using-rxdb-in-a-react-application","content":" The process of integrating RxDB into a React application is straightforward. Begin by installing RxDB as a dependency:npm install rxdb rxjsOnce installed, RxDB can be imported and initialized within your React components. The following code snippet illustrates a basic setup: import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'heroesdb', // <- name storage: getRxStorageDexie(), // <- RxStorage password: 'myPassword', // <- password (optional) multiInstance: true, // <- multiInstance (optional, default: true) eventReduce: true, // <- eventReduce (optional, default: false) cleanupPolicy: {} // <- custom cleanup policy (optional) }); ","version":"Next","tagName":"h3"},{"title":"Using RxDB React Hooks","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#using-rxdb-react-hooks","content":" The rxdb-hooks package provides a set of React hooks that simplify data management within components. These hooks leverage RxDB's reactivity to automatically update components when data changes occur. The following example demonstrates the usage of the useRxCollection and useRxQuery hooks to query and observe a collection: const collection = useRxCollection('characters'); const query = collection.find().where('affiliation').equals('Jedi'); const { result: characters, isFetching, fetchMore, isExhausted, } = useRxQuery(query, { pageSize: 5, pagination: 'Infinite', }); if (isFetching) { return 'Loading...'; } return ( <CharacterList> {characters.map((character, index) => ( <Character character={character} key={index} /> ))} {!isExhausted && <button onClick={fetchMore}>load more</button>} </CharacterList> ); ","version":"Next","tagName":"h3"},{"title":"Different RxStorage Layers for RxDB","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB offers multiple storage layers, each backed by a different underlying technology. Developers can choose the storage layer that best suits their application's requirements. Some available options include: Dexie.js RxStorage: Built on top of Dexie.js, a popular IndexedDB wrapper.IndexedDB RxStorage: A storage layer built directly on IndexedDB, providing efficient data storage in modern browsers.OPFS RxStorage: Uses the Origin Private File System (OPFS) for fast, persistent storage in modern browsers.Memory RxStorage: Stores data in memory, primarily intended for testing and development purposes.SQLite RxStorage: Stores data in an SQLite database. Can be used in a browser with React by compiling SQLite to WebAssembly. Using SQLite in React might not be the best idea, because a compiled SQLite wasm file is about one megabyte of code that has to be downloaded and parsed by your users' browsers. Native browser APIs like IndexedDB and OPFS have proven to be a better database solution for browser-based React apps compared to SQLite.
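As a small sketch of how interchangeable these layers are (the test/browser split below is just one possible setup, not a prescribed pattern), the storage is chosen once at startup while collections and queries stay untouched:
import { createRxDatabase } from 'rxdb';
import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie';
import { getRxStorageMemory } from 'rxdb/plugins/storage-memory';

// pick the storage layer once; bundlers typically inline process.env.NODE_ENV at build time
const storage = process.env.NODE_ENV === 'test'
    ? getRxStorageMemory() // fast, non-persistent storage for unit tests
    : getRxStorageDexie(); // IndexedDB via Dexie.js in the browser

const db = await createRxDatabase({
    name: 'heroesdb',
    storage
});
Swapping in the OPFS or SQLite storage later only changes this one assignment.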
","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" The offline-first approach is a fundamental principle of RxDB's design. When dealing with client-server synchronization, RxDB ensures that changes made offline are captured and propagated to the server once connectivity is reestablished. This mechanism guarantees that data remains consistent across different client instances, even when operating in an occasionally connected environment. RxDB offers a range of replication plugins that facilitate data synchronization between clients and servers. These plugins support various synchronization strategies, such as one-way replication, two-way replication, and custom conflict resolution. Developers can select the appropriate plugin based on their application's synchronization requirements. ","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#advanced-rxdb-features-and-techniques","content":" Encryption of Local Data Security is paramount when handling sensitive user data. RxDB supports data encryption, ensuring that locally stored information remains protected from unauthorized access. This feature is particularly valuable when dealing with sensitive data in offline scenarios. ","version":"Next","tagName":"h3"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#indexing-and-performance-optimization","content":" Efficient indexing is critical for achieving optimal database performance. RxDB provides mechanisms to define indexes on specific fields, enhancing query speed and reducing the computational overhead of data retrieval. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#json-key-compression","content":" RxDB employs JSON key compression to reduce storage space and improve performance. This technique minimizes the memory footprint of the database, making it suitable for applications with limited resources. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#change-streams-and-event-handling","content":" RxDB enables developers to subscribe to change streams, which emit events whenever data changes occur. This functionality facilitates real-time event handling and provides opportunities for implementing features such as notifications and live updates. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#conclusion","content":" In the realm of React application development, efficient data management is pivotal to delivering a seamless and engaging user experience. RxDB emerges as a compelling solution, seamlessly integrating reactive programming principles with sophisticated database capabilities. By adopting RxDB, React developers can harness its powerful features, including reactive data handling, offline-first support, and real-time synchronization. 
With RxDB as a foundational pillar, React applications can excel in responsiveness, scalability, and data integrity. As the landscape of web development continues to evolve, RxDB remains a steadfast companion for creating robust and dynamic React applications. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects.RxDB React Example at GitHub ","version":"Next","tagName":"h2"},{"title":"IndexedDB Database in React Apps - The Power of RxDB","type":0,"sectionRef":"#","url":"/articles/react-indexeddb.html","content":"","keywords":"","version":"Next"},{"title":"What is IndexedDB?","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#what-is-indexeddb","content":" IndexedDB is a low-level API for storing significant amounts of structured data in the browser. It provides a transactional database system that can store key-value pairs, complex objects, and more. This storage engine is asynchronous and supports advanced data types, making it suitable for offline storage and complex web applications. ","version":"Next","tagName":"h2"},{"title":"Why Use IndexedDB in React","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#why-use-indexeddb-in-react","content":" When building React applications, IndexedDB can play a crucial role in enhancing both performance and user experience. Here are some reasons to consider using IndexedDB: Offline-First / Local-First: By storing data locally, your application remains functional even without an internet connection.Performance: Using local data means zero latency and no loading spinners, as data doesn't need to be fetched over a network.Easier Implementation: Replicating all data to the client once is often simpler than implementing multiple endpoints for each user interaction.Scalability: Local data reduces server load because queries run on the client side, decreasing server bandwidth and processing requirements. 
","version":"Next","tagName":"h2"},{"title":"Why To Not Use Plain IndexedDB","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#why-to-not-use-plain-indexeddb","content":" While IndexedDB itself is powerful, its native API comes with several drawbacks for everyday application developers: Callback-Based API: IndexedDB was designed with callbacks rather than modern Promises, making asynchronous code more cumbersome.Complexity: IndexedDB is low-level, intended for library developers rather than for app developers who simply want to store data.Basic Query API: Its rudimentary query capabilities limit how you can efficiently perform complex queries, whereas libraries like RxDB offer more advanced query features.TypeScript Support: Ensuring good TypeScript support with IndexedDB is challenging, especially when trying to enforce document type consistency.Lack of Observable API: IndexedDB doesn't provide an observable API out of the box. RxDB solves this by enabling you to observe query results or specific document fields.Cross-Tab Communication: Managing cross-tab updates in plain IndexedDB is difficult. RxDB handles this seamlessly-changes in one tab automatically affect observed data in others.Missing Advanced Features: Features like encryption or compression aren't built into IndexedDB, but they are available via RxDB.Limited Platform Support: IndexedDB exists only in the browser. In contrast, RxDB offers swappable storages to use the same code in React Native, Capacitor, or Electron. ","version":"Next","tagName":"h2"},{"title":"Set up RxDB in React","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#set-up-rxdb-in-react","content":" Setting up RxDB with React is straightforward. It abstracts IndexedDB complexities and adds a layer of powerful features over it. 
","version":"Next","tagName":"h2"},{"title":"Installing RxDB","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#installing-rxdb","content":" First, install RxDB and RxJS from npm: npm install rxdb rxjs --save``` ","version":"Next","tagName":"h3"},{"title":"Create a Database and Collections","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#create-a-database-and-collections","content":" RxDB provides two main storage options: The free Dexie.js-based storageThe premium plain IndexedDB-based storage, offering faster performance Below is an example of setting up a simple RxDB database using the Dexie.js-based storage in a React app: import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // create a database const db = await createRxDatabase({ name: 'heroesdb', // the name of the database storage: getRxStorageDexie() }); // Define your schema const heroSchema = { title: 'hero schema', version: 0, description: 'Describes a hero in your app', primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, name: { type: 'string' }, power: { type: 'string' } }, required: ['id', 'name'] }; // add collections await db.addCollections({ heroes: { schema: heroSchema } }); ","version":"Next","tagName":"h3"},{"title":"CRUD Operations","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#crud-operations","content":" Once your database is initialized, you can perform all CRUD operations: // insert await db.heroes.insert({ name: 'Iron Man', power: 'Genius-level intellect' }); // bulk insert await db.heroes.bulkInsert([ { name: 'Thor', power: 'God of Thunder' }, { name: 'Hulk', power: 'Superhuman Strength' } ]); // find and findOne const heroes = await db.heroes.find().exec(); const ironMan = await db.heroes.findOne({ selector: { name: 'Iron Man' } }).exec(); // update const doc = await db.heroes.findOne({ selector: { name: 'Hulk' } }).exec(); await doc.update({ $set: { power: 'Unlimited Strength' } }); // delete const doc = await db.heroes.findOne({ selector: { name: 'Thor' } }).exec(); await doc.remove(); ","version":"Next","tagName":"h3"},{"title":"Reactive Queries and Live Updates","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#reactive-queries-and-live-updates","content":" RxDB excels in providing reactive data capabilities, ideal for real-time applications. There are two main approaches to achieving live queries with RxDB: using RxJS Observables with React Hooks or utilizing Preact Signals. ","version":"Next","tagName":"h2"},{"title":"With RxJS Observables and React Hooks","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#with-rxjs-observables-and-react-hooks","content":" RxDB integrates seamlessly with RxJS Observables, allowing you to build reactive components. 
Here's an example of a React component that subscribes to live data updates: import { useState, useEffect } from 'react'; function HeroList({ collection }) { const [heroes, setHeroes] = useState([]); useEffect(() => { // create an observable query const query = collection.find(); const subscription = query.$.subscribe(newHeroes => { setHeroes(newHeroes); }); return () => subscription.unsubscribe(); }, [collection]); return ( <div> <h2>Hero List</h2> <ul> {heroes.map(hero => ( <li key={hero.id}> <strong>{hero.name}</strong> - {hero.power} </li> ))} </ul> </div> ); } This component subscribes to the collection's changes, updating the UI automatically whenever the underlying data changes, even across browser tabs. ","version":"Next","tagName":"h3"},{"title":"With Preact Signals","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#with-preact-signals","content":" RxDB also supports Preact Signals for reactivity, which can be integrated into React applications via a premium plugin. Preact Signals offer a modern, fine-grained reactivity model. First, install the necessary package: npm install @preact/signals-core --save Set up RxDB with Preact Signals reactivity: import { PreactSignalsRxReactivityFactory } from 'rxdb-premium/plugins/reactivity-preact-signals'; import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: PreactSignalsRxReactivityFactory }); Now, you can obtain signals directly from RxDB queries using the double-dollar sign ($$): function HeroList({ collection }) { const heroes = collection.find().$$; return ( <ul> {heroes.value.map(hero => ( <li key={hero.id}>{hero.name}</li> ))} </ul> ); } This approach provides automatic updates whenever the data changes, without needing to manage subscriptions manually. ","version":"Next","tagName":"h3"},{"title":"React IndexedDB Example with RxDB","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#react-indexeddb-example-with-rxdb","content":" A comprehensive example of using RxDB within a React application can be found in the RxDB GitHub repository. This repository contains sample applications, showcasing best practices and demonstrating how to integrate RxDB for various use cases. ","version":"Next","tagName":"h2"},{"title":"Advanced RxDB Features","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#advanced-rxdb-features","content":" RxDB offers many advanced features that extend beyond basic data storage: RxDB Replication: Synchronize local data with remote databases seamlessly. Learn more: RxDB ReplicationData Migration: Handle schema changes gracefully with automatic data migrations. See: Data migrationEncryption: Secure your data with built-in encryption capabilities. Explore: EncryptionCompression: Optimize storage using key compression. Details: Compression ","version":"Next","tagName":"h2"},{"title":"Limitations of IndexedDB","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#limitations-of-indexeddb","content":" While IndexedDB is powerful, it has some inherent limitations: Performance: IndexedDB can be slow under certain conditions. Read more: Slow IndexedDBStorage Limits: Browsers impose limits on how much data can be stored.
See: Browser storage limits ","version":"Next","tagName":"h2"},{"title":"Alternatives to IndexedDB","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#alternatives-to-indexeddb","content":" Depending on your application's requirements, there are alternative storage solutions to consider: Origin Private File System (OPFS): A newer API that can offer better performance. RxDB supports OPFS as well. More info: RxDB OPFS StorageSQLite: Ideal for React applications on Capacitor or Ionic, offering native performance. Explore: RxDB SQLite Storage ","version":"Next","tagName":"h2"},{"title":"Performance comparison with other browser storages","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#performance-comparison-with-other-browser-storages","content":" Here is a performance overview of the various browser-based storage implementations of RxDB: ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"IndexedDB Database in React Apps - The Power of RxDB","url":"/articles/react-indexeddb.html#follow-up","content":" Learn how to use RxDB with the RxDB Quickstart for a guided introduction.Check out the RxDB GitHub repository and leave a star ⭐ if you find it useful. By leveraging RxDB on top of IndexedDB, you can create highly responsive, offline-capable React applications without dealing with the low-level complexities of IndexedDB directly. With reactive queries, seamless cross-tab communication, and powerful advanced features, RxDB becomes an invaluable tool in modern web development. ","version":"Next","tagName":"h2"},{"title":"What is a realtime database?","type":0,"sectionRef":"#","url":"/articles/realtime-database.html","content":"","keywords":"","version":"Next"},{"title":"Realtime as in realtime computing","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#realtime-as-in-realtime-computing","content":" When "normal" developers hear the word "realtime", they think of Real-time computing (RTC). Real-time computing is a type of computer processing that guarantees specific response times for tasks or events, crucial in applications like industrial control, automotive systems, and aerospace. It relies on specialized operating systems (RTOS) to ensure predictability and low latency. Hard real-time systems must never miss deadlines, while soft real-time systems can tolerate occasional delays. Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds. Consider the role of real-time computing in car airbags: sensors detect collision force, swiftly process the data, and immediately decide to deploy the airbags within milliseconds. Such rapid action is imperative for safeguarding passengers. Hence, the controlling chip must guarantee a certain response time - it must operate in "realtime". But when people talk about realtime databases, especially in the web-development world, they almost never mean realtime as in realtime computing - they mean something else. In fact, with any programming language that runs on end users' devices, it is not even possible to build a "real" realtime database. A program, like a JavaScript (browser or Node.js) process, can be halted by the operating system's task manager at any time and therefore it will never be able to guarantee specific response times. To build a realtime computing database, you would need a realtime capable operating system.
","version":"Next","tagName":"h2"},{"title":"Real time Database as in realtime replication","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#real-time-database-as-in-realtime-replication","content":" When talking about realtime databases, most people refer to realtime, as in realtime replication. Often they mean a very specific product which is the Firebase Realtime Database (not the Firestore). In the context of the Firebase Realtime Database, "realtime" means that data changes are synchronized and delivered to all connected clients or devices as soon as they occur, typically within milliseconds. This means that when any client updates, adds, or removes data in the database, all other clients that are connected to the same database instance receive those updates instantly, without the need for manual polling or frequent HTTP requests. In short, when replicating data between databases, instead of polling, we use a websocket connection to live-stream all changes between the server and the clients, this is labeled as "realtime database". A similar thing can be done with RxDB and the RxDB Replication Plugins. ","version":"Next","tagName":"h2"},{"title":"Realtime as in realtime applications","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#realtime-as-in-realtime-applications","content":" In the context of realtime client-side applications, "realtime" refers to the immediate or near-instantaneous processing and response to events or data inputs. When data changes, the application must directly update to reflect the new data state, without any user interaction or delay. Notice that the change to the data could have come from any source, like a user action, an operation in another browser tab, or even an operation from another device that has been replicated to the client. In contrast to push-pull based databases (e.g., MySQL or MongoDB servers), a realtime database contains features which make it easy to build realtime applications. For example with RxDB you can not only fetch query results once, but instead you can subscribe to a query and directly update the HTML dom tree whenever the query has a new result set: await db.heroes.find({ selector: { healthpoints: { $gt: 0 } } }) .$ // The $ returns an observable that emits whenever the query's result set changes. .subscribe(aliveHeroes => { // Refresh the HTML list each time there are new query results. const newContent = aliveHeroes.map(doc => '<li>' + doc.name + '</li>'); document.getElementById('#myList').innerHTML = newContent; }); // You can even subscribe to any RxDB document's fields. myDocument.firstName$.subscribe(newName => console.log('name is: ' + newName)); A competent realtime application is engineered to offer feedback or results swiftly, ideally within milliseconds to microseconds. Ideally, a data modification should be processed in under 16 milliseconds (since 1 second divided by 60 frames equals 16.66ms) to ensure users don't perceive any lag from input to visualization. RxDB utilizes the EventReduce algorithm to manage changes more swiftly than 16ms. However, it can never assure fixed response times as a "realtime computing database" would. 
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#follow-up","content":" Dive into the RxDB QuickstartDiscover more about the RxDB realtime replication protocolJoin the conversation at RxDB Chat ","version":"Next","tagName":"h2"},{"title":"React Native Encryption and Encrypted Database/Storage","type":0,"sectionRef":"#","url":"/articles/react-native-encryption.html","content":"","keywords":"","version":"Next"},{"title":"🔒 Why Encryption Matters","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#-why-encryption-matters","content":" Encryption ensures that, even if an unauthorized party obtains physical access to your device or intercepts data, they cannot read the information without the encryption key. Sensitive user data such as credentials, personal information, or financial details should always be encrypted. Proper encryption practices reduce the risk of data breaches and help your application remain compliant with regulations like GDPR or HIPAA. ","version":"Next","tagName":"h2"},{"title":"React Native Encryption Overview","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#react-native-encryption-overview","content":" React Native supports multiple ways to secure local data: Encrypted DatabasesUse databases with built-in encryption capabilities, such as SQLite with encryption layers or RxDB with its encryption plugin. Secure Storage LibrariesFor key-value data (like tokens or secrets), you can use libraries like react-native-keychain or react-native-encrypted-storage. Custom EncryptionIf you need more fine-grained control, you can integrate libraries like crypto-js or the Web Crypto API to encrypt data before storing it in a database or file. ","version":"Next","tagName":"h2"},{"title":"Setting Up Encryption in RxDB for React Native","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#setting-up-encryption-in-rxdb-for-react-native","content":" ","version":"Next","tagName":"h2"},{"title":"1. Install RxDB and Required Plugins","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#1-install-rxdb-and-required-plugins","content":" Install RxDB and the encryption plugin(s) you need. For the CryptoJS plugin: npm install rxdb npm install crypto-js ","version":"Next","tagName":"h3"},{"title":"2. Set Up Your RxDB Database with Encryption","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#2-set-up-your-rxdb-database-with-encryption","content":" RxDB offers two encryption plugins: CryptoJS Plugin: A free and straightforward solution for most basic use cases.Web Crypto Plugin: A premium plugin that utilizes the native Web Crypto API for better performance and security. Below is an example showing how to set up RxDB using the CryptoJS plugin. This example uses the in-memory storage for testing purposes. In a real production scenario, you would use a persistent storage adapter, mostly the SQLite-based storage. import { createRxDatabase } from 'rxdb'; import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; /* * For testing, we use the in-memory storage of RxDB. 
* In production you would use the persistent SQLite-based storage instead. */ import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; async function initEncryptedDatabase() { // Wrap the normal storage with the encryption plugin const encryptedMemoryStorage = wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageMemory() }); // Create an encrypted database const db = await createRxDatabase({ name: 'myEncryptedDatabase', storage: encryptedMemoryStorage, password: 'sudoLetMeIn' // Make sure not to hardcode in production }); // Define a schema and create a collection await db.addCollections({ secureData: { schema: { title: 'secure data schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, normalField: { type: 'string' }, secretField: { type: 'string' } }, required: ['id', 'normalField', 'secretField'], encrypted: ['secretField'] // fields listed here are stored encrypted at rest } } }); return db; } ","version":"Next","tagName":"h3"},{"title":"3. Inserting and Querying Encrypted Data","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#3-inserting-and-querying-encrypted-data","content":" Once you've set up the database with encryption, data in fields specified by your schema will be encrypted automatically before it is stored, and decrypted when queried. (async () => { const db = await initEncryptedDatabase(); // Insert encrypted data const doc = await db.secureData.insert({ id: 'mySecretId', normalField: 'foobar', secretField: 'This is top secret data' }); // Query encrypted data by its primary key or non-encrypted fields const fetchedDoc = await db.secureData.findOne({ selector: { normalField: 'foobar' } }).exec(true); console.log(fetchedDoc.secretField); // 'This is top secret data' // Update data await fetchedDoc.patch({ secretField: 'Updated secret data' }); })(); Note: You can only query directly by non-encrypted fields or primary keys. Encrypted fields cannot be used in queries because they are stored as ciphertext in the database. A common approach is to have a small subset of fields that need to be queried unencrypted while storing any sensitive data in encrypted fields. ","version":"Next","tagName":"h3"},{"title":"Best Practices for React Native Encryption","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#best-practices-for-react-native-encryption","content":" Secure Password Handling Avoid hardcoding passwords or encryption keys.Use secure storage solutions like React Native Keychain or react-native-encrypted-storage to fetch the database password at runtime: // Example: using react-native-keychain to securely retrieve a stored password import * as Keychain from 'react-native-keychain'; async function getDatabasePassword() { const credentials = await Keychain.getGenericPassword(); if (credentials) { return credentials.password; } throw new Error('No password stored in Keychain'); } Encrypt Attachments: If you need to store files (images, text files, etc.), consider encrypting attachments.
RxDB supports attachments (enabled via the attachments plugin) that can be encrypted automatically, ensuring your files are protected: import { createBlob } from 'rxdb/plugins/core'; const doc = await db.secureData.findOne({ selector: { normalField: 'foobar' } }).exec(true); const attachment = await doc.putAttachment({ id: 'encryptedFile.txt', data: createBlob('Sensitive content', 'text/plain'), type: 'text/plain', }); Optimize Performance If performance is critical, consider using the premium Web Crypto plugin, which leverages native APIs for faster encryption and decryption.If big chunks of data are encrypted, store them in attachments instead of document fields. Attachments will only be decrypted on explicit fetches, not during queries. Use DevMode in Development: RxDB's DevMode Plugin can help validate your schema and encryption setup during development. Disable it in production for performance reasons. Secure Communication: Use HTTPS to secure network communication between the app and any backend services.If you're synchronizing data to a server, ensure the data is also encrypted in transit. RxDB's replication plugins can work with secure endpoints to keep data consistent. SSL Pinning: Consider SSL Pinning if you want to prevent man-in-the-middle attacks. SSL Pinning ensures the device trusts only the pinned certificate, preventing attackers from swapping out valid certificates with their own. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"React Native Encryption and Encrypted Database/Storage","url":"/articles/react-native-encryption.html#follow-up","content":" Learn how to use RxDB with the RxDB Quickstart for a guided introduction.A good way to learn how to use RxDB with React Native is to check out the RxDB React Native example and use that as a tutorial.Check out the RxDB GitHub repository and leave a star ⭐ if you find it useful.Learn more about the RxDB encryption plugins. By following these best practices and leveraging RxDB's powerful encryption plugins, you can build secure, performant, and robust React Native applications that keep your users' data safe. ","version":"Next","tagName":"h2"},{"title":"RxDB as a Database in a Vue Application","type":0,"sectionRef":"#","url":"/articles/vue-database.html","content":"","keywords":"","version":"Next"},{"title":"Why Vue Applications Need a Database","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#why-vue-applications-need-a-database","content":" Vue is renowned for its lightweight core and flexible architecture centered around reactive state management and reusable components. However, modern Vue applications often require: Offline Capabilities: Allowing users to continue working even without internet access.Real-Time Updates: Keeping UI data in sync with changes as they occur, whether locally or from other connected clients.Improved Performance: Reducing server round trips and leveraging local storage for faster data operations.Scalable Data Handling: Managing increasingly large datasets or complex queries right in the browser. While you can store data in Vuex/Pinia stores or via direct AJAX calls, these solutions may not suffice when your application demands a full-featured offline-first database or complex synchronization with a server. RxDB addresses these needs with a dedicated, reactive, browser-based database that pairs seamlessly with Vue's reactivity system.
","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Database Solution","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#introducing-rxdb-as-a-database-solution","content":" RxDB - short for Reactive Database - is built on the principle of combining NoSQL database capabilities with reactive programming. It runs inside your client-side environment (browser, Node.js, or mobile devices) and provides: Real-Time Reactivity: Automatically updates subscribed components whenever data changes.Offline-First Approach: Stores data locally and syncs with the server when online connectivity is restored.Data Replication: Effortlessly keeps data synchronized across multiple tabs, devices, or server instances.Multi-Tab Support: Seamlessly propagates changes to all open tabs in the user's browser.Observable Queries: Automatically refresh the result set when documents in your queried collection change. ","version":"Next","tagName":"h2"},{"title":"RxDB vs. Other Vue Database Options","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#rxdb-vs-other-vue-database-options","content":" Compared to traditional approaches - like raw IndexedDB or local storage - RxDB adds a powerful, reactive layer that simplifies your data flow. While tools like Vuex or Pinia are great for state management, they are not fully fledged databases with features like replication, conflict resolution, and offline persistence. RxDB bridges the gap by providing an integrated data handling solution tailor-made for modern, data-intensive Vue applications. ","version":"Next","tagName":"h3"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#getting-started-with-rxdb","content":" Let's break down the essentials for using RxDB within a Vue application. ","version":"Next","tagName":"h2"},{"title":"Installation","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#installation","content":" You can install RxDB (and RxJS, which it depends on) via npm or yarn: npm install rxdb rxjs ","version":"Next","tagName":"h3"},{"title":"Creating and Configuring Your Database","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#creating-and-configuring-your-database","content":" Within your Vue project, you can set up an RxDB instance in a dedicated file or a Vue plugin. Below is an example using Dexie as the storage engine: // db.js import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; export async function initDatabase() { const db = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), password: 'myPassword', // optional encryption password multiInstance: true, // multi-tab support eventReduce: true // optimize event handling }); await db.addCollections({ hero: { schema: { title: 'hero schema', version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string' }, name: { type: 'string' }, healthpoints: { type: 'number' } } } } }); return db; } After creating the RxDB instance, you can share it across your application (for example, by providing it in a plugin or a global property in Vue). 
","version":"Next","tagName":"h2"},{"title":"Vue Reactivity and RxDB Observables","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#vue-reactivity-and-rxdb-observables","content":" RxDB queries return RxJS observables (.$). Vue can automatically update components when data changes if you manually subscribe and store results in Vue refs/reactive objects, or if you use RxDB's custom reactivity for Vue. Example with Vue 3 Composition API: // HeroList.vue <script setup> import { ref, onMounted } from 'vue'; import { initDatabase } from '@/db'; const heroes = ref([]); let db; onMounted(async () => { db = await initDatabase(); // Subscribe to an RxDB query db.hero .find({ selector: { healthpoints: { $gt: 0 } }, sort: [{ name: 'asc' }] }) .$ // the dot-$ is an observable that emits whenever the query results change .subscribe((newHeroes) => { heroes.value = newHeroes; }); }); </script> <template> <ul> <li v-for="hero in heroes" :key="hero.id"> {{ hero.name }} - HP: {{ hero.healthpoints }} </li> </ul> </template> ","version":"Next","tagName":"h2"},{"title":"Different RxStorage Layers for RxDB","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB supports multiple storage backends - called "RxStorage layers" - giving you flexibility in how data is persisted: Dexie.js RxStorage: A popular IndexedDB wrapper, often the default choice.IndexedDB RxStorage: Direct usage of native IndexedDB.OPFS RxStorage: Uses the File System Access API for even faster storage in modern browsers.Memory RxStorage: Stores data in memory, ideal for tests or ephemeral data.SQLite RxStorage: Runs SQLite, which can be compiled to WebAssembly for the browser. While possible, it typically carries a larger bundle size compared to native browser APIs like IndexedDB or OPFS. Choose the storage option that best aligns with your Vue application's requirements for performance, persistence, and platform compatibility. ","version":"Next","tagName":"h2"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" RxDB champions an offline-first approach: data is kept locally so that your Vue app remains usable, even without internet. When connectivity is restored, RxDB ensures your local changes synchronize to the server, resolving conflicts as necessary. Real-Time Synchronization: With RxDB's replication plugins, any local change can be instantly pushed to a remote endpoint while pulling down remote changes to ensure consistency.Conflict Resolution: In multi-user scenarios, conflicts may arise if two clients update the same document simultaneously. RxDB provides hooks to handle and resolve these gracefully.Scalable Architecture: By reducing reliance on continuous server requests, you can lighten server load and deliver a more responsive user experience. 
","version":"Next","tagName":"h2"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#advanced-rxdb-features-and-techniques","content":" ","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#offline-first-approach","content":" Vue applications can seamlessly function offline by leveraging RxDB's local database storage. The moment the network is restored, all unsynced data is pushed to the server. This capability is particularly beneficial for Progressive Web Apps (PWAs) and scenarios with spotty connectivity. ","version":"Next","tagName":"h3"},{"title":"Observable Queries and Change Streams","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#observable-queries-and-change-streams","content":" Beyond simply returning data, RxDB queries emit observables that respond to any change in the underlying documents. This real-time approach can drastically simplify state management, since updates flow directly into your Vue components without additional manual wiring. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#encryption-of-local-data","content":" For applications handling sensitive information, RxDB supports encryption of local data. Your data is stored securely in the browser, protecting it from unauthorized access. ","version":"Next","tagName":"h3"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#indexing-and-performance-optimization","content":" By defining indexes on frequently searched fields, you can speed up queries and reduce overall resource usage. This is crucial for larger datasets where performance might otherwise degrade. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#json-key-compression","content":" This optimization shortens field names in stored JSON documents, thereby reducing storage space and potentially improving performance for read/write operations. ","version":"Next","tagName":"h3"},{"title":"Multi-Tab Support","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#multi-tab-support","content":" If your users open multiple tabs of your Vue application, RxDB ensures data is synchronized across all instances in real time. Changes made in one tab are immediately reflected in others, creating a unified user experience. ","version":"Next","tagName":"h3"},{"title":"Best Practices for Using RxDB in Vue","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#best-practices-for-using-rxdb-in-vue","content":" Here are some recommendations to get the most out of RxDB in your Vue projects: Centralize Database Creation: Initialize and configure RxDB in a dedicated file or plugin, ensuring only one database instance is created.Leverage Vue's Composition API or a Global Store: Use watchers, refs, or a store like Pinia to neatly manage data subscriptions and updates, preventing scattered subscription logic.Async Subscriptions: Prefer using Vue's lifecycle hooks and the Composition API to manage subscriptions. 
Clean up subscriptions when components unmount or no longer need the data.Optimize Queries and Indexes: Only query the data you need, and define indexes to speed up lookups.Test Offline Scenarios: Make sure your offline logic works as expected by simulating network disconnections and reconnections.Plan Conflict Resolution: For multi-user apps, decide how to merge concurrent changes to prevent data inconsistencies. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database in a Vue Application","url":"/articles/vue-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which offers step-by-step instructions for setting up and using RxDB in your projects.RxDB Reactivity for Vue: Discover how RxDB observables can directly produce Vue refs, simplifying integration with your Vue components.RxDB Vue Example at GitHub: Explore an official Vue example to see RxDB in action within a Vue application.RxDB Examples: Browse even more official examples to learn best practices you can apply to your own projects. ","version":"Next","tagName":"h2"},{"title":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","type":0,"sectionRef":"#","url":"/articles/websockets-sse-polling-webrtc-webtransport.html","content":"","keywords":"","version":"Next"},{"title":"What is Long Polling?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-is-long-polling","content":" Long polling was the first "hack" to enable a server-client messaging method that can be used in browsers over HTTP. The technique emulates server push communications with normal XHR requests. Unlike traditional polling, where the client repeatedly requests data from the server at regular intervals, long polling establishes a connection to the server that remains open until new data is available. Once the server has new information, it sends the response to the client, and the connection is closed. Immediately after receiving the server's response, the client initiates a new request, and the process repeats. This method allows for more immediate data updates and reduces unnecessary network traffic and server load. However, it can still introduce delays in communication and is less efficient than other real-time technologies like WebSockets. // long-polling in a JavaScript client function longPoll() { fetch('http://example.com/poll') .then(response => response.json()) .then(data => { console.log("Received data:", data); longPoll(); // Immediately establish a new long polling request }) .catch(error => { /** * Errors can appear in normal conditions when a * connection timeout is reached or when the client goes offline. * On errors we just restart the polling after some delay. */ setTimeout(longPoll, 10000); }); } longPoll(); // Initiate the long polling Implementing long-polling on the client side is pretty simple, as shown in the code above. However on the backend there can be multiple difficulties to ensure the client receives all events and does not miss out updates when the client is currently reconnecting. 
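To illustrate the server side, here is a minimal sketch of such an endpoint using Express; the route name, the 25 second timeout and the in-memory client list are assumptions for this example, and a production backend would additionally track per-client checkpoints so that reconnecting clients can catch up on events they missed:
// minimal long-polling endpoint (illustrative sketch, not a production implementation)
import express from 'express';
const app = express();
// clients currently waiting for the next event
const waitingClients = [];
app.get('/poll', (req, res) => {
    const client = {
        res,
        timeout: setTimeout(() => {
            // no event arrived in time -> answer with an empty result so the client reconnects
            removeClient(client);
            res.json({ events: [] });
        }, 25000)
    };
    waitingClients.push(client);
    // stop tracking the client if it disconnects before an event arrives
    req.on('close', () => {
        clearTimeout(client.timeout);
        removeClient(client);
    });
});
// call this whenever the server has a new event to distribute
function broadcastEvent(event) {
    while (waitingClients.length > 0) {
        const client = waitingClients.pop();
        clearTimeout(client.timeout);
        client.res.json({ events: [event] });
    }
}
function removeClient(client) {
    const index = waitingClients.indexOf(client);
    if (index >= 0) {
        waitingClients.splice(index, 1);
    }
}
app.listen(3000, () => console.log('long-polling endpoint on http://localhost:3000'));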
","version":"Next","tagName":"h3"},{"title":"What are WebSockets?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-are-websockets","content":" WebSockets provide a full-duplex communication channel over a single, long-lived connection between the client and server. This technology enables browsers and servers to exchange data without the overhead of HTTP request-response cycles, facilitating real-time data transfer for applications like live chat, gaming, or financial trading platforms. WebSockets represent a significant advancement over traditional HTTP by allowing both parties to send data independently once the connection is established, making it ideal for scenarios that require low latency and high-frequency updates. // WebSocket in a JavaScript client const socket = new WebSocket('ws://example.com'); socket.onopen = function(event) { console.log('Connection established'); // Sending a message to the server socket.send('Hello Server!'); }; socket.onmessage = function(event) { console.log('Message from server:', event.data); }; While the basics of the WebSocket API are easy to use it has shown to be rather complex in production. A socket can loose connection and must be re-created accordingly. Especially detecting if a connection is still usable or not, can be very tricky. Mostly you would add a ping-and-pong heartbeat to ensure that the open connection is not closed. This complexity is why most people use a library on top of WebSockets like Socket.IO which handles all these cases and even provides fallbacks to long-polling if required. ","version":"Next","tagName":"h3"},{"title":"What are Server-Sent-Events?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-are-server-sent-events","content":" Server-Sent Events (SSE) provide a standard way to push server updates to the client over HTTP. Unlike WebSockets, SSEs are designed exclusively for one-way communication from server to client, making them ideal for scenarios like live news feeds, sports scores, or any situation where the client needs to be updated in real time without sending data to the server. You can think of Server-Sent-Events as a single HTTP request where the backend does not send the whole body at once, but instead keeps the connection open and trickles the answer by sending a single line each time an event has to be send to the client. Creating a connection for receiving events with SSE is straightforward. On the client side in a browser, you initialize an EventSource instance with the URL of the server-side script that generates the events. Listening for messages involves attaching event handlers directly to the EventSource instance. The API distinguishes between generic message events and named events, allowing for more structured communication. Here's how you can set it up in JavaScript: // Connecting to the server-side event stream const evtSource = new EventSource("https://example.com/events"); // Handling generic message events evtSource.onmessage = event => { console.log('got message: ' + event.data); }; In difference to WebSockets, an EventSource will automatically reconnect on connection loss. On the server side, your script must set the Content-Type header to text/event-stream and format each message according to the SSE specification. 
This includes specifying event types, data payloads, and optional fields like event ID and retry timing. Here's how you can set up a simple SSE endpoint in a Node.js Express app: import express from 'express'; const app = express(); const PORT = process.env.PORT || 3000; app.get('/events', (req, res) => { res.writeHead(200, { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache', 'Connection': 'keep-alive', }); const sendEvent = (data) => { // all message lines must be prefixed with 'data: ' const formattedData = `data: ${JSON.stringify(data)}\\n\\n`; res.write(formattedData); }; // Send an event every 2 seconds const intervalId = setInterval(() => { const message = { time: new Date().toTimeString(), message: 'Hello from the server!', }; sendEvent(message); }, 2000); // Clean up when the connection is closed req.on('close', () => { clearInterval(intervalId); res.end(); }); }); app.listen(PORT, () => console.log(`Server running on http://localhost:${PORT}`)); ","version":"Next","tagName":"h3"},{"title":"What is the WebTransport API?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-is-the-webtransport-api","content":" WebTransport is a cutting-edge API designed for efficient, low-latency communication between web clients and servers. It leverages the HTTP/3 QUIC protocol to enable a variety of data transfer capabilities, such as sending data over multiple streams, in both reliable and unreliable manners, and even allowing data to be sent out of order. This makes WebTransport a powerful tool for applications requiring high-performance networking, such as real-time gaming, live streaming, and collaborative platforms. However, it's important to note that WebTransport is currently only a Working Draft (as of March 2024) and is not yet widely supported or adopted. You cannot yet use WebTransport in the Safari browser and there is also no native support in Node.js. This limits its usability across different platforms and environments. Even once WebTransport becomes widely supported, its API is quite complex to use, and most developers will likely rely on libraries built on top of WebTransport rather than using it directly in an application's source code. ","version":"Next","tagName":"h3"},{"title":"What is WebRTC?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-is-webrtc","content":" WebRTC (Web Real-Time Communication) is an open-source project and API standard that enables real-time communication (RTC) capabilities directly within web browsers and mobile applications without the need for complex server infrastructure or the installation of additional plugins. It supports peer-to-peer connections for streaming audio, video, and data exchange between browsers. WebRTC is designed to work through NATs and firewalls, utilizing protocols like ICE, STUN, and TURN to establish a connection between peers. While WebRTC is made to be used for client-client interactions, it could also be leveraged for server-client communication where the server simply simulates being another client. This approach only makes sense for niche use cases, which is why WebRTC is mostly set aside as an option in the following comparison. 
The problem is that for WebRTC to work, you need a signaling server anyway, which would then again run over WebSockets, SSE or WebTransport. This defeats the purpose of using WebRTC as a replacement for these technologies. ","version":"Next","tagName":"h3"},{"title":"Limitations of the technologies","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#limitations-of-the-technologies","content":"","version":"Next","tagName":"h2"},{"title":"Sending Data in both directions","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#sending-data-in-both-directions","content":" Only WebSockets and WebTransport allow sending data in both directions, so that you can receive server data and send client data over the same connection. While it would also be possible with Long-Polling in theory, it is not recommended because sending "new" data to an existing long-polling connection would require an additional HTTP request anyway. So instead of doing that, you can send data directly from the client to the server with an additional HTTP request without interrupting the long-polling connection. Server-Sent-Events do not support sending any additional data to the server. You can only do the initial request, and even there you cannot send POST-like data in the HTTP body by default with the native EventSource API. Instead you have to put all data inside of the URL parameters, which is considered a bad practice for security because credentials might leak into server logs, proxies and caches. To fix this problem, RxDB for example uses the eventsource polyfill instead of the native EventSource API. This library adds additional functionality like sending custom HTTP headers. Also there is this library from Microsoft which allows sending body data and using POST requests instead of GET. ","version":"Next","tagName":"h3"},{"title":"6-Requests per Domain Limit","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#6-requests-per-domain-limit","content":" Most modern browsers allow six connections per domain, which limits the usability of all steady server-to-client messaging methods. The limitation of six connections is even shared across browser tabs, so when you open the same page in multiple tabs, they have to share the six-connection pool with each other. This limitation is part of the HTTP/1.1-RFC (which even defines a lower number of only two connections). Quote From RFC 2616 - Section 8.1.4: "Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion." While that policy makes sense to prevent website owners from using their visitors to DDoS other websites, it can be a big problem when multiple connections are required to handle server-client communication for legitimate use cases. 
To work around the limitation you have to use HTTP/2 or HTTP/3, with which the browser will only open a single connection per domain and then use multiplexing to run all data through a single connection. While this gives you a virtually infinite number of parallel connections, there is a SETTINGS_MAX_CONCURRENT_STREAMS setting which limits the actual number of concurrent streams. The default is 100 concurrent streams for most configurations. In theory the connection limit could also be increased by the browser, at least for specific APIs like EventSource, but the issues have been marked as "won't fix" by Chromium and Firefox. Lower the amount of connections in Browser Apps When you build a browser application, you have to assume that your users will use the app not only once, but in multiple browser tabs in parallel. By default you will likely open one server-stream connection per tab, which is often not necessary at all. Instead you open only a single connection and share it between tabs, no matter how many tabs are open. RxDB does that with the LeaderElection from the broadcast-channel npm package to only have one stream of replication between server and clients. You can use that package standalone (without RxDB) for any type of application. ","version":"Next","tagName":"h3"},{"title":"Connections are not kept open on mobile apps","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#connections-are-not-kept-open-on-mobile-apps","content":" In the context of mobile applications running on operating systems like Android and iOS, maintaining open connections, such as those used for WebSockets and the others, poses a significant challenge. Mobile operating systems are designed to automatically move applications into the background after a certain period of inactivity, effectively closing any open connections. This behavior is a part of the operating system's resource management strategy to conserve battery and optimize performance. As a result, developers often rely on mobile push notifications as an efficient and reliable method to send data from servers to clients. Push notifications allow servers to alert the application of new data, prompting an action or update, without the need for a persistent open connection. ","version":"Next","tagName":"h3"},{"title":"Proxies and Firewalls","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#proxies-and-firewalls","content":" From consulting many RxDB users, it has been shown that in enterprise environments (aka "at work") it is often hard to implement a WebSocket server into the infrastructure because many proxies and firewalls block non-HTTP connections. Therefore using Server-Sent-Events provides an easier path to enterprise integration. Long-polling also uses only plain HTTP requests and might be an option. ","version":"Next","tagName":"h3"},{"title":"Performance Comparison","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#performance-comparison","content":" Comparing the performance of WebSockets, Server-Sent Events (SSE), Long-Polling and WebTransport directly involves evaluating key aspects such as latency, throughput, server load, and scalability under various conditions. First let's look at the raw numbers. 
A good performance comparison can be found in this repo which tests the message times in a Go server implementation. Here we can see that the performance of WebSockets, WebRTC and WebTransport is comparable: note Remember that WebTransport is a pretty new technology based on the equally new HTTP/3 protocol. In the future (after March 2024) there might be more performance optimizations. Also, WebTransport is optimized to use less power, a metric that is not tested here. Let's also compare latency, throughput and scalability: ","version":"Next","tagName":"h2"},{"title":"Latency","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#latency","content":" WebSockets: Offers the lowest latency due to its full-duplex communication over a single, persistent connection. Ideal for real-time applications where immediate data exchange is critical.Server-Sent Events: Also provides low latency for server-to-client communication but cannot natively send messages back to the server without additional HTTP requests.Long-Polling: Incurs higher latency as it relies on establishing new HTTP connections for each data transmission, making it less efficient for real-time updates. It can also happen that the server wants to send an event while the client is still in the process of opening a new connection; in these cases the latency is significantly larger.WebTransport: Promises to offer low latency similar to WebSockets, with the added benefits of leveraging the HTTP/3 protocol for more efficient multiplexing and congestion control. ","version":"Next","tagName":"h3"},{"title":"Throughput","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#throughput","content":" WebSockets: Capable of high throughput due to its persistent connection, but throughput can suffer from backpressure where the client cannot process data as fast as the server is capable of sending it.Server-Sent Events: Efficient for broadcasting messages to many clients with less overhead than WebSockets, leading to potentially higher throughput for unidirectional server-to-client communication.Long-Polling: Generally offers lower throughput due to the overhead of frequently opening and closing connections, which consumes more server resources.WebTransport: Expected to support high throughput for both unidirectional and bidirectional streams within a single connection, outperforming WebSockets in scenarios requiring multiple streams. 
","version":"Next","tagName":"h3"},{"title":"Scalability and Server Load","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#scalability-and-server-load","content":" WebSockets: Maintaining a large number of WebSocket connections can significantly increase server load, potentially affecting scalability for applications with many users.Server-Sent Events: More scalable for scenarios that primarily require updates from server to client, as it has less connection overhead than WebSockets because it uses "normal" HTTP requests without things like the protocol upgrade that has to be performed for WebSockets.Long-Polling: The least scalable due to the high server load generated by frequent connection establishment, making it suitable only as a fallback mechanism.WebTransport: Designed to be highly scalable, benefiting from HTTP/3's efficiency in handling connections and streams, potentially reducing server load compared to WebSockets and SSE. ","version":"Next","tagName":"h3"},{"title":"Recommendations and Use-Case Suitability","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#recommendations-and-use-case-suitability","content":" In the landscape of server-client communication technologies, each has its distinct advantages and use case suitability. Server-Sent Events (SSE) emerge as the most straightforward option to implement, leveraging the same HTTP/S protocols as traditional web requests, thereby circumventing corporate firewall restrictions and other technical problems that can appear with other protocols. They are easily integrated into Node.js and other server frameworks, making them an ideal choice for applications requiring frequent server-to-client updates, such as news feeds, stock tickers, and live event streaming. On the other hand, WebSockets excel in scenarios demanding ongoing, two-way communication. Their ability to support continuous interaction makes them the prime choice for browser games, chat applications, and live sports updates. However, WebTransport, despite its potential, faces adoption challenges. It is not widely supported by server frameworks including Node.js and lacks compatibility with Safari. Moreover, its reliance on HTTP/3 further limits its immediate applicability because many web servers like nginx only have experimental HTTP/3 support. While promising for future applications with its support for both reliable and unreliable data transmission, WebTransport is not yet a viable option for most use cases. Long-Polling, once a common technique, is now largely outdated due to its inefficiency and the high overhead of repeatedly establishing new HTTP connections. Although it may serve as a fallback in environments lacking support for WebSockets or SSE, its use is generally discouraged due to significant performance limitations. ","version":"Next","tagName":"h2"},{"title":"Known Problems","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#known-problems","content":" For all of the realtime streaming technologies, there are known problems. When you build anything on top of them, keep these in mind. 
","version":"Next","tagName":"h2"},{"title":"A client can miss out events when reconnecting","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#a-client-can-miss-out-events-when-reconnecting","content":" When a client is connecting, reconnecting or offline, it can miss events that happened on the server but could not be streamed to the client. These missed events are not relevant when the server streams the full content each time anyway, like on a live updating stock ticker. But when the backend is made to stream partial results, you have to account for missed events. Fixing that on the backend scales pretty badly because the backend would have to remember for each client which events have already been sent successfully. Instead this should be implemented with client-side logic. The RxDB replication protocol for example uses two modes of operation for that. One is the checkpoint iteration mode where normal HTTP requests are used to iterate over backend data, until the client is in sync again. Then it can switch to event observation mode where updates from the realtime stream are used to keep the client in sync. Whenever a client disconnects or has any error, the replication briefly switches to checkpoint iteration mode until the client is in sync again. This method accounts for missed events and ensures that clients can always sync to the exact same state as the server. ","version":"Next","tagName":"h3"},{"title":"Company firewalls can cause problems","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#company-firewalls-can-cause-problems","content":" There are many known problems with company infrastructure when using any of the streaming technologies. Proxies and firewalls can block traffic or unintentionally break requests and responses. Whenever you implement a realtime app in such an infrastructure, make sure you first test whether the technology itself works for you. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#follow-up","content":" Check out the hackernews discussion of this articleShare/Like my announcement tweetLearn how to use Server-Sent-Events to replicate a client side RxDB database with your backend.Learn how to use RxDB with the RxDB QuickstartCheck out the RxDB github repo and leave a star ⭐ ","version":"Next","tagName":"h2"},{"title":"📥 Backup Plugin","type":0,"sectionRef":"#","url":"/backup.html","content":"","keywords":"","version":"Next"},{"title":"Installation","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#installation","content":" import { addRxPlugin } from 'rxdb'; import { RxDBBackupPlugin } from 'rxdb/plugins/backup'; addRxPlugin(RxDBBackupPlugin); ","version":"Next","tagName":"h2"},{"title":"one-time backup","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#one-time-backup","content":" Write the whole database to the filesystem once. When called multiple times, it will continue from the last checkpoint and not start all over again. 
const backupOptions = { // if false, a one-time backup will be written live: false, // the folder where the backup will be stored directory: '/my-backup-folder/', // if true, attachments will also be saved attachments: true } const backupState = myDatabase.backup(backupOptions); await backupState.awaitInitialBackup(); // call again to run from the last checkpoint const backupState2 = myDatabase.backup(backupOptions); await backupState2.awaitInitialBackup(); ","version":"Next","tagName":"h2"},{"title":"live backup","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#live-backup","content":" When live: true is set, the backup will write all ongoing changes to the backup directory. const backupOptions = { // set live: true to have an ongoing backup live: true, directory: '/my-backup-folder/', attachments: true } const backupState = myDatabase.backup(backupOptions); // you can still await the initial backup write, but further changes will still be processed. await backupState.awaitInitialBackup(); ","version":"Next","tagName":"h2"},{"title":"writeEvents$","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#writeevents","content":" You can listen to the writeEvents$ Observable to get notified about written backup files. const backupOptions = { live: false, directory: '/my-backup-folder/', attachments: true } const backupState = myDatabase.backup(backupOptions); const subscription = backupState.writeEvents$.subscribe(writeEvent => console.dir(writeEvent)); /* > { collectionName: 'humans', documentId: 'foobar', files: [ '/my-backup-folder/foobar/document.json' ], deleted: false } */ ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#limitations","content":" It is currently not possible to import from a written backup. If you need this functionality, please make a pull request. ","version":"Next","tagName":"h2"},{"title":"IndexedDB Database in Vue Apps - The Power of RxDB","type":0,"sectionRef":"#","url":"/articles/vue-indexeddb.html","content":"","keywords":"","version":"Next"},{"title":"What is IndexedDB?","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#what-is-indexeddb","content":" IndexedDB is a low-level API for storing significant amounts of structured data in the browser. It provides a transactional database system that can store key-value pairs, complex objects, and more. This storage engine is asynchronous and supports advanced data types, making it suitable for offline storage and complex web applications. ","version":"Next","tagName":"h2"},{"title":"Why Use IndexedDB in Vue","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#why-use-indexeddb-in-vue","content":" When building Vue applications, IndexedDB can play a crucial role in enhancing both performance and user experience. Here are some reasons to consider using IndexedDB: Offline-First / Local-First: By storing data locally, your application remains functional even without an internet connection.Performance: Using local data means zero latency and no loading spinners, as data doesn't need to be fetched over a network.Easier Implementation: Replicating all data to the client once is often simpler than implementing multiple endpoints for each user interaction.Scalability: Local data reduces server load because queries run on the client side, decreasing server bandwidth and processing requirements. 
","version":"Next","tagName":"h2"},{"title":"Why To Not Use Plain IndexedDB","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#why-to-not-use-plain-indexeddb","content":" While IndexedDB itself is powerful, its native API comes with several drawbacks for everyday application developers: Callback-Based API: IndexedDB was originally designed around callbacks rather than modern Promises, making asynchronous code more cumbersome.Complexity: IndexedDB is low-level, intended for library developers rather than for app developers who just want to store and query data easily.Basic Query API: Its rudimentary query capabilities limit how you can efficiently perform complex queries. Libraries like RxDB offer more advanced querying and indexing.TypeScript Support: Ensuring good TypeScript support with IndexedDB is challenging, especially when trying to maintain schema consistency.Lack of Observable API: IndexedDB doesn't provide an observable API out of the box, making it hard to automatically update your Vue app in real time. RxDB solves this by enabling you to observe queries or specific documents.Cross-Tab Communication: Managing cross-tab updates in plain IndexedDB is difficult. RxDB handles this seamlessly - changes in one tab automatically affect observed data in others.Missing Advanced Features: Features like encryption or compression aren't built into IndexedDB, but they are available via RxDB.Limited Platform Support: IndexedDB is browser-only. RxDB offers swappable storages so you can reuse the same data layer code in mobile or desktop environments. ","version":"Next","tagName":"h2"},{"title":"Set up RxDB in Vue","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#set-up-rxdb-in-vue","content":" Setting up RxDB with Vue is straightforward. It abstracts IndexedDB complexities and adds a layer of powerful features over it. 
","version":"Next","tagName":"h2"},{"title":"Installing RxDB","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#installing-rxdb","content":" First, install RxDB (and RxJS) from npm: npm install rxdb rxjs --save ","version":"Next","tagName":"h3"},{"title":"Create a Database and Collections","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#create-a-database-and-collections","content":" RxDB provides two main storage options: The free Dexie.js-based storageThe premium plain IndexedDB-based storage, offering faster performance Below is an example of setting up a simple RxDB database using the Dexie.js-based storage in a Vue app: // db.ts import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; export async function initDB() { const db = await createRxDatabase({ name: 'heroesdb', // the name of the database storage: getRxStorageDexie() }); // Define your schema const heroSchema = { title: 'hero schema', version: 0, description: 'Describes a hero in your app', primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, name: { type: 'string' }, power: { type: 'string' } }, required: ['id', 'name'] }; // add collections await db.addCollections({ heroes: { schema: heroSchema } }); return db; } ","version":"Next","tagName":"h3"},{"title":"CRUD Operations","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#crud-operations","content":" Once your database is initialized, you can perform all CRUD operations: // insert await db.heroes.insert({ id: '1', name: 'Iron Man', power: 'Genius-level intellect' }); // bulk insert await db.heroes.bulkInsert([ { id: '2', name: 'Thor', power: 'God of Thunder' }, { id: '3', name: 'Hulk', power: 'Superhuman Strength' } ]); // find and findOne const heroes = await db.heroes.find().exec(); const ironMan = await db.heroes.findOne({ selector: { name: 'Iron Man' } }).exec(); // update const doc = await db.heroes.findOne({ selector: { name: 'Hulk' } }).exec(); await doc.update({ $set: { power: 'Unlimited Strength' } }); // delete const thorDoc = await db.heroes.findOne({ selector: { name: 'Thor' } }).exec(); await thorDoc.remove(); ","version":"Next","tagName":"h3"},{"title":"Reactive Queries and Live Updates","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#reactive-queries-and-live-updates","content":" RxDB excels in providing reactive data capabilities, ideal for real-time applications. Subscribing to queries automatically updates your Vue components when underlying data changes - even across browser tabs. 
","version":"Next","tagName":"h2"},{"title":"Using RxJS Observables with Vue 3 Composition API","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#using-rxjs-observables-with-vue-3-composition-api","content":" Here's an example of a Vue component that subscribes to live data updates: <template> <div> <h2>Hero List</h2> <ul> <li v-for="hero in heroes" :key="hero.id"> <strong>{{ hero.name }}</strong> - {{ hero.power }} </li> </ul> </div> </template> <script setup lang="ts"> import { ref, onMounted } from 'vue'; import { initDB } from '@/db'; const heroes = ref<any[]>([]); onMounted(async () => { const db = await initDB(); // create an observable query const query = db.heroes.find(); // subscribe to the query query.$.subscribe((newHeroes: any[]) => { heroes.value = newHeroes; }); }); </script> This component subscribes to the collection's changes, updating the UI automatically whenever the underlying data changes in any browser tab. ","version":"Next","tagName":"h3"},{"title":"Using Vue Signals","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#using-vue-signals","content":" If you're exploring Vue's reactivity transforms or signals, RxDB also offers custom reactivity factories (premium plugins are required). This allows queries to emit data as signals instead of traditional Observables. const heroesSignal = db.heroes.find().$$; // $$ indicates a reactive result With this, in your Vue template or script, you can directly read from heroesSignal() <template> <div> <h2>Hero List</h2> <ul> <!-- we read heroesSignal.value which is always up to date --> <li v-for="hero in heroesSignal.value" :key="hero.id"> <strong>{{ hero.name }}</strong> - {{ hero.power }} </li> </ul> </div> </template> ","version":"Next","tagName":"h3"},{"title":"Vue IndexedDB Example with RxDB","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#vue-indexeddb-example-with-rxdb","content":" A comprehensive example of using RxDB within a Vue application can be found in the RxDB GitHub repository. This repository contains sample applications, showcasing best practices and demonstrating how to integrate RxDB for various use cases. ","version":"Next","tagName":"h2"},{"title":"Advanced RxDB Features","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#advanced-rxdb-features","content":" RxDB offers many advanced features that extend beyond basic data storage: RxDB Replication: Synchronize local data with remote databases seamlessly. Data Migration: Handle schema changes gracefully with automatic data migrations. Encryption: Secure your data with built-in encryption capabilities. Compression: Optimize storage using key compression. ","version":"Next","tagName":"h2"},{"title":"Limitations of IndexedDB","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#limitations-of-indexeddb","content":" While IndexedDB is powerful, it has some inherent limitations: Performance: IndexedDB can be slow under certain conditions. Read more: Slow IndexedDBStorage Limits: Browsers impose limits on how much data can be stored. See: Browser storage limits. 
","version":"Next","tagName":"h2"},{"title":"Alternatives to IndexedDB","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#alternatives-to-indexeddb","content":" Depending on your application's requirements, there are alternative storage solutions to consider: Origin Private File System (OPFS): A newer API that can offer better performance. RxDB supports OPFS as well. More info: RxDB OPFS StorageSQLite: Ideal for hybrid frameworks or Capacitor, offering native performance. Explore: RxDB SQLite Storage ","version":"Next","tagName":"h2"},{"title":"Performance Comparison with Other Browser Storages","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#performance-comparison-with-other-browser-storages","content":" Here is a performance overview of the various browser-based storage implementations of RxDB: ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"IndexedDB Database in Vue Apps - The Power of RxDB","url":"/articles/vue-indexeddb.html#follow-up","content":" Learn how to use RxDB with the RxDB Quickstart for a guided introduction.Check out the RxDB GitHub repository and leave a star ⭐ if you find it useful. By leveraging RxDB on top of IndexedDB, you can create highly responsive, offline-capable Vue applications without dealing with the low-level complexities of IndexedDB directly. With reactive queries, seamless cross-tab communication, and powerful advanced features, RxDB becomes an invaluable tool in modern web development. ","version":"Next","tagName":"h2"},{"title":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","type":0,"sectionRef":"#","url":"/articles/zero-latency-local-first.html","content":"","keywords":"","version":"Next"},{"title":"Why Zero Latency with a Local First Approach?","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#why-zero-latency-with-a-local-first-approach","content":" In a traditional architecture, each user action triggers requests to a server for reads or writes. Despite network optimizations, unavoidable latencies can delay responses and disrupt the user flow. By contrast, a local first model maintains data in the client's environment (browser, mobile, desktop), drastically reducing user-perceived delays. Once the user re-connects or resumes activity online, changes propagate automatically to the server, eliminating manual synchronization overhead. Instant Responsiveness: Because user actions (queries, updates, etc.) happen against a local datastore, UI updates do not wait on round-trip times.Offline Operation: Apps can continue to read and write data, even when there is zero connectivity.Reduced Backend Load: Instead of flooding the server with small requests, replication can combine and push or pull changes in batches.Simplified Caching: Instead of implementing multi-layer caching, local first transforms your data layer into a reliable, quickly accessible store for all user actions. ","version":"Next","tagName":"h2"},{"title":"RxDB: Your Key to Zero-Latency Local First Apps","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#rxdb-your-key-to-zero-latency-local-first-apps","content":" RxDB is a JavaScript-based NoSQL database designed for offline-first and real-time replication scenarios. 
It supports a range of environments - browsers (IndexedDB or OPFS), mobile (Ionic, React Native), Electron, Node.js - and is built around: Reactive Queries that trigger UI updates upon data changesSchema-based NoSQL Documents for flexible but robust data modelsAdvanced Replication Protocol to synchronize with diverse backendsEncryption for secure data at restCompression to reduce local and network overhead ","version":"Next","tagName":"h2"},{"title":"Real-Time Sync and Offline-First","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#real-time-sync-and-offline-first","content":" RxDB's replication logic revolves around pulling down remote changes and pushing up local modifications. It maintains a checkpoint-based mechanism, so only new or updated documents flow between the client and server, reducing bandwidth usage and latency. This ensures: Live Data: Queries automatically reflect server-side changes once they arrive locally.Background Updates: No manual polling needed; replication streams or intervals handle synchronization.Conflict Handling (see below) ensures data merges gracefully when multiple clients edit the same document offline. Multiple Replication Plugins and Approaches RxDB's flexible replication system lets you connect to different backends or even peer-to-peer networks. There are official plugins for CouchDB, Firestore, GraphQL, WebRTC, and more. Many developers create a custom HTTP replication to work with their existing REST-based backend, ensuring a painless integration that doesn't require adopting an entirely new server infrastructure. Example Setup of a local database import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; async function initZeroLocalDB() { // Create a local RxDB instance using Dexie-based IndexedDB storage const db = await createRxDatabase({ name: 'myZeroLocalDB', storage: getRxStorageDexie(), // optional: password for encryption if needed }); // Define one or more collections await db.addCollections({ tasks: { schema: { title: 'task schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, title: { type: 'string' }, done: { type: 'boolean' } } } } }); // Reactive query - automatically updates on local or remote changes db.tasks .find() .$ // returns an RxJS Observable .subscribe(allTasks => { console.log('All tasks updated:', allTasks); }); return db; } When offline, reads and writes to db.tasks happen locally with near-zero delay. Once connectivity resumes, changes sync to the server automatically (if replication is configured). Example Setup of the replication import { replicateRxCollection } from 'rxdb/plugins/replication'; async function syncLocalTasks(db) { replicateRxCollection({ collection: db.tasks, replicationIdentifier: 'sync-tasks', // Define how to pull server documents and push local documents pull: { handler: async (lastCheckpoint, batchSize) => { // logic to retrieve updated tasks from the server since lastCheckpoint }, }, push: { handler: async (docs) => { // logic to post local changes to the server }, }, live: true, // continuously replicate retryTime: 5000, // retry on errors or disconnections }); } This replication seamlessly merges server-side and client-side changes. Your app remains responsive throughout, regardless of the network status. 
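The handler bodies above are left as placeholders because they depend entirely on your backend. As a rough sketch, assuming a hypothetical REST backend with /api/tasks/pull and /api/tasks/push endpoints (the URLs and response shapes are illustrative, not an existing API), the handlers could be filled in like this:
import { replicateRxCollection } from 'rxdb/plugins/replication';
// illustrative sketch: concrete pull/push handlers against an assumed REST backend
export function syncTasksWithRestBackend(db) {
    return replicateRxCollection({
        collection: db.tasks,
        replicationIdentifier: 'sync-tasks-rest',
        live: true,
        retryTime: 5000,
        pull: {
            handler: async (lastCheckpoint, batchSize) => {
                const url = '/api/tasks/pull' +
                    '?checkpoint=' + encodeURIComponent(JSON.stringify(lastCheckpoint ?? {})) +
                    '&limit=' + batchSize;
                const response = await fetch(url);
                const data = await response.json();
                // the replication expects the changed documents plus a new checkpoint
                return {
                    documents: data.documents,
                    checkpoint: data.checkpoint
                };
            }
        },
        push: {
            handler: async (changeRows) => {
                // send local writes to the server; the server answers with the
                // documents that caused conflicts (often an empty array)
                const response = await fetch('/api/tasks/push', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify(changeRows)
                });
                return await response.json();
            }
        }
    });
}
In this sketch the server is expected to return all documents changed since the given checkpoint and, on push, to detect conflicts by comparing the assumed state sent by the client with its own current state.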
","version":"Next","tagName":"h3"},{"title":"Things you should also know about","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#things-you-should-also-know-about","content":" ","version":"Next","tagName":"h2"},{"title":"Optimistic UI on Local Data Changes","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#optimistic-ui-on-local-data-changes","content":" A local first approach, especially with RxDB, naturally supports an optimistic UI pattern. Because writes occur on the client, you can instantly reflect changes in the interface as soon as the user performs an action - no need to wait for server confirmation. For example, when a user updates a task document to done: true, the UI can re-render immediately with that new state. This even works across multiple browser tabs. If a server conflict arises later during replication, RxDB's conflict handling logic determines which changes to keep, and the UI can be updated accordingly. This is far more efficient than blocking the user or displaying a spinner while the backend processes the request. ","version":"Next","tagName":"h3"},{"title":"Conflict Handling","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#conflict-handling","content":" In local first models, conflicts emerge if multiple devices or clients edit the same document while offline. RxDB tracks document revisions so you can detect collisions and merge them effectively. By default, RxDB uses a last-write-wins approach, but developers can override it with a custom conflict handler. This provides fine-grained control - like merging partial fields, storing revision histories, or prompting users for resolution. Proper conflict handling keeps distributed data consistent across your entire system. ","version":"Next","tagName":"h3"},{"title":"Schema Migrations","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#schema-migrations","content":" Over time, apps evolve - new fields, changed field types, or altered indexes. RxDB allows incremental schema migrations so you can upgrade a user's local data from one schema version to another. You might, for instance, rename a property or transform data formats. Once you define your migration strategy, RxDB automatically applies it upon app initialization, ensuring the local database's structure aligns with your latest codebase. ","version":"Next","tagName":"h3"},{"title":"Advanced Features","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#advanced-features","content":" ","version":"Next","tagName":"h2"},{"title":"Setup Encryption","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#setup-encryption","content":" When storing data locally, you may handle user-sensitive information like PII (Personal Identifiable Information) or financial details. RxDB supports on-device encryption to protect fields. 
For example, you can define: import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; const encryptedStorage = wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: 'secureDB', storage: encryptedStorage, password: 'myEncryptionPassword' }); await db.addCollections({ secrets: { schema: { title: 'secrets schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, secretField: { type: 'string' } }, required: ['id'], encrypted: ['secretField'] // define which fields to encrypt } } }); Then mark fields as encrypted in the schema. This ensures data is unreadable on disk without the correct password. ","version":"Next","tagName":"h3"},{"title":"Setup Compression","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#setup-compression","content":" Local data can expand quickly, especially for large documents or repeated key names. RxDB's key compression feature replaces verbose field names with shorter tokens, decreasing storage usage and speeding up replication. You enable it by adding keyCompression: true to your collection schema: await db.addCollections({ logs: { schema: { title: 'log schema', version: 0, keyCompression: true, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, message: { type: 'string' }, timestamp: { type: 'number' } } } } }); ","version":"Next","tagName":"h3"},{"title":"Different RxDB Storages Depending on the Runtime","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#different-rxdb-storages-depending-on-the-runtime","content":" RxDB's storage layer is swappable, so you can pick the optimal adapter for each environment. Some common choices include: IndexedDB / Dexie in modern browsers (default). OPFS (Origin Private File System) in browsers that support it for potentially better performance. SQLite for mobile or desktop environments via the premium plugin, offering native-like speed on Android, iOS, or Electron. In-Memory for tests or ephemeral data. By choosing a suitable storage layer, you can adapt your zero-latency local first design to any runtime - web, mobile, or server-like contexts in Node.js. ","version":"Next","tagName":"h2"},{"title":"Performance Considerations","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#performance-considerations","content":" Performant local data operations are crucial for a zero-latency experience. According to the RxDB storage performance overview, differences in underlying storages can significantly impact throughput and latency. For instance, IndexedDB (via Dexie) typically performs well across modern browsers, OPFS offers improved throughput in supporting browsers, and SQLite storage (a premium plugin) often delivers near-native speed for mobile or desktop. ","version":"Next","tagName":"h2"},{"title":"Offloading Work from the Main Thread","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#offloading-work-from-the-main-thread","content":" In a browser environment, you can move database operations into a Web Worker using the Worker RxStorage plugin. 
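A rough sketch of the wiring, based on the premium worker storage plugin (the import paths and the workerInput option are assumptions taken from that plugin's documentation and should be verified against the RxDB version you use):
// worker.js - runs inside the Web Worker and owns the actual storage
import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker';
import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb';

exposeWorkerRxStorage({
  storage: getRxStorageIndexedDB()
});

// main thread - the database talks to the worker instead of touching the storage directly
import { createRxDatabase } from 'rxdb/plugins/core';
import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker';

const db = await createRxDatabase({
  name: 'myZeroLocalDB',
  storage: getRxStorageWorker({
    workerInput: 'path/to/worker.js'
  })
});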
This approach lets you keep heavy data processing off the main thread, ensuring the UI remains smooth and responsive. Complex queries or large write operations no longer cause stuttering in the user interface. ","version":"Next","tagName":"h3"},{"title":"Sharding or Memory-Mapped Storages","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#sharding-or-memory-mapped-storages","content":" For large datasets or high concurrency, advanced techniques like sharding collections across multiple storages or leveraging a memory-mapped variant can further boost performance. By splitting data into smaller subsets or streaming it only as needed, you can scale to handle complex usage scenarios without compromising on the zero-latency user experience. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"Zero Latency Local First Apps with RxDB – Sync, Encryption and Compression","url":"/articles/zero-latency-local-first.html#follow-up","content":" Dive into the RxDB Quickstart to set up your own local first database.Explore Replication Plugins for syncing with platforms like CouchDB, Firestore, or GraphQL.Check out Advanced Conflict Handling and Performance Tuning for big data sets or complex multi-user interactions.Join the RxDB Community on GitHub and Discord to share insights, file issues, and learn from other developers building zero-latency solutions. By integrating RxDB into your stack, you achieve millisecond interactions, full offline capabilities, secure data at rest, and minimal overhead for large or distributed teams. This zero-latency local first architecture is the future of modern software - delivering a fluid, always-available user experience without overcomplicating the developer workflow. ","version":"Next","tagName":"h2"},{"title":"🧹 Cleanup","type":0,"sectionRef":"#","url":"/cleanup.html","content":"","keywords":"","version":"Next"},{"title":"Installation","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#installation","content":" import { addRxPlugin } from 'rxdb'; import { RxDBCleanupPlugin } from 'rxdb/plugins/cleanup'; addRxPlugin(RxDBCleanupPlugin); ","version":"Next","tagName":"h2"},{"title":"Create a database with cleanup options","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#create-a-database-with-cleanup-options","content":" You can set a specific cleanup policy when a RxDatabase is created. For most use cases, the defaults should be ok. import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), cleanupPolicy: { /** * The minimum time in milliseconds for how long * a document has to be deleted before it is * purged by the cleanup. * [default=one month] */ minimumDeletedTime: 1000 * 60 * 60 * 24 * 31, // one month, /** * The minimum amount of that that the RxCollection must have existed. * This ensures that at the initial page load, more important * tasks are not slowed down because a cleanup process is running. * [default=60 seconds] */ minimumCollectionAge: 1000 * 60, // 60 seconds /** * After the initial cleanup is done, * a new cleanup is started after [runEach] milliseconds * [default=5 minutes] */ runEach: 1000 * 60 * 5, // 5 minutes /** * If set to true, * RxDB will await all running replications * to not have a replication cycle running. 
* This ensures we do not remove deleted documents * when they might not have already been replicated. * [default=true] */ awaitReplicationsInSync: true, /** * If true, it will only start the cleanup * when the current instance is also the leader. * This ensures that when RxDB is used in multiInstance mode, * only one instance will start the cleanup. * [default=true] */ waitForLeadership: true } }); ","version":"Next","tagName":"h2"},{"title":"Calling cleanup manually","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#calling-cleanup-manually","content":" You can manually run a cleanup per collection by calling RxCollection.cleanup(). /** * Manually run the cleanup with the * minimumDeletedTime from the cleanupPolicy. */ await myRxCollection.cleanup(); /** * Overwrite the minimumDeletedTime * by setting it explicitly (time in milliseconds) */ await myRxCollection.cleanup(1000); /** * Purge all deleted documents no * matter when they were deleted * by setting minimumDeletedTime to zero. */ await myRxCollection.cleanup(0); ","version":"Next","tagName":"h2"},{"title":"Using the cleanup plugin to empty a collection","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#using-the-cleanup-plugin-to-empty-a-collection","content":" When you have a collection with documents and you want to empty it by purging all documents, the recommended way is to call myRxCollection.remove(). However, this will destroy the JavaScript class of the collection and stop all listeners and observables. Sometimes the better option might be to manually delete all documents and then use the cleanup plugin to purge the deleted documents: // delete all documents await myRxCollection.find().remove(); // purge all deleted documents await myRxCollection.cleanup(0); ","version":"Next","tagName":"h2"},{"title":"FAQ","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#faq","content":" When does the cleanup run The cleanup cycles are optimized to run only when the database is idle and it is unlikely that another database interaction's performance will be decreased in the meantime. For example, by default the cleanup does not run in the first 60 seconds of a collection's creation to ensure an initial page load of your website will not be slowed down. Also we use mechanisms like the requestIdleCallback() API to improve the correct timing of the cleanup cycle. ","version":"Next","tagName":"h2"},{"title":"Data Migration","type":0,"sectionRef":"#","url":"/data-migration.html","content":"Data Migration This documentation page has been moved to here","keywords":"","version":"Next"},{"title":"Contribution","type":0,"sectionRef":"#","url":"/contribution.html","content":"","keywords":"","version":"Next"},{"title":"Requirements","type":1,"pageTitle":"Contribution","url":"/contribution.html#requirements","content":" Before you can start developing, do the following: Make sure you have installed nodejs with the version stated in the .nvmrc, clone the repository with git clone https://github.com/pubkey/rxdb.git, install the dependencies with cd rxdb && npm install, and make sure that the tests work for you. At first, try it out with npm run test:node:memory which tests the memory storage in node. In the package.json you can find more scripts to run the tests with different storages. ","version":"Next","tagName":"h2"},{"title":"Adding tests","type":1,"pageTitle":"Contribution","url":"/contribution.html#adding-tests","content":" Before you start creating a bugfix or a feature, you should create a test to reproduce it. Tests are in the test/unit-folder. 
If you want to reproduce a bug, you can modify the test in this file. ","version":"Next","tagName":"h2"},{"title":"Making a PR","type":1,"pageTitle":"Contribution","url":"/contribution.html#making-a-pr","content":" If you make a pull-request, ensure the following: Every feature or bugfix must be committed together with a unit-test which ensures everything works as expected.Do not commit build-files (anything in the dist-folder)Before you add non-trivial changes, create an issue to discuss if this will be merged and you don't waste your time.To run the unit and integration-tests, do npm run test and ensure everything works as expected ","version":"Next","tagName":"h2"},{"title":"Getting help","type":1,"pageTitle":"Contribution","url":"/contribution.html#getting-help","content":" If you need help with your contribution, ask at discord. ","version":"Next","tagName":"h2"},{"title":"No-Go","type":1,"pageTitle":"Contribution","url":"/contribution.html#no-go","content":" When reporting a bug, you need to make a PR with a test case that runs in the CI and reproduces your problem. Sending a link with a repo does not help the maintainer because installing random peoples projects is time consuming and dangerous. Also the maintainer will never go on a bug hunt based on your plain description. Either you report the bug with a test case, or the maintainer will likely not help you. Docs The source of the documentation is at the docs-src-folder. To read the docs locally, run npm run docs:install && npm run docs:serve and open http://localhost:4000/ Thank you for contributing! ","version":"Next","tagName":"h2"},{"title":"Capacitor Database - SQLite, RxDB and others","type":0,"sectionRef":"#","url":"/capacitor-database.html","content":"","keywords":"","version":"Next"},{"title":"Database Solutions for Capacitor","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#database-solutions-for-capacitor","content":" ","version":"Next","tagName":"h2"},{"title":"Preferences API","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#preferences-api","content":" Capacitor comes with a native Preferences API which is a simple, persistent key->value store for lightweight data, similar to the browsers localstorage or React Native AsyncStorage. To use it, you first have to install it from npm npm install @capacitor/preferences and then you can import it and write/read data. Notice that all calls to the preferences API are asynchronous so they return a Promise that must be await-ed. import { Preferences } from '@capacitor/preferences'; // write await Preferences.set({ key: 'foo', value: 'baar', }); // read const { value } = await Preferences.get({ key: 'foo' }); // > 'bar' // delete await Preferences.remove({ key: 'foo' }); The preferences API is good when only a small amount of data needs to be stored and when no query capabilities besides the key access are required. Complex queries or other features like indexes or replication are not supported which makes the preferences API not suitable for anything more than storing simple data like user settings. ","version":"Next","tagName":"h3"},{"title":"Localstorage/IndexedDB/WebSQL","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#localstorageindexeddbwebsql","content":" Since Capacitor apps run in a web view, Web APIs like IndexedDB, Localstorage and WebSQL are available. 
But the default browser behavior is to clean up these storages regularly when they are not in use for a long time or the device is low on space. Therefore you cannot 100% rely on the persistence of the stored data and your application needs to expect that the data will be lost eventually. Storing data in these storages can be done in browsers, because there is no other option. But in Capacitor iOS and Android, you should not rely on them. ","version":"Next","tagName":"h3"},{"title":"SQLite","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#sqlite","content":" SQLite is a SQL based relational database written in C that was crafted to be embedded inside of applications. Operations are written in the SQL query language and SQLite generally follows the PostgreSQL syntax. To use SQLite in Capacitor, there are three options: the @capacitor-community/sqlite package, the cordova-sqlite-storage package, or the non-free Ionic Secure Storage which comes at $999 per month. It is recommended to use the @capacitor-community/sqlite because it has the best maintenance and is open source. Install it first with npm install --save @capacitor-community/sqlite and then set the storage location for iOS apps: { "plugins": { "CapacitorSQLite": { "iosDatabaseLocation": "Library/CapacitorDatabase" } } } Now you can create a database connection and use the SQLite database. import { Capacitor } from '@capacitor/core'; import { CapacitorSQLite, SQLiteDBConnection, SQLiteConnection, capSQLiteSet, capSQLiteChanges, capSQLiteValues, capEchoResult, capSQLiteResult, capNCDatabasePathResult } from '@capacitor-community/sqlite'; const sqlite = new SQLiteConnection(CapacitorSQLite); const database: SQLiteDBConnection = await sqlite.createConnection(databaseName, encrypted, mode, version, readOnly); let { rows } = await database.query('SELECT somevalue FROM sometable'); The downside of SQLite is that it is lacking many features that are handy when using a database together with a UI based application like your Capacitor app. For example it is not possible to observe queries or document fields. Also there is no realtime replication feature; you can only import JSON files. This makes SQLite a good solution when you just want to store data on the client, but when you want to sync data with a server or other clients or create big complex realtime applications, you have to use something else. ","version":"Next","tagName":"h3"},{"title":"RxDB","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#rxdb","content":" RxDB is a local first, NoSQL database for JavaScript applications like hybrid apps. Because it is reactive, you can subscribe to all state changes like the result of a query or even a single field of a document. This is great for UI-based realtime applications and makes it easy to develop the kind of realtime app you need in Capacitor. Because RxDB is made for Web applications, most of the available RxStorage plugins can be used to store and query data in a Capacitor app. However it is recommended to use the SQLite RxStorage because it stores the data on the filesystem of the device, not in the JavaScript runtime (like IndexedDB). Storing data on the filesystem ensures it is persistent and will not be cleaned up by any process. Also the performance of SQLite is much better compared to IndexedDB, because SQLite does not have to go through the browser's permission layers. 
For the SQLite binding you should use the @capacitor-community/sqlite package. Because the SQLite RxStorage is part of the 👑 Premium Plugins which must be purchased, it is recommended to use the Dexie.js RxStorage while testing and prototyping your Capacitor app. To use the SQLite RxStorage in Capacitor you have to install all dependencies via npm install rxdb rxjs rxdb-premium @capacitor-community/sqlite. For iOS apps you should add a database location in your Capacitor settings: { "plugins": { "CapacitorSQLite": { "iosDatabaseLocation": "Library/CapacitorDatabase" } } } Then you can assemble the RxStorage and create a database with it: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsCapacitor } from 'rxdb-premium/plugins/storage-sqlite'; import { CapacitorSQLite, SQLiteConnection } from '@capacitor-community/sqlite'; import { Capacitor } from '@capacitor/core'; const sqlite = new SQLiteConnection(CapacitorSQLite); // create database const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsCapacitor(sqlite, Capacitor) }) }); // create collections const collections = await myRxDatabase.addCollections({ humans: { /* ... */ } }); // insert document await collections.humans.insert({id: 'foo', name: 'bar'}); // run a query const result = await collections.humans.find({ selector: { name: 'bar' } }).exec(); // observe a query await collections.humans.find({ selector: { name: 'bar' } }).$.subscribe(result => {/* ... */}); ","version":"Next","tagName":"h3"},{"title":"Follow up","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#follow-up","content":" If you haven't done yet, you should start learning about RxDB with the Quickstart Tutorial.There is a followup list of other client side database alternatives. ","version":"Next","tagName":"h2"},{"title":"RxDB CRDT Plugin (beta)","type":0,"sectionRef":"#","url":"/crdt.html","content":"","keywords":"","version":"Next"},{"title":"RxDB CRDT operations","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#rxdb-crdt-operations","content":" In RxDB, a CRDT operation is defined with NoSQL update operators, like you might know them from MongoDB update operations or the RxDB update plugin. To run the operators, RxDB uses the mingo library. A CRDT operator example: const myCRDTOperation = { // increment the points field by +1 $inc: { points: 1 }, // set the modified field to true $set: { modified: true } }; ","version":"Next","tagName":"h2"},{"title":"Operators","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#operators","content":" At the moment, not all possible operators are implemented in mingo, if you need additional ones, you should make a pull request there. The following operators can be used at this point in time: $min$max$inc$set$unset$push$addToSet$pop$pullAll$rename For the exact definition on how each operator behaves, check out the MongoDB documentation on update operators. ","version":"Next","tagName":"h3"},{"title":"Installation","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#installation","content":" To use CRDTs with RxDB, you need the following: Add the CRDT plugin via addRxPlugin.Add a field to your schema that defines where to store the CRDT operations via getCRDTSchemaPart()Set the crdt options in your schema.Do NOT set a custom conflict handler, the plugin will use its own one. 
// import the relevant parts from the CRDT plugin import { getCRDTSchemaPart, RxDBcrdtPlugin } from 'rxdb/plugins/crdt'; // add the CRDT plugin to RxDB import { addRxPlugin } from 'rxdb'; addRxPlugin(RxDBcrdtPlugin); // create a database import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie() }); // create a schema with the CRDT options const mySchema = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, points: { type: 'number', maximum: 100, minimum: 0 }, crdts: getCRDTSchemaPart() // use this field to store the CRDT operations }, required: ['id', 'points'], crdt: { // CRDT options field: 'crdts' } } // add a collection await db.addCollections({ users: { schema: mySchema } }); // insert a document const myDocument = await db.users.insert({id: 'alice', points: 0}); // run a CRDT operation that increments the 'points' by one await myDocument.updateCRDT({ ifMatch: { $inc: { points: 1 } } }); ","version":"Next","tagName":"h2"},{"title":"Conditional CRDT operations","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#conditional-crdt-operations","content":" By default, all CRDT operations will be run to build the current document state. But in many cases, more granular operations are required to better reflect the desired business logic. For these cases, conditional CRDTs can be used. For example if you have a field points with a maximum of 100, you might want to only run an $inc operation if the points value is less than 100. In a conditional CRDT, you can specify a selector and the operation sets ifMatch and ifNotMatch. Each time the CRDT is applied to the document state, the selector runs first and evaluates which operation path must be used. await myDocument.updateCRDT({ // only if the selector matches, the ifMatch operation will run selector: { points: { $lt: 100 } }, // an operation that runs if the selector matches ifMatch: { $inc: { points: 1 } }, // if the selector does NOT match, you could run a different operation instead ifNotMatch: { // ... } }); ","version":"Next","tagName":"h2"},{"title":"Running multiples operations at once","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#running-multiples-operations-at-once","content":" By default, one CRDT operation is applied to the document in a single database write. To represent more complex logic chains, it might make sense to use multiple CRDTs and write them at once inside of a single atomic document write. For these cases, the updateCRDT() method allows passing an array of operations. await myDocument.updateCRDT([ { selector: { /** ... **/ }, ifMatch: { /** ... **/ } }, { selector: { /** ... **/ }, ifMatch: { /** ... **/ } }, { selector: { /** ... **/ }, ifMatch: { /** ... **/ } }, { selector: { /** ... **/ }, ifMatch: { /** ... **/ } } ]); ","version":"Next","tagName":"h2"},{"title":"CRDTs on inserts","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#crdts-on-inserts","content":" When CRDTs are enabled with the plugin, all insert operations are automatically mapped to a CRDT operation with the $set operator. 
// Calling RxCollection.insert() await myRxCollection.insert({ id: 'foo', points: 1 }); // is exactly equal to calling insertCRDT() await myRxCollection.insertCRDT({ ifMatch: { $set: { id: 'foo', points: 1 } } }); When the same document is inserted in multiple client instances and then replicated, a conflict will emerge and the insert-CRDTs will overwrite each other in a deterministic order. You can use insertCRDT() to make conditional insert operations with any logic. To check for the previous existence of a document, use the $exists query operation on the primary key of the document. await myRxCollection.insertCRDT({ selector: { // only run if the document did not exist before. id: { $exists: false } }, ifMatch: { // if the document did not exist, insert it $set: { id: 'foo', points: 1 } }, ifNotMatch: { // if document existed already, increment the points by +1 $inc: { points: 1 } } }); ","version":"Next","tagName":"h2"},{"title":"Deleting documents","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#deleting-documents","content":" You can delete a document with a CRDT operation by setting _deleted to true. Calling RxDocument.remove() will do exactly the same when CRDTs are activated. await doc.updateCRDT({ ifMatch: { $set: { _deleted: true } } }); // OR await doc.remove(); ","version":"Next","tagName":"h2"},{"title":"CRDTs with replication","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#crdts-with-replication","content":" CRDT operations are stored inside of a special field beside your 'normal' document fields. When replicating document data with the RxDB replication or the CouchDB replication or even any custom replication, the CRDT operations must be replicated together with the document data as if they were a 'normal' document property. When any instance makes a write to the document, it is required to update the CRDT operations accordingly. For example if your custom backend updates a document, it must also do that by adding a CRDT operation. In dev-mode RxDB will refuse to store any document data where the document properties do not match the result of the CRDT operations. ","version":"Next","tagName":"h2"},{"title":"Why not automerge.js or yjs?","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#why-not-automergejs-or-yjs","content":" There are already CRDT libraries out there that have been considered to be used with RxDB. The biggest ones are automerge and yjs. The decision was made to not use these but instead go for a more NoSQL way of designing the CRDT format because: Users do not have to learn a new syntax but instead can use the NoSQL query operations which they already know to manipulate the JSON data of a document. RxDB is often used to replicate data with any custom backend on an already existing infrastructure. Using NoSQL operators instead of binary data in CRDTs makes it easy to implement the exact same logic on these backends so that the backend can also do document writes and still be compliant with the RxDB CRDT plugin. So instead of using YJS or Automerge with a database, you can use RxDB with the CRDT plugin to have a more database specific CRDT approach. This gives you additional features for free such as schema validation or data migration. ","version":"Next","tagName":"h2"},{"title":"When to not use CRDTs","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#when-to-not-use-crdts","content":" CRDTs can only be used when your business logic allows representing document changes via static JSON operators. 
If you can have cases where user interaction is required to correctly merge conflicting document states, you cannot use CRDTs for that. Also when CRDTs are used, it is no longer allowed to do non-CRDT writes to the document properties. ","version":"Next","tagName":"h2"},{"title":"Dev Mode","type":0,"sectionRef":"#","url":"/dev-mode.html","content":"","keywords":"","version":"Next"},{"title":"Usage with Node.js","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#usage-with-nodejs","content":" async function createDb() { if (process.env.NODE_ENV !== "production") { await import('rxdb/plugins/dev-mode').then( module => addRxPlugin(module.RxDBDevModePlugin) ); } const db = createRxDatabase( /* ... */ ); } ","version":"Next","tagName":"h2"},{"title":"Usage with Angular","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#usage-with-angular","content":" import { isDevMode } from '@angular/core'; async function createDb() { if (isDevMode()){ await import('rxdb/plugins/dev-mode').then( module => addRxPlugin(module.RxDBDevModePlugin) ); } const db = createRxDatabase( /* ... */ ); // ... } ","version":"Next","tagName":"h2"},{"title":"Usage with webpack","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#usage-with-webpack","content":" In the webpack.config.js: module.exports = { entry: './src/index.ts', /* ... */ plugins: [ // set a global variable that can be accessed during runtime new webpack.DefinePlugin({ MODE: JSON.stringify("production") }) ] /* ... */ }; In your source code: declare var MODE: 'production' | 'development'; async function createDb() { if (MODE === 'development') { await import('rxdb/plugins/dev-mode').then( module => addRxPlugin(module.RxDBDevModePlugin) ); } const db = createRxDatabase( /* ... */ ); // ... } ","version":"Next","tagName":"h2"},{"title":"Disable the dev-mode warning","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#disable-the-dev-mode-warning","content":" When the dev-mode is enabled, it will print a console.warn() message to the console so that you do not accidentally use the dev-mode in production. To disable this warning you can call the disableWarnings() function. import { disableWarnings } from 'rxdb/plugins/dev-mode'; disableWarnings(); ","version":"Next","tagName":"h2"},{"title":"Disable the tracking iframe","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#disable-the-tracking-iframe","content":" When used in localhost and in the browser, the dev-mode plugin can add a tracking iframe to the DOM. This is used to track the effectiveness of marketing efforts of RxDB. If you have premium access and want to disable this iframe, you can call setPremiumFlag() before creating the database. import { setPremiumFlag } from 'rxdb-premium/plugins/shared'; setPremiumFlag(); ","version":"Next","tagName":"h2"},{"title":"Downsides of Local First / Offline First","type":0,"sectionRef":"#","url":"/downsides-of-offline-first.html","content":"","keywords":"","version":"Next"},{"title":"It only works with small datasets","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#it-only-works-with-small-datasets","content":" Making data available offline means it must be loaded from the server and then stored at the clients device. You need to load the full dataset on the first pageload and on every ongoing load you need to download the new changes to that set. While in theory you could download in infinite amount of data, in practice you have a limit how long the user can wait before having an up-to-date state. 
You want to display chat messages like Whatsapp? No problem. Syncing all the messages a user could write can be done with a few HTTP requests. Want to make a tool that displays server logs? Good luck downloading terabytes of data to the client just to search for a single string. This will not work. Besides the network usage, there is another limit for the size of your data. In browsers you have some options for storage: Cookies, Localstorage, WebSQL and IndexedDB. Because Cookies and Localstorage are slow and WebSQL is deprecated, you will use IndexedDB. The limit of how much data you can store in IndexedDB depends on two factors: which browser is used and how much disc space is left on the device. You can assume that at least a couple of hundred megabytes are available. The maximum is potentially hundreds of gigabytes or more, but the browser implementations vary. Chrome allows the browser to use up to 60% of the total disc space per origin. Firefox allows up to 50%. But on Safari you can only store up to 1GB and the browser will prompt the user on each additional 200MB increment. The problem is that you have no reliable way to predict how much data can be stored. So you have to make assumptions that are hopefully true for all of your users. Also, you have no way to increase that space like you would add another hard drive to your backend server. Once your clients reach the limit, you likely have to rewrite big parts of your applications. UPDATE (2023): Newer versions of browsers can store way more data, for example Firefox stores up to 10% of the total disk size. For an overview of how much can be stored, read this guide ","version":"Next","tagName":"h2"},{"title":"Browser storage is not really persistent","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#browser-storage-is-not-really-persistent","content":" When data is stored inside IndexedDB or one of the other storage APIs, it cannot be trusted to stay there forever. Apple, for example, deletes the data when the website has not been used in the last 7 days. The other browsers also have logic to clean up the stored data, and in the end the user themselves could be the one that deletes the browser's local data. The most common way to handle this is to replicate everything from the backend to the client again. Of course, this does not work for state that is not stored at the backend. So if you assume you can store the user's private data inside the browser in a secure way, you are wrong. ","version":"Next","tagName":"h2"},{"title":"There can be conflicts","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#there-can-be-conflicts","content":" Imagine two of your users modify the same JSON document while both are offline. After they go online again, their clients replicate the modified document to the server. Now you have two conflicting versions of the same document, and you need a way to determine what the correct new version of that document should look like. This process is called conflict resolution. The default in many offline first databases is a deterministic conflict resolution strategy. Both conflicting versions of the document are kept in the storage and when you query for the document, a winner is determined by comparing the hashes of the document and only the winning document is returned. Because the comparison is deterministic, all clients and servers will always pick the same winner. 
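Conceptually, such a deterministic pick can be sketched like this (this is not the implementation of any particular database, just an illustration of why every peer arrives at the same result):
// every revision carries a height (number of writes) and a content hash
function pickDeterministicWinner(revA, revB) {
  // prefer the revision with more writes behind it ...
  if (revA.height !== revB.height) {
    return revA.height > revB.height ? revA : revB;
  }
  // ... and break ties by comparing the hashes lexicographically,
  // so the outcome never depends on which peer runs the comparison
  return revA.hash > revB.hash ? revA : revB;
}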
This kind of resolution only works when it is not that important that one of the document changes gets dropped. Because conflicts are rare, this might be a viable solution for some use cases. A better resolution can be applied by listening to the changestream of the database. The changestream emits an event each time a write happens to the database. The event contains information about the written document and also a flag if there is a conflicting version. For each event with a conflict, you fetch all versions for that document and create a new document that contains the winning state. With that you can implement pretty complex conflict resolution strategies, but you have to manually code it for each collection of documents. Instead of the solving conflict once at every client, it can be made a bit easier by solely relying on the backend. This can be done when all of your clients replicate with the same single backend server. With RxDB's Graphql Replication each client side change is sent to the server where conflicts can be resolved and the winning document can be sent back to the clients. Sometimes there is no way to solve a conflict with code. If your users edit text based documents or images, often only the users themselves can decide how the winning revision has to look. For these cases, you have to implement complex UI parts where the users can inspect the conflict and manage its resolution. You do not have to handle conflicts if they cannot happen in the first place. You can achieve that by designing a write only database where existing documents cannot be touched. Instead of storing the current state in a single document, you store all the events that lead to the current state. Sometimes called the "everything is a delta" strategy, others would call it Event Sourcing. Like an accountant that does not need an eraser, you append all changes and afterwards aggregate the current state at the client. // create one new document for each change to the users balance {id: new Date().toJSON(), change: 100} // balance increased by $100 {id: new Date().toJSON(), change: -50} // balance decreased by $50 {id: new Date().toJSON(), change: 200} // balance increased by $200 There is this thing called conflict-free replicated data type, short CRDT. Using a CRDT library like automerge will magically solve all of your conflict problems. Until you use it in production where you observe that implementing CRDTs has basically the same complexity as implementing conflict resolution strategies. ","version":"Next","tagName":"h2"},{"title":"Realtime is a lie","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#realtime-is-a-lie","content":" So you replicate stuff between the clients and your backend. Each change on one side directly changes the state of the other sides in realtime. But this "realtime" is not the same as in realtime computing. In the offline first world, the word realtime was introduced by firebase and is more meant as a marketing slogan than a technical description. There is an internet between your backend and your clients and everything you do on one machine takes at least once the latency until it can affect anything on the other machines. You have to keep this in mind when you develop anything where the timing is important, like a multiplayer game or a stock trading app. Even when you run a query against the local database, there is no "real" realtime. 
Client side databases run on JavaScript and JavaScript runs on a single CPU that might be partially blocked because the user is running some background processes. So you can never guarantee a response deadline, which violates the time constraints of realtime computing. ","version":"Next","tagName":"h2"},{"title":"Eventual consistency","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#eventual-consistency","content":" An offline first app does not have a single source of truth. There is a source on the backend, one on the client itself, and each other client also has its own definition of truth. At the moment your user starts the app, the local state is hopefully already replicated with the backend and all other clients. But this does not have to be true; the states can have diverged and you have to plan for that. The user could update a document based on wrong assumptions because it was not fully replicated at that point in time while the user was offline. A good way to handle this problem is to show the replication state in the UI and tell the user when the replication is running, stopped, paused or finished. And some data is just too important to be "eventually consistent". Create a wire transfer in your online banking app while you are offline. You leave the smartphone lying on your nightstand and when you use it again the next morning, it goes online and replicates the transaction. No thank you, do not use offline first for these kinds of things, or at least you have to display the replication state of each document in the UI. ","version":"Next","tagName":"h2"},{"title":"Permissions and authentication","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#permissions-and-authentication","content":" Every offline first app that goes beyond a prototype likely does not have the same global state for all of its users. Each user has a different set of documents that are allowed to be replicated or seen by the user. So you need some kind of authentication and permission handling to divide the documents. The easy way is to just create one database for each user on the backend and only allow that one to be replicated. Creating that many databases is not really a problem with, for example, CouchDB, and it makes permission handling easy. But as soon as you want to query all of your data in the backend, it will bite back. Your data is not in a single place; it is distributed between all of the user specific databases. This becomes even more complex as soon as you store information together with the documents that is not allowed to be seen by outsiders. You not only have to decide which documents to replicate, but also which fields of them. So what you really want is a single datastore in the backend and then replicate only the allowed document parts to each of the users. This always requires you to implement your custom replication endpoint like what you do with RxDB's GraphQL Replication. ","version":"Next","tagName":"h2"},{"title":"You have to migrate the client database","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#you-have-to-migrate-the-client-database","content":" While developing your app, sooner or later you want to change the data layout. You want to add some new fields to documents or change the format of them. So you have to update the database schema and also migrate the stored documents. 
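In RxDB, for example, such a client-side migration is declared as one migration strategy per schema version. A minimal sketch - the tasks collection and the added dueDate field are made up for illustration, and the schema migration plugin must be added via addRxPlugin first:
// upgrade the locally stored 'tasks' documents from schema version 0 to 1
await db.addCollections({
  tasks: {
    schema: taskSchemaV1, // same schema as before, but with version: 1 and a new 'dueDate' field
    migrationStrategies: {
      // runs once for every stored document that still has version 0
      1: function (oldDoc) {
        oldDoc.dueDate = null; // fill the new field with a default value
        return oldDoc;
      }
    }
  }
});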
With 'normal' applications, this is already hard enough and often dangerous. You wait until midnight, stop the webserver, make a database backup, deploy the new schema and then you hope that nothing goes wrong while it updates that many documents. With offline first applications, it is even more fun. You not only have to migrate your backend database, you also have to provide a migration strategy for all of these client databases out there. And you also cannot migrate everything at the same time. The clients can only migrate when the new code has been updated from the app store or the user visits your website again. This could be today or in a few weeks. ","version":"Next","tagName":"h2"},{"title":"Performance is not native","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#performance-is-not-native","content":" When you create a web based offline first app, you cannot store data directly on the user's filesystem. In fact there are many layers between your JavaScript code and the filesystem of the operating system. Let's say you insert a document in RxDB: you call the RxDB API to validate and store the data; RxDB calls the underlying RxStorage, for example PouchDB; PouchDB calls its underlying storage adapter; the storage adapter calls IndexedDB; the browser runs its internal handling of the IndexedDB API; in most browsers IndexedDB is implemented on top of SQLite; SQLite calls the OS to store the data in the filesystem. All these layers are abstractions. They are not built for exactly that one use case, so you lose some performance to tunnel the data through the layer itself, and you also lose some performance because the abstraction does not exactly provide the functions that are needed by the layer above and it will overfetch data. You will not find a benchmark comparison between how many transactions per second you can run in the browser compared to a server based database, because it makes no sense to compare them. Browsers are slower, JavaScript is slower. Is it fast enough? What you really care about is "Is it fast enough?". For most use cases, the answer is yes. Offline first apps are UI based and you do not need to process a million transactions per second, because your user will not click the save button that often. "Fast enough" means that the data is processed in under 16 milliseconds so that you can render the updated UI in the next frame. This is of course not true for all use cases, so you had better think about the performance limits before starting with the implementation. ","version":"Next","tagName":"h2"},{"title":"Nothing is predictable","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#nothing-is-predictable","content":" You have a PostgreSQL database and run a query over thousands of rows, which takes 200 milliseconds. Works great, so you now want to do something similar at the client device in your offline first app. How long does it take? You cannot know because people have different devices, and even identical devices have different things running in the background that slow the CPU. So you cannot predict performance and as described above, you cannot even predict the storage limit. So if your app does heavy data analytics, you might be better off running everything on the backend and just sending the results to the client. 
","version":"Next","tagName":"h2"},{"title":"There is no relational data","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#there-is-no-relational-data","content":" I started creating RxDB many years ago and while still maintaining it, I often worked with all these other offline first databases out there. RxDB and all of these other ones, are based on some kind of document databases similar to NoSQL. Often people want to have a relational database like the SQL one they use at the backend. So why are there no real relations in offline first databases? I could answer with these arguments like how JavaScript works better with document based data, how performance is better when having no joins or even how NoSQL queries are more composable. But the truth is, everything is NoSQL because it makes replication easy. An SQL query that mutates data in different tables based on some selects and joins, cannot be partially replicated without breaking the client. You have foreign keys that point to other rows and if these rows are not replicated yet, you have a problem. To implement a robust replication protocol for relational data, you need some stuff like a reliable atomic clock and you have to block queries over multiple tables while a transaction replicated. Watch this guy implementing offline first replication on top of SQLite or read this discussion about implementing offline first in supabase. So creating replication for an SQL offline first database is way more work than just adding some network protocols on top of PostgreSQL. It might not even be possible for clients that have no reliable clock. ","version":"Next","tagName":"h2"},{"title":"Electron Plugin","type":0,"sectionRef":"#","url":"/electron.html","content":"","keywords":"","version":"Next"},{"title":"RxStorage Electron IpcRenderer & IpcMain","type":1,"pageTitle":"Electron Plugin","url":"/electron.html#rxstorage-electron-ipcrenderer--ipcmain","content":" To use RxDB in electron, it is recommended to run the RxStorage in the main process and the RxDatabase in the renderer processes. With the rxdb electron plugin you can create a remote RxStorage and consume it from the renderer process. To do this in a convenient way, the RxDB electron plugin provides the helper functions exposeIpcMainRxStorage and getRxStorageIpcRenderer. Similar to the Worker RxStorage, these wrap any other RxStorage once in the main process and once in each renderer process. In the renderer you can then use the storage to create a RxDatabase which communicates with the storage of the main process to store and query data. note nodeIntegration must be enabled in Electron. // main.js const { exposeIpcMainRxStorage } = require('rxdb/plugins/electron'); const { getRxStorageMemory } = require('rxdb/plugins/storage-memory'); app.on('ready', async function () { exposeIpcMainRxStorage({ key: 'main-storage', storage: getRxStorageMemory(), ipcMain: electron.ipcMain }); }); // renderer.js const { getRxStorageIpcRenderer } = require('rxdb/plugins/electron'); const { getRxStorageMemory } = require('rxdb/plugins/storage-memory'); const db = await createRxDatabase({ name, storage: getRxStorageIpcRenderer({ key: 'main-storage', ipcRenderer: electron.ipcRenderer }) }); /* ... 
*/ ","version":"Next","tagName":"h2"},{"title":"Related","type":1,"pageTitle":"Electron Plugin","url":"/electron.html#related","content":" Comparison of Electron Databases ","version":"Next","tagName":"h2"},{"title":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","type":0,"sectionRef":"#","url":"/electron-database.html","content":"","keywords":"","version":"Next"},{"title":"Databases for Electron","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#databases-for-electron","content":" An Electron runtime can be divided into two parts: The "main" process which is a Node.js JavaScript process that runs without a UI in the background.One or multiple "renderer" processes that consist of a Chrome browser engine and runs the user interface. Each renderer process represents one "browser tab". This is important to understand because choosing the right database depends on your use case and on which of these JavaScript runtimes you want to keep the data. ","version":"Next","tagName":"h2"},{"title":"Server Side Databases in Electron.js","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#server-side-databases-in-electronjs","content":" Because Electron runs on a desktop computer, you might think that it should be possible to use a common "server" database like MySQL, PostgreSQL or MongoDB. In theory, you could ship the correct database server binaries with your electron application and start a process on the client's device that exposes a port to the database that can be consumed by Electron. In practice, this is not a viable way to go because shipping the correct binaries and opening ports is way to complicated and troublesome. Instead you should use a database that can be bundled and run inside of Electron, either in the main or in the renderer process. ","version":"Next","tagName":"h3"},{"title":"Localstorage / IndexedDB / WebSQL as alternatives to SQLite in Electron","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#localstorage--indexeddb--websql-as-alternatives-to-sqlite-in-electron","content":" Because Electron uses a common Chrome web browser in the renderer process, you can access the common Web Storage APIs like Localstorage, IndexedDB and WebSQL. This is easy to set up and storing small sets of data can be achieved in a short span of time. But as soon as your application goes beyond a simple TODO-app, there are multiple obstacles that come in your way. One thing is the bad multi-tab support. If you have more than one renderer process, it becomes hard to manage database writes between them. Each browser tab could modify the database state while the others do not know of the changes and keep an outdated UI. Another thing is performance. IndexedDB is slow, mostly because it has to go through layers of browser security and abstractions. Storing and querying a lot of data might become your performance bottleneck. Localstorage and WebSQL are even slower, by the way. Using these Web Storage APIs is generally only recommended when you know for sure that there will be always only one rendering process and performance is not that relevant. The main reason for that is the security- and abstraction layers that write- and read operations have to go through when using the browsers IndexedDB API. 
So instead of using IndexedDB in Electron in the renderer process, you should use something that runs in the "main" process in Node.js like the Filesystem RxStorage or the In Memory RxStorage. ","version":"Next","tagName":"h3"},{"title":"RxDB","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#rxdb","content":" RxDB is a NoSQL database for JavaScript applications. It has many features that come in handy when RxDB is used with UI based applications like your Electron app. For example, it is able to subscribe to query results of single fields of documents. It has encryption and compression features and most important it has a battle tested replication protocol that can be used to do a realtime sync with your backend. Because of the flexible storage layer of RxDB, there are many options on how to use it with Electron: The memory RxStorage that stores the data inside of the JavaScript memory without persistenceThe SQLite RxStorageThe IndexedDB RxStorageThe Dexie.js RxStorageThe Node.js Filesystem It is recommended to use the SQLite RxStorage because it has the best performance and is the easiest to set up. However it is part of the 👑 Premium Plugins which must be purchased, so to try out RxDB with Electron, you might want to use one of the other options. To start with RxDB, I would recommend using the Dexie.js RxStorage in the renderer processes. Because RxDB is able to broadcast the database state between browser tabs, having multiple renderer processes is not a problem like it would be when you use plain IndexedDB without RxDB. In production, you would always run the RxStorage in the main process with the RxStorage Electron IpcRenderer & IpcMain plugins. First, you have to install all dependencies via npm install rxdb rxjs. Then you can assemble the RxStorage and create a database with it: import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // create database const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie() }); // create collections const collections = await myRxDatabase.addCollections({ humans: { /* ... */ } }); // insert document await collections.humans.insert({id: 'foo', name: 'bar'}); // run a query const result = await collections.humans.find({ selector: { name: 'bar' } }).exec(); // observe a query await collections.humans.find({ selector: { name: 'bar' } }).$.subscribe(result => {/* ... */}); For better performance in the renderer tab, you can later switch to the IndexedDB RxStorage. But in production, it is recommended to use the SQLite RxStorage or the Filesystem RxStorage in the main process so that database operations do not block the rendering of the UI. To learn more about using RxDB with Electron, you might want to check out this example project. ","version":"Next","tagName":"h3"},{"title":"SQLite in Electron.js without RxDB","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#sqlite-in-electronjs-without-rxdb","content":" SQLite is a SQL based relational database written in the C programming language that was crafted to be embedded inside of applications and stores data locally. Operations are written in the SQL query language similar to the PostgreSQL syntax. Using SQLite in Electron is not possible in the renderer process, only in the main process. 
To communicate data operations between your main and your renderer processes, you have to use either @electron/remote (not recommended) or the ipcRenderer (recommended). So you start up SQLite in your main process and whenever you want to read or write data, you send the SQL queries to the main process and retrieve the result back as JSON data. To install SQLite, use the SQLite3 package which is a native Node.js module. You also need the @electron/rebuild package to rebuild the SQLite module against the currently installed Electron version. Install them with npm install sqlite3 @electron/rebuild. Then you can rebuild SQLite with ./node_modules/.bin/electron-rebuild -f -w sqlite3In the JavaScript code of your main process you can now create a database: const sqlite3 = require('sqlite3'); const db = new sqlite3.Database('/path/to/database/file.db'); // create a table and insert a row db.serialize(() => { db.run("CREATE TABLE Users (name, lastName)"); db.run("INSERT INTO Users VALUES (?, ?)", ['foo', 'bar']); }); Also you have to set up the ipcRenderer so that message from the renderer process are handled: ipcMain.handle('db-query', async (event, sqlQuery) => { return new Promise(res => { db.all(sqlQuery, (err, rows) => { res(rows); }); }); }); In your renderer process, you can now call the ipcHandler and fetch data from SQLite: const rows = await ipcRenderer.invoke('db-query', "SELECT * FROM Users"); The downside of SQLite (or SQL in general) is that it is lacking many features that are handful when using a database together with UI based applications. It is not possible to observe queries or document fields and there is no replication method to sync data with a server. This makes SQLite a good solution when you just want to store data on the client or process expensive SQL queries on the server, but it is not suitable for more complex operations like two-way replication, encryption, compression and so on. Also developer helpers like TypeScript type safety are totally out of reach. ","version":"Next","tagName":"h3"},{"title":"Follow up","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#follow-up","content":" Learn how to use RxDB as database in electron with the Quickstart Tutorial.Check out the RxDB Electron exampleThere is a followup list of other client side database alternatives that you can try to use with Electron. ","version":"Next","tagName":"h2"},{"title":"🔒 Encrypted Local Storage with RxDB","type":0,"sectionRef":"#","url":"/encryption.html","content":"","keywords":"","version":"Next"},{"title":"Querying encrypted data","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#querying-encrypted-data","content":" RxDB handles the encryption and decryption of data internally. This means that when you work with a RxDocument, you can access the properties of the document just like you would with normal, unencrypted data. RxDB automatically decrypts the data for you when you retrieve it, making it transparent to your application code. This means the encryption works with all RxStorage like SQLite, IndexedDB, OPFS and so on. However, there's a limitation when it comes to querying encrypted fields. Encrypted fields cannot be used as operators in queries. This means you cannot perform queries like "find all documents where the encrypted field equals a certain value." RxDB does not expose the encrypted data in a way that allows direct querying based on the encrypted content. 
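A small illustration of that boundary, assuming a collection myDocuments whose secret field is listed as encrypted (as set up in the steps below):
// reading is transparent - RxDB decrypts the field for you
const doc = await db.myDocuments.findOne('doc-1').exec();
if (doc) {
  console.log(doc.secret); // plain text value
}

// querying by non-encrypted fields works as usual
const docs = await db.myDocuments.find({
  selector: { id: { $gte: 'doc-1' } }
}).exec();

// but an encrypted field cannot be used inside a selector;
// a query like find({ selector: { secret: 'foo' } }) cannot match on the stored ciphertext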
To filter or search for documents based on the contents of encrypted fields, you would need to first decrypt the data and then perform the query, which might not be efficient or practical in some cases. You could, however, use the memory-mapped RxStorage to replicate the encrypted documents into a non-encrypted in-memory storage and then query them like normal. ","version":"Next","tagName":"h2"},{"title":"Password handling","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#password-handling","content":" RxDB does not define how you should store or retrieve the encryption password. It only requires you to provide the password on database creation, which grants you flexibility in how you manage encryption passwords. You could ask the user on app-start to insert the password, or you can retrieve the password from your backend on app start (or revoke access by no longer providing the password). ","version":"Next","tagName":"h2"},{"title":"Asymmetric encryption","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#asymmetric-encryption","content":" The encryption plugin itself uses symmetric encryption with a password to guarantee the best performance when reading and storing data. It is not able to do asymmetric encryption by itself. If you need asymmetric encryption with a private/public key pair, it is recommended to encrypt the password itself with the asymmetric keys and store the encrypted password beside the other data. On app-start you can decrypt the password with the private key and use the decrypted password in the RxDB encryption plugin (see the sketch at the end of this page). ","version":"Next","tagName":"h2"},{"title":"Using the RxDB Encryption Plugins","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#using-the-rxdb-encryption-plugins","content":" RxDB currently has two plugins for encryption: the free encryption-crypto-js plugin that is based on the AES algorithm of the crypto-js library, and the 👑 premium encryption-web-crypto plugin that is based on the native Web Crypto API which makes it faster and more secure to use. Document inserts are about 10x faster compared to crypto-js and it has a smaller build size because it uses the browser's API instead of bundling an npm module. An RxDB encryption plugin is a wrapper around any other RxStorage. 1. Wrap your RxStorage with the encryption import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the normal storage with the encryption plugin const encryptedDexieStorage = wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageDexie() }); 2. Create an RxDatabase with the wrapped storage You also have to set a password when creating the database. The format of the password depends on which encryption plugin is used. import { createRxDatabase } from 'rxdb/plugins/core'; // create an encrypted database const db = await createRxDatabase({ name: 'mydatabase', storage: encryptedDexieStorage, password: 'sudoLetMeIn' }); 3. Create an RxCollection with an encrypted property To define a field as being encrypted, you have to add it to the encrypted fields list in the schema. 
const schema = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, secret: { type: 'string' } }, required: ['id'], encrypted: ['secret'] }; await db.addCollections({ myDocuments: { schema } }); ","version":"Next","tagName":"h2"},{"title":"Using Web-Crypto API","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#using-web-crypto-api","content":" For professionals, we have the web-crypto 👑 premium plugin which is faster and more secure: import { wrappedKeyEncryptionWebCryptoStorage, createPassword } from 'rxdb-premium/plugins/encryption-web-crypto'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; // wrap the normal storage with the encryption plugin const encryptedIndexedDbStorage = wrappedKeyEncryptionWebCryptoStorage({ storage: getRxStorageIndexedDB() }); const myPasswordObject = { // Algorithm can be oneOf: 'AES-CTR' | 'AES-CBC' | 'AES-GCM' algorithm: 'AES-CTR', password: 'myRandomPasswordWithMin8Length' }; // create an encrypted database const db = await createRxDatabase({ name: 'mydatabase', storage: encryptedIndexedDbStorage, password: myPasswordObject }); /* ... */ ","version":"Next","tagName":"h2"},{"title":"Changing the password","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#changing-the-password","content":" The password is database-specific and it is not possible to change the password of an existing database. Opening an existing database with a different password will throw an error. To change the password you can either: Use the storage migration plugin to migrate the database state into a new database. Or store a randomly created meta-password in a different RxDatabase as a value of a local document, encrypt the meta password with the actual user password and read it out before creating the actual database. ","version":"Next","tagName":"h2"},{"title":"Encrypted attachments","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#encrypted-attachments","content":" To store the attachment data encrypted, you have to set encrypted: true in the attachments property of the schema. const mySchema = { version: 0, type: 'object', properties: { /* ... */ }, attachments: { encrypted: true // if true, the attachment-data will be encrypted with the db-password } }; ","version":"Next","tagName":"h2"},{"title":"Encryption and workers","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#encryption-and-workers","content":" If you are using the Worker RxStorage or SharedWorker RxStorage with encryption, it's recommended to run the encryption inside of the worker. Encryption can be very CPU intensive and would take away CPU power from the main thread, which is the main reason to use workers. You do not need to worry about setting the password inside of the worker. The password will be set when calling createRxDatabase from the main thread, and will be passed internally to the storage in the worker automatically. 
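To make the asymmetric encryption section above more concrete, here is a minimal sketch of that pattern, assuming the standard Web Crypto API with RSA-OAEP. Where the key pair and the encrypted password are persisted, as well as all variable names, are illustrative assumptions and not part of the RxDB API:

```js
import { createRxDatabase } from 'rxdb';
import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js';
import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie';

// generate an asymmetric key pair (where the private key lives depends on your security model)
const { publicKey, privateKey } = await crypto.subtle.generateKey(
    { name: 'RSA-OAEP', modulusLength: 2048, publicExponent: new Uint8Array([1, 0, 1]), hash: 'SHA-256' },
    true,
    ['encrypt', 'decrypt']
);

// encrypt the symmetric database password with the public key
// and store the resulting ciphertext beside the other data
const databasePassword = 'myRandomPasswordWithMin8Length';
const encryptedPassword = await crypto.subtle.encrypt(
    { name: 'RSA-OAEP' },
    publicKey,
    new TextEncoder().encode(databasePassword)
);

// on app-start: decrypt the password with the private key ...
const decryptedPassword = new TextDecoder().decode(
    await crypto.subtle.decrypt({ name: 'RSA-OAEP' }, privateKey, encryptedPassword)
);

// ... and use it as the password of the encrypted RxDatabase
const db = await createRxDatabase({
    name: 'mydatabase',
    storage: wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageDexie() }),
    password: decryptedPassword
});
```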
","version":"Next","tagName":"h2"},{"title":"Install RxDB","type":0,"sectionRef":"#","url":"/install.html","content":"","keywords":"","version":"Next"},{"title":"npm","type":1,"pageTitle":"Install RxDB","url":"/install.html#npm","content":" To install the latest release of rxdb and its dependencies and save it to your package.json, run: npm i rxdb --save ","version":"Next","tagName":"h2"},{"title":"peer-dependency","type":1,"pageTitle":"Install RxDB","url":"/install.html#peer-dependency","content":" You also need to install the peer-dependency rxjs if you have not installed it before. npm i rxjs --save ","version":"Next","tagName":"h2"},{"title":"polyfills","type":1,"pageTitle":"Install RxDB","url":"/install.html#polyfills","content":" RxDB is coded with es8 and transpiled to es5. This means you have to install polyfills to support older browsers. For example you can use the babel-polyfills with: npm i @babel/polyfill --save If you need polyfills, you have to import them in your code. import '@babel/polyfill'; ","version":"Next","tagName":"h2"},{"title":"Polyfill the global variable","type":1,"pageTitle":"Install RxDB","url":"/install.html#polyfill-the-global-variable","content":" When you use RxDB with angular or other webpack based frameworks, you might get the error Uncaught ReferenceError: global is not defined. This is because some dependencies of RxDB assume a Node.js-specific global variable that is not added to browser runtimes by some bundlers. You have to add them by your own, like we do here. (window as any).global = window; (window as any).process = { env: { DEBUG: undefined }, }; ","version":"Next","tagName":"h2"},{"title":"Project Setup and Configuration","type":1,"pageTitle":"Install RxDB","url":"/install.html#project-setup-and-configuration","content":" In the examples folder you can find CI tested projects for different frameworks and use cases, while in the /config folder base configuration files for Webpack, Rollup, Mocha, Karma, Typescript are exposed. Consult package.json for the versions of the packages supported. ","version":"Next","tagName":"h2"},{"title":"Installing the latest RxDB build","type":1,"pageTitle":"Install RxDB","url":"/install.html#installing-the-latest-rxdb-build","content":" If you need the latest development state of RxDB, add it as git-dependency into your package.json. "dependencies": { "rxdb": "git+https://[email protected]/pubkey/rxdb.git#commitHash" } Replace commitHash with the hash of the latest build-commit. ","version":"Next","tagName":"h2"},{"title":"Import","type":1,"pageTitle":"Install RxDB","url":"/install.html#import","content":" To import rxdb, add this to your JavaScript file to import the default bundle that contains the RxDB core: import { createRxDatabase, /* ... */ } from 'rxdb'; ","version":"Next","tagName":"h2"},{"title":"Key Compression","type":0,"sectionRef":"#","url":"/key-compression.html","content":"","keywords":"","version":"Next"},{"title":"Enable key compression","type":1,"pageTitle":"Key Compression","url":"/key-compression.html#enable-key-compression","content":" The key compression plugin is a wrapper around any other RxStorage. 1. Wrap your RxStorage with the key compression plugin import { wrappedKeyCompressionStorage } from 'rxdb/plugins/key-compression'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const storageWithKeyCompression = wrappedKeyCompressionStorage({ storage: getRxStorageDexie() }); 2. 
Create an RxDatabase import { createRxDatabase } from 'rxdb/plugins/core'; const db = await createRxDatabase({ name: 'mydatabase', storage: storageWithKeyCompression }); 3. Create a compressed RxCollection const mySchema = { keyCompression: true, // set this to true, to enable the keyCompression version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength } /* ... */ } }; await db.addCollections({ docs: { schema: mySchema } }); ","version":"Next","tagName":"h2"},{"title":"Fulltext Search","type":0,"sectionRef":"#","url":"/fulltext-search.html","content":"","keywords":"","version":"Next"},{"title":"Benefits of using a local fulltext search","type":1,"pageTitle":"Fulltext Search","url":"/fulltext-search.html#benefits-of-using-a-local-fulltext-search","content":" Efficient Search and Indexing The plugin utilizes the FlexSearch library, known for its speed and memory efficiency. This ensures that search operations are performed quickly, even with large datasets. The search engine can handle multi-field queries, partial matching, and complex search operations, providing users with highly relevant results. Local Data Indexing With the plugin, all search operations are performed on the local data stored within the RxDB collections. This means that users can execute fulltext search queries without the need for an external server or database, which is especially beneficial for offline-first applications. The local indexing ensures that search queries are executed quickly, reducing the latency typically associated with remote database queries. Also when used in multiple browser tabs, it is ensured that through Leader Election, only exactly one tabs is doing the work of indexing without having an overhead in the other browser tabs. Real-time Indexing The plugin integrates seamlessly with RxDB's reactive nature. Every time a document is written to an RxCollection, an indexer updates the fulltext search index in real-time. This ensures that search results are always up-to-date, reflecting the most current state of the data without requiring manual reindexing. Persistent indexing The fulltext search index is efficiently persisted within the RxCollection, ensuring that the index remains intact across app restarts. When documents are added or updated in the collection, the index is incrementally updated in real-time, meaning only the changes are processed rather than reindexing the entire dataset. This incremental approach not only optimizes performance but also ensures that subsequent app launches are quick, as there's no need to reindex all the data from scratch, making the search feature both reliable and fast from the moment the app starts. When using an encrypted storage the index itself and incremental updates to it are stored fully encrypted and are only decrypted in-memory. Complex Query Support The FlexSearch-based plugin allows for sophisticated search queries, including multi-term and contextual searches. Users can perform complex searches that go beyond simple keyword matching, enabling more advanced use cases like searching for documents with specific phrases, relevance-based sorting, or even phonetic matching. Offline-First Support and Privacy As RxDB is designed with offline-first applications in mind, the fulltext search plugin supports this paradigm by ensuring that all search operations can be performed offline. 
This is crucial for applications that need to function in environments with intermittent or no internet connectivity, offering users a consistent and reliable search experience with zero latency. ","version":"Next","tagName":"h2"},{"title":"Using the RxDB Fulltext Search","type":1,"pageTitle":"Fulltext Search","url":"/fulltext-search.html#using-the-rxdb-fulltext-search","content":" The flexsearch search is a RxDB Premium Package 👑 which must be purchased and imported from the rxdb-premium npm package. Step 1: Add the RxDBFlexSearchPlugin to RxDB. import { RxDBFlexSearchPlugin } from 'rxdb-premium/plugins/flexsearch'; import { addRxPlugin } from 'rxdb/plugins/core'; addRxPlugin(RxDBFlexSearchPlugin); Step 2: Create a RxFulltextSearch instance on top of a collection with the addFulltextSearch() function. import { addFulltextSearch } from 'rxdb-premium/plugins/flexsearch'; const flexSearch = await addFulltextSearch({ // unique identifier. Used to store metadata and continue indexing on restarts/reloads. identifier: 'my-search', // The source collection on whose documents the search is based on collection: myRxCollection, /** * Transforms the document data to a given searchable string. * This can be done by returning a single string property of the document * or even by concatenating and transforming multiple fields like: * doc => doc.firstName + ' ' + doc.lastName */ docToString: doc => doc.firstName, /** * (Optional) * Amount of documents to index at once. * See https://rxdb.info/rx-pipeline.html */ batchSize: number; /** * (Optional) * lazy: Initialize the in memory fulltext index at the first search query. * instant: Directly initialize so that the index is already there on the first query. * Default: 'instant' */ initialization: 'instant', /** * (Optional) * @link https://github.com/nextapps-de/flexsearch#index-options */ indexOptions: {}, }); Step 3: Run a search operation: // find all documents whose searchstring contains "foobar" const foundDocuments = await flexSearch.find('foobar'); /** * You can also use search options as second parameter * @link https://github.com/nextapps-de/flexsearch#search-options */ const foundDocuments = await flexSearch.find('foobar', { limit: 10 }); ","version":"Next","tagName":"h2"},{"title":"Leader-Election","type":0,"sectionRef":"#","url":"/leader-election.html","content":"","keywords":"","version":"Next"},{"title":"Use-case-example","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#use-case-example","content":" Imagine we have a website which displays the current temperature of the visitors location in various charts, numbers or heatmaps. To always display the live-data, the website opens a websocket to our API-Server which sends the current temperature every 10 seconds. Using the way most sites are currently build, we can now open it in 5 browser-tabs and it will open 5 websockets which send data 6*5=30 times per minute. This will not only waste the power of your clients device, but also wastes your api-servers resources by opening redundant connections. ","version":"Next","tagName":"h2"},{"title":"Solution","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#solution","content":" The solution to this redundancy is the usage of a leader-election-algorithm which makes sure that always exactly one tab is managing the remote-data-access. The managing tab is the elected leader and stays leader until it is closed. No matter how many tabs are opened or closed, there must be always exactly one leader. 
You could now start implementing a messaging system between your browser tabs, decide which one is the leader, solve conflicts and reassign a new leader when the old one 'dies'. Or just use RxDB, which does all these things for you. ","version":"Next","tagName":"h2"},{"title":"Add the leader election plugin","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#add-the-leader-election-plugin","content":" To enable the leader election, you have to add the leader-election plugin. import { addRxPlugin } from 'rxdb'; import { RxDBLeaderElectionPlugin } from 'rxdb/plugins/leader-election'; addRxPlugin(RxDBLeaderElectionPlugin); ","version":"Next","tagName":"h2"},{"title":"Code-example","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#code-example","content":" To make it easy, here is an example where the temperature is pulled every ten seconds and saved to a collection. The pulling starts at the moment where the opened tab becomes the leader. const db = await createRxDatabase({ name: 'weatherDB', storage: getRxStorageDexie(), password: 'myPassword', multiInstance: true }); await db.addCollections({ temperature: { schema: mySchema } }); db.waitForLeadership() .then(() => { console.log('Long lives the king!'); // <- runs when db becomes leader setInterval(async () => { const response = await fetch('https://example.com/api/temp/'); const temp = await response.json(); await db.temperature.insert({ degrees: temp, time: new Date().getTime() }); }, 1000 * 10); }); ","version":"Next","tagName":"h2"},{"title":"Handle Duplicate Leaders","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#handle-duplicate-leaders","content":" On rare occasions, it can happen that more than one leader is elected. This can happen when the CPU is at 100% or when, for any other reason, the JavaScript process is fully blocked for a long time. In most cases this is not really a problem because on duplicate leaders, both browser tabs replicate with the same backend anyway. To handle the duplicate leader event, you can access the leader elector and set a handler: import { getLeaderElectorByBroadcastChannel } from 'rxdb/plugins/leader-election'; const leaderElector = getLeaderElectorByBroadcastChannel(broadcastChannel); leaderElector.onduplicate = async () => { // Duplicate leader detected -> reload the page. location.reload(); }; ","version":"Next","tagName":"h2"},{"title":"Live-Example","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#live-example","content":" In this example the leader is marked with the crown ♛ ","version":"Next","tagName":"h2"},{"title":"Try it out","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#try-it-out","content":" Run the angular-example where the leading tab is marked with a crown on the top-right-corner. ","version":"Next","tagName":"h2"},{"title":"Notice","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#notice","content":" The leader election is implemented via the broadcast-channel module. The leader is elected between different processes on the same JavaScript runtime, like multiple tabs in the same browser or multiple Node.js processes on the same machine. It will not run between different replicated instances. 
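Besides waitForLeadership(), the plugin also lets you check the current leadership state directly on the database instance. A minimal sketch, assuming the RxDBLeaderElectionPlugin has been added as shown above and that isLeader() returns a plain boolean for the current instance:

```js
import { addRxPlugin, createRxDatabase } from 'rxdb';
import { RxDBLeaderElectionPlugin } from 'rxdb/plugins/leader-election';
import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie';

addRxPlugin(RxDBLeaderElectionPlugin);

const db = await createRxDatabase({
    name: 'weatherDB',
    storage: getRxStorageDexie(),
    multiInstance: true
});

// synchronous check: is this tab the elected leader right now?
if (db.isLeader()) {
    console.log('this tab currently holds the leadership');
}

// asynchronous: resolves once this tab becomes the leader
db.waitForLeadership().then(() => {
    console.log('this tab has become the leader');
});
```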
","version":"Next","tagName":"h2"},{"title":"RxDB Logger Plugin","type":0,"sectionRef":"#","url":"/logger.html","content":"","keywords":"","version":"Next"},{"title":"Using the logger plugin","type":1,"pageTitle":"RxDB Logger Plugin","url":"/logger.html#using-the-logger-plugin","content":" The logger is a wrapper that can be wrapped around any RxStorage. Once your storage is wrapped, you can create your database with the wrapped storage and the logging will automatically happen. import { wrappedLoggerStorage } from 'rxdb-premium/plugins/logger'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; // wrap a storage with the logger const loggingStorage = wrappedLoggerStorage({ storage: getRxStorageIndexedDB({}) }); // create your database with the wrapped storage const db = await createRxDatabase({ name: 'mydatabase', storage: loggingStorage }); // create collections etc... ","version":"Next","tagName":"h2"},{"title":"Specify what to be logged","type":1,"pageTitle":"RxDB Logger Plugin","url":"/logger.html#specify-what-to-be-logged","content":" By default, the plugin will log all operations and it will also run a console.time()/console.timeEnd() around each operation. You can specify what to log so that your logs are less noisy. For this you provide a settings object when calling wrappedLoggerStorage(). const loggingStorage = wrappedLoggerStorage({ storage: getRxStorageIndexedDB({}), settings: { // can used to prefix all log strings, default='' prefix: 'my-prefix', /** * Be default, all settings are true. */ // if true, it will log timings with console.time() and console.timeEnd() times: true, // if false, it will not log meta storage instances like used in replication metaStorageInstances: true, // operations bulkWrite: true, findDocumentsById: true, query: true, count: true, info: true, getAttachmentData: true, getChangedDocumentsSince: true, cleanup: true, close: true, remove: true } }); ","version":"Next","tagName":"h2"},{"title":"Using custom logging functions","type":1,"pageTitle":"RxDB Logger Plugin","url":"/logger.html#using-custom-logging-functions","content":" With the logger plugin you can also run custom log functions for all operations. const loggingStorage = wrappedLoggerStorage({ storage: getRxStorageIndexedDB({}), onOperationStart: (operationsName, logId, args) => void, onOperationEnd: (operationsName, logId, args) => void, onOperationError: (operationsName, logId, args, error) => void }); ","version":"Next","tagName":"h2"},{"title":"Middleware","type":0,"sectionRef":"#","url":"/middleware.html","content":"","keywords":"","version":"Next"},{"title":"List","type":1,"pageTitle":"Middleware","url":"/middleware.html#list","content":" RxDB supports the following hooks: preInsertpostInsertpreSavepostSavepreRemovepostRemovepostCreate ","version":"Next","tagName":"h2"},{"title":"Why is there no validate-hook?","type":1,"pageTitle":"Middleware","url":"/middleware.html#why-is-there-no-validate-hook","content":" Different to mongoose, the validation on document-data is running on the field-level for every change to a document. This means if you set the value lastName of a RxDocument, then the validation will only run on the changed field, not the whole document. Therefore it is not useful to have validate-hooks when a document is written to the database. 
","version":"Next","tagName":"h3"},{"title":"Use Cases","type":1,"pageTitle":"Middleware","url":"/middleware.html#use-cases","content":" Middleware are useful for atomizing model logic and avoiding nested blocks of async code. Here are some other ideas: complex validationremoving dependent documentsasynchronous defaultsasynchronous tasks that a certain action triggerstriggering custom eventsnotifications ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Middleware","url":"/middleware.html#usage","content":" All hooks have the plain data as first parameter, and all but preInsert also have the RxDocument-instance as second parameter. If you want to modify the data in the hook, change attributes of the first parameter. All hook functions are also this-bind to the RxCollection-instance. ","version":"Next","tagName":"h2"},{"title":"Insert","type":1,"pageTitle":"Middleware","url":"/middleware.html#insert","content":" An insert-hook receives the data-object of the new document. lifecycle RxCollection.insert is calledpreInsert series-hookspreInsert parallel-hooksschema validation runsnew document is written to databasepostInsert series-hookspostInsert parallel-hooksevent is emitted to RxDatabase and RxCollection preInsert // series myCollection.preInsert(function(plainData){ // set age to 50 before saving plainData.age = 50; }, false); // parallel myCollection.preInsert(function(plainData){ }, true); // async myCollection.preInsert(function(plainData){ return new Promise(res => setTimeout(res, 100)); }, false); // stop the insert-operation myCollection.preInsert(function(plainData){ throw new Error('stop'); }, false); postInsert // series myCollection.postInsert(function(plainData, rxDocument){ }, false); // parallel myCollection.postInsert(function(plainData, rxDocument){ }, true); // async myCollection.postInsert(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); ","version":"Next","tagName":"h3"},{"title":"Save","type":1,"pageTitle":"Middleware","url":"/middleware.html#save","content":" A save-hook receives the document which is saved. lifecycle RxDocument.save is calledpreSave series-hookspreSave parallel-hooksupdated document is written to databasepostSave series-hookspostSave parallel-hooksevent is emitted to RxDatabase and RxCollection preSave // series myCollection.preSave(function(plainData, rxDocument){ // modify anyField before saving plainData.anyField = 'anyValue'; }, false); // parallel myCollection.preSave(function(plainData, rxDocument){ }, true); // async myCollection.preSave(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); // stop the save-operation myCollection.preSave(function(plainData, rxDocument){ throw new Error('stop'); }, false); postSave // series myCollection.postSave(function(plainData, rxDocument){ }, false); // parallel myCollection.postSave(function(plainData, rxDocument){ }, true); // async myCollection.postSave(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); ","version":"Next","tagName":"h3"},{"title":"Remove","type":1,"pageTitle":"Middleware","url":"/middleware.html#remove","content":" An remove-hook receives the document which is removed. 
lifecycle RxDocument.remove is calledpreRemove series-hookspreRemove parallel-hooksdeleted document is written to databasepostRemove series-hookspostRemove parallel-hooksevent is emitted to RxDatabase and RxCollection preRemove // series myCollection.preRemove(function(plainData, rxDocument){ }, false); // parallel myCollection.preRemove(function(plainData, rxDocument){ }, true); // async myCollection.preRemove(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); // stop the remove-operation myCollection.preRemove(function(plainData, rxDocument){ throw new Error('stop'); }, false); postRemove // series myCollection.postRemove(function(plainData, rxDocument){ }, false); // parallel myCollection.postRemove(function(plainData, rxDocument){ }, true); // async myCollection.postRemove(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); ","version":"Next","tagName":"h3"},{"title":"postCreate","type":1,"pageTitle":"Middleware","url":"/middleware.html#postcreate","content":" This hook is called whenever a RxDocument is constructed. You can use postCreate to modify every RxDocument-instance of the collection. This adds a flexible way to add specify behavior to every document. You can also use it to add custom getter/setter to documents. PostCreate-hooks cannot be asynchronous. myCollection.postCreate(function(plainData, rxDocument){ Object.defineProperty(rxDocument, 'myField', { get: () => 'foobar', }); }); const doc = await myCollection.findOne().exec(); console.log(doc.myField); // 'foobar' note This hook does not run on already created or cached documents. Make sure to add postCreate-hooks before interacting with the collection. ","version":"Next","tagName":"h3"},{"title":"RxDB Error Messages","type":0,"sectionRef":"#","url":"/errors.html","content":"","keywords":"","version":"Next"},{"title":"All RxDB error messages","type":1,"pageTitle":"RxDB Error Messages","url":"/errors.html#all-rxdb-error-messages","content":" Code: UT1 Given name is no string or empty Search In CodeSearch In IssuesSearch In Chat Code: UT2 Collection- and database-names must match the regex to be compatible with couchdb databases. See https://neighbourhood.ie/blog/2020/10/13/everything-you-need-to-know-about-couchdb-database-names/ info: if your database-name specifies a folder, the name must contain the slash-char '/' or '\\' Search In CodeSearch In IssuesSearch In Chat Code: UT3 Replication-direction must either be push or pull or both. But not none Search In CodeSearch In IssuesSearch In Chat Code: UT4 Given leveldown is no valid adapter Search In CodeSearch In IssuesSearch In Chat Code: UT5 KeyCompression is set to true in the schema but no key-compression handler is used in the storage Search In CodeSearch In IssuesSearch In Chat Code: UT6 Schema contains encrypted fields but no encryption handler is used in the storage Search In CodeSearch In IssuesSearch In Chat Code: UT7 Attachments.compression is enabled but no attachment-compression plugin is used Search In CodeSearch In IssuesSearch In Chat Code: PL1 Given plugin is not RxDB plugin. 
Search In CodeSearch In IssuesSearch In Chat Code: PL3 A plugin with the same name was already added but it was not the exact same JavaScript object Search In CodeSearch In IssuesSearch In Chat Code: P2 BulkWrite() cannot be called with an empty array Search In CodeSearch In IssuesSearch In Chat Code: QU1 RxQuery._execOverDatabase(): op not known Search In CodeSearch In IssuesSearch In Chat Code: QU4 RxQuery.regex(): You cannot use .regex() on the primary field Search In CodeSearch In IssuesSearch In Chat Code: QU5 RxQuery.sort(): does not work because key is not defined in the schema Search In CodeSearch In IssuesSearch In Chat Code: QU6 RxQuery.limit(): cannot be called on .findOne() Search In CodeSearch In IssuesSearch In Chat Code: QU9 ThrowIfMissing can only be used in findOne queries Search In CodeSearch In IssuesSearch In Chat Code: QU10 Result empty and throwIfMissing: true Search In CodeSearch In IssuesSearch In Chat Code: QU11 RxQuery: no valid query params given Search In CodeSearch In IssuesSearch In Chat Code: QU12 Given index is not in schema Search In CodeSearch In IssuesSearch In Chat Code: QU13 A top level field of the query is not included in the schema Search In CodeSearch In IssuesSearch In Chat Code: QU14 Running a count() query in slow mode is now allowed. Either run a count() query with a selector that fully matches an index or set allowSlowCount=true when calling the createRxDatabase Search In CodeSearch In IssuesSearch In Chat Code: QU15 For count queries it is not allowed to use skip or limit Search In CodeSearch In IssuesSearch In Chat Code: QU16 $regex queries must be defined by a string, not an RegExp instance. This is because RegExp objects cannot be JSON stringified and also they are mutable which would be dangerous Search In CodeSearch In IssuesSearch In Chat Code: QU17 Chained queries cannot be used on findByIds() RxQuery instances Search In CodeSearch In IssuesSearch In Chat Code: QU18 Malformated query result data. This likely happens because you create a OPFS-storage RxDatabase inside of a worker but did not set the usesRxDatabaseInWorker setting. https://rxdb.info/rx-storage-opfs.html#setting-usesrxdatabaseinworker-when-a-rxdatabase-is-also-used-inside-of-the-worker Search In CodeSearch In IssuesSearch In Chat Code: QU19 Queries must not contain fields or properties with the value `undefined`: https://github.com/pubkey/rxdb/issues/6792#issuecomment-2624555824 Search In CodeSearch In IssuesSearch In Chat Code: MQ1 Path must be a string or object Search In CodeSearch In IssuesSearch In Chat Code: MQ2 Invalid argument Search In CodeSearch In IssuesSearch In Chat Code: MQ3 Invalid sort() argument. Must be a string, object, or array Search In CodeSearch In IssuesSearch In Chat Code: MQ4 Invalid argument. Expected instanceof mquery or plain object Search In CodeSearch In IssuesSearch In Chat Code: MQ5 Method must be used after where() when called with these arguments Search In CodeSearch In IssuesSearch In Chat Code: MQ6 Can't mix sort syntaxes. Use either array or object | .sort([['field', 1], ['test', -1]]) | .sort({ field: 1, test: -1 }) Search In CodeSearch In IssuesSearch In Chat Code: MQ7 Invalid sort value Search In CodeSearch In IssuesSearch In Chat Code: MQ8 Can't mix sort syntaxes. 
Use either array or object Search In CodeSearch In IssuesSearch In Chat Code: DB1 RxDocument.prepare(): another instance on this adapter has a different password Search In CodeSearch In IssuesSearch In Chat Code: DB2 RxDatabase.addCollections(): collection-names cannot start with underscore _ Search In CodeSearch In IssuesSearch In Chat Code: DB3 RxDatabase.addCollections(): collection already exists. use myDatabase[collectionName] to get it Search In CodeSearch In IssuesSearch In Chat Code: DB4 RxDatabase.addCollections(): schema is missing Search In CodeSearch In IssuesSearch In Chat Code: DB5 RxDatabase.addCollections(): collection-name not allowed Search In CodeSearch In IssuesSearch In Chat Code: DB6 RxDatabase.addCollections(): another instance created this collection with a different schema. Read this https://rxdb.info/questions-answers.html?console=qa#cant-change-the-schema Search In CodeSearch In IssuesSearch In Chat Code: DB8 CreateRxDatabase(): A RxDatabase with the same name and adapter already exists. Make sure to use this combination only once or set ignoreDuplicate to true if you do this intentional- This often happens in react projects with hot reload that reloads the code without reloading the process. Search In CodeSearch In IssuesSearch In Chat Code: DB9 IgnoreDuplicate is only allowed in dev-mode and must never be used in production Search In CodeSearch In IssuesSearch In Chat Code: DB11 CreateRxDatabase(): Invalid db-name, folder-paths must not have an ending slash Search In CodeSearch In IssuesSearch In Chat Code: DB12 RxDatabase.addCollections(): could not write to internal store Search In CodeSearch In IssuesSearch In Chat Code: DB13 CreateRxDatabase(): Invalid db-name or collection name, name contains the dollar sign Search In CodeSearch In IssuesSearch In Chat Code: DB14 No custom reactivity factory added on database creation Search In CodeSearch In IssuesSearch In Chat Code: COL1 RxDocument.insert() You cannot insert an existing document Search In CodeSearch In IssuesSearch In Chat Code: COL2 RxCollection.insert() fieldName ._id can only be used as primaryKey Search In CodeSearch In IssuesSearch In Chat Code: COL3 RxCollection.upsert() does not work without primary Search In CodeSearch In IssuesSearch In Chat Code: COL4 RxCollection.incrementalUpsert() does not work without primary Search In CodeSearch In IssuesSearch In Chat Code: COL5 RxCollection.find() if you want to search by _id, use .findOne(_id) Search In CodeSearch In IssuesSearch In Chat Code: COL6 RxCollection.findOne() needs a queryObject or string. Notice that in RxDB, primary keys must be strings and cannot be numbers. 
Search In CodeSearch In IssuesSearch In Chat Code: COL7 Hook must be a function Search In CodeSearch In IssuesSearch In Chat Code: COL8 Hooks-when not known Search In CodeSearch In IssuesSearch In Chat Code: COL9 RxCollection.addHook() hook-name not known Search In CodeSearch In IssuesSearch In Chat Code: COL10 RxCollection .postCreate-hooks cannot be async Search In CodeSearch In IssuesSearch In Chat Code: COL11 MigrationStrategies must be an object Search In CodeSearch In IssuesSearch In Chat Code: COL12 A migrationStrategy is missing or too much Search In CodeSearch In IssuesSearch In Chat Code: COL13 MigrationStrategy must be a function Search In CodeSearch In IssuesSearch In Chat Code: COL14 Given static method-name is not a string Search In CodeSearch In IssuesSearch In Chat Code: COL15 Static method-names cannot start with underscore _ Search In CodeSearch In IssuesSearch In Chat Code: COL16 Given static method is not a function Search In CodeSearch In IssuesSearch In Chat Code: COL17 RxCollection.ORM: statics-name not allowed Search In CodeSearch In IssuesSearch In Chat Code: COL18 Collection-method not allowed because fieldname is in the schema Search In CodeSearch In IssuesSearch In Chat Code: COL20 Storage write error Search In CodeSearch In IssuesSearch In Chat Code: COL21 The RxCollection is closed or removed already, either from this JavaScript realm or from another, like a browser tab Search In CodeSearch In IssuesSearch In Chat Code: CONFLICT Document update conflict. When changing a document you must work on the previous revision Search In CodeSearch In IssuesSearch In Chat Code: COL22 .bulkInsert() and .bulkUpsert() cannot be run with multiple documents that have the same primary key Search In CodeSearch In IssuesSearch In Chat Code: COL23 In the open-source version of RxDB, the amount of collections that can exist in parallel is limited to 16. 
If you already purchased the premium access, you can remove this limit: https://rxdb.info/rx-collection.html#faq Search In CodeSearch In IssuesSearch In Chat Code: DOC1 RxDocument.get$ cannot get observable of in-array fields because order cannot be guessed Search In CodeSearch In IssuesSearch In Chat Code: DOC2 Cannot observe primary path Search In CodeSearch In IssuesSearch In Chat Code: DOC3 Final fields cannot be observed Search In CodeSearch In IssuesSearch In Chat Code: DOC4 RxDocument.get$ cannot observe a non-existed field Search In CodeSearch In IssuesSearch In Chat Code: DOC5 RxDocument.populate() cannot populate a non-existed field Search In CodeSearch In IssuesSearch In Chat Code: DOC6 RxDocument.populate() cannot populate because path has no ref Search In CodeSearch In IssuesSearch In Chat Code: DOC7 RxDocument.populate() ref-collection not in database Search In CodeSearch In IssuesSearch In Chat Code: DOC8 RxDocument.set(): primary-key cannot be modified Search In CodeSearch In IssuesSearch In Chat Code: DOC9 Final fields cannot be modified Search In CodeSearch In IssuesSearch In Chat Code: DOC10 RxDocument.set(): cannot set childpath when rootPath not selected Search In CodeSearch In IssuesSearch In Chat Code: DOC11 RxDocument.save(): can't save deleted document Search In CodeSearch In IssuesSearch In Chat Code: DOC13 RxDocument.remove(): Document is already deleted Search In CodeSearch In IssuesSearch In Chat Code: DOC14 RxDocument.close() does not exist Search In CodeSearch In IssuesSearch In Chat Code: DOC15 Query cannot be an array Search In CodeSearch In IssuesSearch In Chat Code: DOC16 Since version 8.0.0 RxDocument.set() can only be called on temporary RxDocuments Search In CodeSearch In IssuesSearch In Chat Code: DOC17 Since version 8.0.0 RxDocument.save() can only be called on non-temporary documents Search In CodeSearch In IssuesSearch In Chat Code: DOC18 Document property for composed primary key is missing Search In CodeSearch In IssuesSearch In Chat Code: DOC19 Value of primary key(s) cannot be changed Search In CodeSearch In IssuesSearch In Chat Code: DOC20 PrimaryKey missing Search In CodeSearch In IssuesSearch In Chat Code: DOC21 PrimaryKey must be equal to PrimaryKey.trim(). It cannot start or end with a whitespace Search In CodeSearch In IssuesSearch In Chat Code: DOC22 PrimaryKey must not contain a linebreak Search In CodeSearch In IssuesSearch In Chat Code: DOC23 PrimaryKey must not contain a double-quote ["] Search In CodeSearch In IssuesSearch In Chat Code: DOC24 Given document data could not be structured cloned. This happens if you pass non-plain-json data into it, like a Date() object or a Function. In vue.js this happens if you use ref() on the document data which transforms it into a Proxy object. Search In CodeSearch In IssuesSearch In Chat Code: DM1 Migrate() Migration has already run Search In CodeSearch In IssuesSearch In Chat Code: DM2 Migration of document failed final document does not match final schema Search In CodeSearch In IssuesSearch In Chat Code: DM3 Migration already running Search In CodeSearch In IssuesSearch In Chat Code: DM4 Migration errored Search In CodeSearch In IssuesSearch In Chat Code: DM5 Cannot open database state with newer RxDB version. You have to migrate your database state first. 
See https://rxdb.info/migration-storage.html?console=storage Search In CodeSearch In IssuesSearch In Chat Code: AT1 To use attachments, please define this in your schema Search In CodeSearch In IssuesSearch In Chat Code: EN1 Password is not valid Search In CodeSearch In IssuesSearch In Chat Code: EN2 ValidatePassword: min-length of password not complied Search In CodeSearch In IssuesSearch In Chat Code: EN3 Schema contains encrypted properties but no password is given Search In CodeSearch In IssuesSearch In Chat Code: EN4 Password not valid Search In CodeSearch In IssuesSearch In Chat Code: JD1 You must create the collections before you can import their data Search In CodeSearch In IssuesSearch In Chat Code: JD2 RxCollection.importJSON(): the imported json relies on a different schema Search In CodeSearch In IssuesSearch In Chat Code: JD3 RxCollection.importJSON(): json.passwordHash does not match the own Search In CodeSearch In IssuesSearch In Chat Code: LD1 RxDocument.allAttachments$ can't use attachments on local documents Search In CodeSearch In IssuesSearch In Chat Code: LD2 RxDocument.get(): objPath must be a string Search In CodeSearch In IssuesSearch In Chat Code: LD3 RxDocument.get$ cannot get observable of in-array fields because order cannot be guessed Search In CodeSearch In IssuesSearch In Chat Code: LD4 Cannot observe primary path Search In CodeSearch In IssuesSearch In Chat Code: LD5 RxDocument.set() id cannot be modified Search In CodeSearch In IssuesSearch In Chat Code: LD6 LocalDocument: Function is not usable on local documents Search In CodeSearch In IssuesSearch In Chat Code: LD7 Local document already exists Search In CodeSearch In IssuesSearch In Chat Code: LD8 LocalDocuments not activated. Set localDocuments=true on creation, when you want to store local documents on the RxDatabase or RxCollection. Search In CodeSearch In IssuesSearch In Chat Code: RC1 Replication: already added Search In CodeSearch In IssuesSearch In Chat Code: RC2 ReplicateCouchDB() query must be from the same RxCollection Search In CodeSearch In IssuesSearch In Chat Code: RC4 RxCouchDBReplicationState.awaitInitialReplication() cannot await initial replication when live: true Search In CodeSearch In IssuesSearch In Chat Code: RC5 RxCouchDBReplicationState.awaitInitialReplication() cannot await initial replication if multiInstance because the replication might run on another instance Search In CodeSearch In IssuesSearch In Chat Code: RC6 SyncFirestore() serverTimestampField MUST NOT be part of the collections schema and MUST NOT be nested. 
Search In CodeSearch In IssuesSearch In Chat Code: RC7 SimplePeer requires to have process.nextTick() polyfilled, see https://rxdb.info/replication-webrtc.html?console=webrtc Search In CodeSearch In IssuesSearch In Chat Code: RC_PULL RxReplication pull handler threw an error - see .errors for more details Search In CodeSearch In IssuesSearch In Chat Code: RC_STREAM RxReplication pull stream$ threw an error - see .errors for more details Search In CodeSearch In IssuesSearch In Chat Code: RC_PUSH RxReplication push handler threw an error - see .errors for more details Search In CodeSearch In IssuesSearch In Chat Code: RC_PUSH_NO_AR RxReplication push handler did not return an array with the conflicts Search In CodeSearch In IssuesSearch In Chat Code: RC_WEBRTC_PEER RxReplication WebRTC Peer has error Search In CodeSearch In IssuesSearch In Chat Code: RC_COUCHDB_1 ReplicateCouchDB() url must end with a slash like 'https://example.com/mydatabase/' Search In CodeSearch In IssuesSearch In Chat Code: RC_COUCHDB_2 ReplicateCouchDB() did not get valid result with rows. Search In CodeSearch In IssuesSearch In Chat Code: RC_OUTDATED Outdated client, update required. Replication was canceled Search In CodeSearch In IssuesSearch In Chat Code: RC_UNAUTHORIZED Unauthorized client, update the replicationState.headers to set correct auth data Search In CodeSearch In IssuesSearch In Chat Code: RC_FORBIDDEN Client behaves wrong so the replication was canceled. Mostly happens if the client tries to write data that it is not allowed to Search In CodeSearch In IssuesSearch In Chat Code: SC1 Fieldnames do not match the regex Search In CodeSearch In IssuesSearch In Chat Code: SC2 SchemaCheck: name 'item' reserved for array-fields Search In CodeSearch In IssuesSearch In Chat Code: SC3 SchemaCheck: fieldname has a ref-array but items-type is not string Search In CodeSearch In IssuesSearch In Chat Code: SC4 SchemaCheck: fieldname has a ref but is not type string, [string,null] or array<string> Search In CodeSearch In IssuesSearch In Chat Code: SC6 SchemaCheck: primary can only be defined at top-level Search In CodeSearch In IssuesSearch In Chat Code: SC7 SchemaCheck: default-values can only be defined at top-level Search In CodeSearch In IssuesSearch In Chat Code: SC8 SchemaCheck: first level-fields cannot start with underscore _ Search In CodeSearch In IssuesSearch In Chat Code: SC10 SchemaCheck: schema defines ._rev, this will be done automatically Search In CodeSearch In IssuesSearch In Chat Code: SC11 SchemaCheck: schema needs a number >=0 as version Search In CodeSearch In IssuesSearch In Chat Code: SC13 SchemaCheck: primary is always index, do not declare it as index Search In CodeSearch In IssuesSearch In Chat Code: SC14 SchemaCheck: primary is always unique, do not declare it as index Search In CodeSearch In IssuesSearch In Chat Code: SC15 SchemaCheck: primary cannot be encrypted Search In CodeSearch In IssuesSearch In Chat Code: SC16 SchemaCheck: primary must have type: string Search In CodeSearch In IssuesSearch In Chat Code: SC17 SchemaCheck: top-level fieldname is not allowed Search In CodeSearch In IssuesSearch In Chat Code: SC18 SchemaCheck: indexes must be an array Search In CodeSearch In IssuesSearch In Chat Code: SC19 SchemaCheck: indexes must contain strings or arrays of strings Search In CodeSearch In IssuesSearch In Chat Code: SC20 SchemaCheck: indexes.array must contain strings Search In CodeSearch In IssuesSearch In Chat Code: SC21 SchemaCheck: given index is not defined in schema Search In 
CodeSearch In IssuesSearch In Chat Code: SC22 SchemaCheck: given indexKey is not type:string Search In CodeSearch In IssuesSearch In Chat Code: SC23 SchemaCheck: fieldname is not allowed Search In CodeSearch In IssuesSearch In Chat Code: SC24 SchemaCheck: required fields must be set via array. See https://spacetelescope.github.io/understanding-json-schema/reference/object.html#required Search In CodeSearch In IssuesSearch In Chat Code: SC25 SchemaCheck: compoundIndexes needs to be specified in the indexes field Search In CodeSearch In IssuesSearch In Chat Code: SC26 SchemaCheck: indexes needs to be specified at collection schema level Search In CodeSearch In IssuesSearch In Chat Code: SC28 SchemaCheck: encrypted fields is not defined in the schema Search In CodeSearch In IssuesSearch In Chat Code: SC29 SchemaCheck: missing object key 'properties' Search In CodeSearch In IssuesSearch In Chat Code: SC30 SchemaCheck: primaryKey is required Search In CodeSearch In IssuesSearch In Chat Code: SC32 SchemaCheck: primary field must have the type string/number/integer Search In CodeSearch In IssuesSearch In Chat Code: SC33 SchemaCheck: used primary key is not a property in the schema Search In CodeSearch In IssuesSearch In Chat Code: SC34 Fields of type string that are used in an index, must have set the maxLength attribute in the schema Search In CodeSearch In IssuesSearch In Chat Code: SC35 Fields of type number/integer that are used in an index, must have set the multipleOf attribute in the schema Search In CodeSearch In IssuesSearch In Chat Code: SC36 A field of this type cannot be used as index Search In CodeSearch In IssuesSearch In Chat Code: SC37 Fields of type number that are used in an index, must have set the minimum and maximum attribute in the schema Search In CodeSearch In IssuesSearch In Chat Code: SC38 Fields of type boolean that are used in an index, must be required in the schema Search In CodeSearch In IssuesSearch In Chat Code: SC39 The primary key must have the maxLength attribute set. Ensure you use the dev-mode plugin when developing with RxDB. Search In CodeSearch In IssuesSearch In Chat Code: SC40 $ref fields in the schema are not allowed. RxDB cannot resolve related schemas because it would have a negative performance impact.It would have to run http requests on runtime. $ref fields should be resolved during build time. Search In CodeSearch In IssuesSearch In Chat Code: SC41 Minimum, maximum and maxLength values for indexes must be real numbers, not Infinity or -Infinity Search In CodeSearch In IssuesSearch In Chat Code: DVM1 When dev-mode is enabled, your storage must use one of the schema validators at the top level. This is because most problems people have with RxDB is because they store data that is not valid to the schema which causes strange bugs and problems. Search In CodeSearch In IssuesSearch In Chat Code: VD1 Sub-schema not found, does the schemaPath exists in your schema? Search In CodeSearch In IssuesSearch In Chat Code: VD2 Object does not match schema Search In CodeSearch In IssuesSearch In Chat Code: S1 You cannot create collections after calling RxDatabase.server() Search In CodeSearch In IssuesSearch In Chat Code: GQL1 GraphQL replication: cannot find sub schema by key Search In CodeSearch In IssuesSearch In Chat Code: GQL3 GraphQL replication: pull returns more documents then batchSize Search In CodeSearch In IssuesSearch In Chat Code: CRDT1 CRDT operations cannot be used because the crdt options are not set in the schema. 
Search In CodeSearch In IssuesSearch In Chat Code: CRDT2 RxDocument.incrementalModify() cannot be used when CRDTs are activated. Search In CodeSearch In IssuesSearch In Chat Code: CRDT3 To use CRDTs you MUST NOT set a conflictHandler because the default CRDT conflict handler must be used Search In CodeSearch In IssuesSearch In Chat Code: DXE1 Non-required index fields are not possible with the dexie.js RxStorage: https://github.com/pubkey/rxdb/pull/6643#issuecomment-2505310082 Search In CodeSearch In IssuesSearch In Chat Code: RM1 Cannot communicate with a remote that was build on a different RxDB version. Did you forget to rebuild your workers when updating RxDB? Search In CodeSearch In IssuesSearch In Chat Code: SNH This should never happen Search In CodeSearch In IssuesSearch In Chat ","version":"Next","tagName":"h2"},{"title":"Migrate Database Data on schema changes","type":0,"sectionRef":"#","url":"/migration-schema.html","content":"","keywords":"","version":"Next"},{"title":"Providing strategies","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#providing-strategies","content":" Upon creation of a collection, you have to provide migrationStrategies when your schema's version-number is greater than 0. To do this, you have to add an object to the migrationStrategies property where a function for every schema-version is assigned. A migrationStrategy is a function which gets the old document-data as a parameter and returns the new, transformed document-data. If the strategy returns null, the document will be removed instead of migrated. myDatabase.addCollections({ messages: { schema: messageSchemaV1, migrationStrategies: { // 1 means, this transforms data from version 0 to version 1 1: function(oldDoc){ oldDoc.time = new Date(oldDoc.time).getTime(); // string to unix return oldDoc; } } } }); Asynchronous strategies can also be used: myDatabase.addCollections({ messages: { schema: messageSchemaV1, migrationStrategies: { 1: function(oldDoc){ oldDoc.time = new Date(oldDoc.time).getTime(); // string to unix return oldDoc; }, /** * 2 means, this transforms data from version 1 to version 2 * this returns a promise which resolves with the new document-data */ 2: function(oldDoc){ // in the new schema (version: 2) we defined 'senderCountry' as required field (string) // so we must get the country of the message-sender from the server const coordinates = oldDoc.coordinates; return fetch('http://myserver.com/api/countryByCoordinates/'+coordinates+'/') .then(response => { const response = response.json(); oldDoc.senderCountry = response; return oldDoc; }); } } } }); you can also filter which documents should be migrated: myDatabase.addCollections({ messages: { schema: messageSchemaV1, migrationStrategies: { // 1 means, this transforms data from version 0 to version 1 1: function(oldDoc){ oldDoc.time = new Date(oldDoc.time).getTime(); // string to unix return oldDoc; }, /** * this removes all documents older then 2017-02-12 * they will not appear in the new collection */ 2: function(oldDoc){ if(oldDoc.time < 1486940585) return null; else return oldDoc; } } } }); ","version":"Next","tagName":"h2"},{"title":"autoMigrate","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#automigrate","content":" By default, the migration automatically happens when the collection is created. Calling RxDatabase.addCollections() returns only when the migration has finished. 
If you have lots of data or the migrationStrategies take a long time, it might be better to start the migration 'by hand' and show the migration-state to the user as a loading-bar. const messageCol = await myDatabase.addCollections({ messages: { schema: messageSchemaV1, autoMigrate: false, // <- migration will not run at creation migrationStrategies: { 1: async function(oldDoc){ ... anything that takes very long ... return oldDoc; } } } }); // check if migration is needed const needed = await messageCol.migrationNeeded(); if(needed == false) return; // start the migration messageCol.startMigration(10); // 10 is the batch-size, how many docs will run at parallel const migrationState = messageCol.getMigrationState(); // 'start' the observable migrationState.$.subscribe({ next: state => console.dir(state), error: error => console.error(error), complete: () => console.log('done') }); // the emitted states look like this: { status: 'RUNNING' // oneOf 'RUNNING' | 'DONE' | 'ERROR' count: { total: 50, // amount of documents which must be migrated handled: 0, // amount of handled docs percent: 0 // percentage [0-100] } } If you don't want to show the state to the user, you can also use .migratePromise(): const migrationPromise = messageCol.migratePromise(10); await migratePromise; ","version":"Next","tagName":"h2"},{"title":"migrationStates()","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migrationstates","content":" RxDatabase.migrationStates() returns an Observable that emits all migration states of any collection of the database. Use this when you add collections dynamically and want to show a loading-state of the migrations to the user. const allStatesObservable = myDatabase.migrationStates(); allStatesObservable.subscribe(allStates => { allStates.forEach(migrationState => { console.log( 'migration state of ' + migrationState.collection.name ); }); }); ","version":"Next","tagName":"h2"},{"title":"Migrating attachments","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migrating-attachments","content":" When you store RxAttachments together with your document, they can also be changed, added or removed while running the migration. You can do this by mutating the oldDoc._attachments property. import { createBlob } from 'rxdb'; const migrationStrategies = { 1: async function(oldDoc){ // do nothing with _attachments to keep all attachments and have them in the new collection version. return oldDoc; }, 2: async function(oldDoc){ // set _attachments to an empty object to delete all existing ones during the migration. oldDoc._attachments = {}; return oldDoc; } 3: async function(oldDoc){ // update the data field of a single attachment to change its data. oldDoc._attachments.myFile.data = await createBlob( 'my new text', oldDoc._attachments.myFile.content_type ); return oldDoc; } } ","version":"Next","tagName":"h2"},{"title":"Migration on multi-tab in browsers","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migration-on-multi-tab-in-browsers","content":" If you use RxDB in a multiInstance environment, like a browser, it will ensure that exactly one tab is running a migration of a collection. Also the migrationState.$ events are emitted between browser tabs. 
","version":"Next","tagName":"h2"},{"title":"Migration and Replication","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migration-and-replication","content":" If you use any of the RxReplication plugins, the migration will also run on the internal replication-state storage. It will migrate all assumedMasterState documents so that after the migration is done, you do not have to re-run the replication from scratch. RxDB assumes that you run the exact same migration on the servers and the clients. Notice that the replication pull-checkpoint will not be migrated. Your backend must be compatible with pull-checkpoints of older versions. ","version":"Next","tagName":"h2"},{"title":"Migration should be run on all database instances","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migration-should-be-run-on-all-database-instances","content":" If you have multiple database instances (for example, if you are running replication inside of a Worker or SharedWorker and have created a database instance inside of the worker), schema migration should be started on all database instances. All instances must know about all migration strategies and any updated schema versions. ","version":"Next","tagName":"h2"},{"title":"Storage Migration","type":0,"sectionRef":"#","url":"/migration-storage.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Storage Migration","url":"/migration-storage.html#usage","content":" Lets say you want to migrate from Dexie.js RxStorage to IndexedDB. import { migrateStorage } from 'rxdb/plugins/migration-storage'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getRxStorageDexie } from 'rxdb-old/plugins/storage-dexie'; // create the new RxDatabase const db = await createRxDatabase({ name: dbLocation, storage: getRxStorageIndexedDB(), multiInstance: false }); await migrateStorage({ database: db as any, /** * Name of the old database, * using the storage migration requires that the * new database has a different name. */ oldDatabaseName: 'myOldDatabaseName', oldStorage: getRxStorageDexie(), // RxStorage of the old database batchSize: 500, // batch size parallel: false, // <- true if it should migrate all collections in parallel. False (default) if should migrate in serial afterMigrateBatch: (input: AfterMigrateBatchHandlerInput) => { console.log('storage migration: batch processed'); } }); ","version":"Next","tagName":"h2"},{"title":"Migrate from a previous RxDB major version","type":1,"pageTitle":"Storage Migration","url":"/migration-storage.html#migrate-from-a-previous-rxdb-major-version","content":" To migrate from a previous RxDB major version, you have to install the 'old' RxDB in the package.json { "dependencies": { "rxdb-old": "npm:[email protected]", } } The you can run the migration by providing the old storage: /* ... */ import { migrateStorage } from 'rxdb/plugins/migration-storage'; import { getRxStorageDexie } from 'rxdb-old/plugins/storage-dexie'; // <- import from the old RxDB version await migrateStorage({ database: db as any, /** * Name of the old database, * using the storage migration requires that the * new database has a different name. 
*/ oldDatabaseName: 'myOldDatabaseName', oldStorage: getRxStorageDexie(), // RxStorage of the old database batchSize: 500, // batch size parallel: false, afterMigrateBatch: (input: AfterMigrateBatchHandlerInput) => { console.log('storage migration: batch processed'); } }); /* ... */ ","version":"Next","tagName":"h2"},{"title":"Disable Version Check on RxDB Premium 👑","type":1,"pageTitle":"Storage Migration","url":"/migration-storage.html#disable-version-check-on-rxdb-premium-","content":" RxDB Premium has a check in place that ensures that you do not accidentally use the wrong RxDB core and 👑 Premium version together which could break your database state. This can be a problem during migrations where you have multiple versions of RxDB in use and it will throw the error Version mismatch detected. You can disable that check by importing and running the disableVersionCheck() function from RxDB Premium. // RxDB Premium v15 or newer: import { disableVersionCheck } from 'rxdb-premium-old/plugins/shared'; disableVersionCheck(); // RxDB Premium v14: // for esm import { disableVersionCheck } from 'rxdb-premium-old/dist/es/shared/version-check.js'; disableVersionCheck(); // for cjs import { disableVersionCheck } from 'rxdb-premium-old/dist/lib/shared/version-check.js'; disableVersionCheck(); ","version":"Next","tagName":"h2"},{"title":"Node.js Database","type":0,"sectionRef":"#","url":"/nodejs-database.html","content":"","keywords":"","version":"Next"},{"title":"Persistent Database","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#persistent-database","content":" To get a "normal" database connection where the data is persisted to a file system, the RxDB real time database provides multiple storage implementations that work in Node.js. The FoundationDB storage connects to a FoundationDB cluster which itself is just a distributed key-value engine. RxDB adds the NoSQL query-engine, indexes and other features on top of it. It scales horizontally because you can always add more servers to the FoundationDB cluster to increase the capacity. Setting up an RxDB database is pretty simple. You import the FoundationDB RxStorage and tell RxDB to use that when calling createRxDatabase: import { createRxDatabase } from 'rxdb'; import { getRxStorageFoundationDB } from 'rxdb/plugins/storage-foundationdb'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageFoundationDB({ apiVersion: 620, clusterFile: '/path/to/fdb.cluster' }) }); // add a collection await db.addCollections({ users: { schema: mySchema } }); // run a query const result = await db.users.find({ selector: { name: 'foobar' } }).exec(); Another alternative storage is the SQLite RxStorage that stores the data inside of a SQLite file-based database. The SQLite storage is faster than FoundationDB and does not require you to set up a cluster or anything because SQLite directly stores and reads the data inside of the filesystem. The downside of that is that it only scales vertically. import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsNode } from 'rxdb-premium/plugins/storage-sqlite'; import sqlite3 from 'sqlite3'; const myRxDatabase = await createRxDatabase({ name: 'path/to/database/file/foobar.db', storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsNode(sqlite3) }) }); Because the SQLite RxStorage is not free and you might not want to set up a FoundationDB cluster, there is also the option to use the LokiJS RxStorage together with the filesystem adapter. 
This will store the data as plain JSON in a file and load everything into memory on startup. This works great for small prototypes but it is not recommended to be used in production. import { createRxDatabase } from 'rxdb'; const LokiFsStructuredAdapter = require('lokijs/src/loki-fs-structured-adapter.js'); import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; const myRxDatabase = await createRxDatabase({ name: 'path/to/database/file/foobar.db', storage: getRxStorageLoki({ adapter: new LokiFsStructuredAdapter() }) }); Here is a performance comparison chart of the different storages (lower is better): ","version":"Next","tagName":"h2"},{"title":"RxDB as Node.js In-Memory Database","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#rxdb-as-nodejs-in-memory-database","content":" One of the easiest ways to use RxDB in Node.js is to use the Memory RxStorage. As the name implies, it stores the data directly in-memory of the Node.js JavaScript process. This makes it really fast to read and write data but of course the data is not persisted and will be lost when the Node.js process exits. Often the in-memory option is used when RxDB is used in unit tests because it automatically cleans up everything afterwards. import { createRxDatabase } from 'rxdb'; import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMemory() }); Also notice that the default memory limit of Node.js is 4 GB (might change in newer versions) so for bigger datasets you might want to increase the limit with the max-old-space-size flag: # increase the Node.js memory limit to 8GB node --max-old-space-size=8192 index.js ","version":"Next","tagName":"h2"},{"title":"Hybrid In-memory-persistence-synced storage","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#hybrid-in-memory-persistence-synced-storage","content":" If you want to have the performance of an in-memory database but require persistence of the data, you can use the memory-mapped storage. On database creation it will load all data into the memory and on writes it will first write the data into memory and later also write it to the persistent storage in the background. In the following example the FoundationDB storage is used, but any other RxStorage can be used as persistence layer. import { createRxDatabase } from 'rxdb'; import { getRxStorageFoundationDB } from 'rxdb/plugins/storage-foundationdb'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; const db = await createRxDatabase({ name: 'exampledb', storage: getMemoryMappedRxStorage({ storage: getRxStorageFoundationDB({ apiVersion: 620, clusterFile: '/path/to/fdb.cluster' }) }) }); While this approach gives you a database with great performance and persistence, it has two major downsides: The database size is limited to the memory size. Writes can be lost when the Node.js process exits between a write to the memory state and the background persisting. ","version":"Next","tagName":"h2"},{"title":"Share database between microservices with RxDB","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#share-database-between-microservices-with-rxdb","content":" Using a local, embedded database in Node.js works great until you have to share the data with another Node.js process or another server. To share the database state with other instances, RxDB provides two different methods. 
One is replication and the other is the remote RxStorage. The replication copies over the whole data set to other instances and live-replicates all ongoing writes. This has the benefit of scaling better because each of your microservices will run queries on its own copy of the dataset. Sometimes however you might not want to store the full dataset on each microservice. Then it is better to use the remote RxStorage and connect it to the "main" database. The remote storage will run all operations on the main database and return the result to the calling database. ","version":"Next","tagName":"h2"},{"title":"Follow up on RxDB+Node.js","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#follow-up-on-rxdbnodejs","content":" Check out the RxDB Nodejs example.If you haven't done so yet, you should start learning about RxDB with the Quickstart Tutorial.I created a list of embedded JavaScript databases that will help you to pick a database if you do not want to use RxDB.Check out the MongoDB RxStorage that uses MongoDB for the database connection from your Node.js application and runs the RxDB real time database on top of it. ","version":"Next","tagName":"h2"},{"title":"Local First / Offline First","type":0,"sectionRef":"#","url":"/offline-first.html","content":"","keywords":"","version":"Next"},{"title":"UX is better without loading spinners","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#ux-is-better-without-loading-spinners","content":" In 'normal' web applications, most user interactions like fetching, saving or deleting data, correspond to a request to the backend server. This means that each of these interactions requires the user to await the unknown latency to and from a remote server while looking at a loading spinner. In offline-first apps, the operations go directly against the local storage which happens almost instantly. There is no perceptible loading time and so it is not even necessary to implement a loading spinner at all. As soon as the user clicks, the UI represents the new state as if it was already changed in the backend. ","version":"Next","tagName":"h2"},{"title":"Multi-tab usage just works","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#multi-tab-usage-just-works","content":" Many, even big websites like amazon, reddit and stack overflow do not handle multi-tab usage correctly. When a user has multiple tabs of the website open and does a login on one of these tabs, the state does not change on the other tabs. On offline first applications, there is always exactly one state of the data across all tabs. Offline first databases (like RxDB) store the data inside of IndexedDB and share the state between all tabs of the same origin. ","version":"Next","tagName":"h2"},{"title":"Latency is more important than bandwidth","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#latency-is-more-important-than-bandwidth","content":" In the past, bandwidth was often the limiting factor in determining the loading time of an application. But while bandwidth has improved over the years, latency became the limiting factor. You can always increase the bandwidth by setting up more cables or sending more Starlink satellites to space. But reducing the latency is not so easy. It is defined by the physical properties of the transfer medium, the speed of light and the distance to the server. All of these three are hard to optimize. 
Offline first applications benefit from that because sending the initial state to the client can be done much faster with more bandwidth. And once the data is there, we no longer have to care about the latency to the backend server because you can run near zero latency queries locally. ","version":"Next","tagName":"h2"},{"title":"Realtime comes for free","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#realtime-comes-for-free","content":" Most websites lie to their users. They do not lie because they display wrong data, but because they display old data that was loaded from the backend at the time the user opened the site. To overcome this, you could build a realtime website where you create a websocket that streams updates from the backend to the client. This means work. Your client needs to tell the server which page is currently opened and which updates the client is interested in. Then the server can push updates over the websocket and you can update the UI accordingly. With offline first applications, you already have a realtime replication with the backend. Most offline first databases provide some concept of changestream or data subscriptions and with RxDB you can even directly subscribe to query results or single fields of documents. This makes it easy to have an always updated UI whenever data on the backend changes. ","version":"Next","tagName":"h2"},{"title":"Scales with data size, not with the amount of user interaction","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#scales-with-data-size-not-with-the-amount-of-user-interaction","content":" On normal applications, each user interaction can result in multiple requests to the backend server which increases its load. The more users interact with your application, the more backend resources you have to provide. Offline first applications do not scale up with the amount of user actions but instead they scale up with the amount of data. Once that data is transferred to the client, the user can do as many interactions with it as required without connecting to the server. ","version":"Next","tagName":"h2"},{"title":"Modern apps have longer runtimes","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#modern-apps-have-longer-runtimes","content":" In the past you used a website only for a short time. You opened it, performed some action and then closed it again. This made the first load time the important metric when evaluating page speed. Today web applications have changed and with them the way we use them. Single page applications are opened once and then used over the whole day. Chat apps, email clients, PWAs and hybrid apps. All of these were made to have long runtimes. This makes the time for user interactions more important than the initial loading time. Offline first applications benefit from that because there is often no loading time on user actions while loading the initial state to the client is not that relevant. ","version":"Next","tagName":"h2"},{"title":"You might not need REST","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#you-might-not-need-rest","content":" On normal web applications, you make different requests for each kind of data interaction. For that you have to define a Swagger route, implement a route handler on the backend and create some client code to send or fetch data from that route. The more complex your application becomes, the more REST routes you have to maintain and implement. 
With offline first apps, you have a way to hack around all this cumbersome work. You just replicate the whole state from the server to the client. The replication does not only run once, you have a realtime replication and all changes at one side are automatically there on the other side. On the client, you can access every piece of state with a simple database query. While this of course only works for amounts of data that the client can load and store, it makes implementing prototypes and simple apps much faster. ","version":"Next","tagName":"h2"},{"title":"You might not need Redux","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#you-might-not-need-redux","content":" Data is hard, especially for UI applications where many things can happen at the same time. The user is clicking around. Stuff is loaded from the server. All of these things interact with the global state of the app. To manage this complexity it is common to use state management libraries like Redux or MobX. With them, you write all this lasagna code to wrap the mutation of data and to make the UI react to all these changes. On offline first apps, your global state is already there in a single place stored inside of the local database. You do not have to care whether this data came from the UI, another tab, the backend or another device of the same user. You can just make writes to the database and fetch data out of it. ","version":"Next","tagName":"h2"},{"title":"Follow up","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#follow-up","content":" Learn how to store and query data with RxDB in the RxDB QuickstartDownsides of Offline First ","version":"Next","tagName":"h2"},{"title":"Object-Data-Relational-Mapping","type":0,"sectionRef":"#","url":"/orm.html","content":"","keywords":"","version":"Next"},{"title":"statics","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#statics","content":" Statics are defined collection-wide and can be called on the collection. ","version":"Next","tagName":"h2"},{"title":"Add statics to a collection","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#add-statics-to-a-collection","content":" To add static functions, pass a statics-object when you create your collection. The object contains functions, mapped to their function-names. const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, statics: { scream: function(){ return 'AAAH!!'; } } } }); console.log(heroes.scream()); // 'AAAH!!' You can also use the this-keyword which resolves to the collection: const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, statics: { whoAmI: function(){ return this.name; } } } }); console.log(heroes.whoAmI()); // 'heroes' ","version":"Next","tagName":"h3"},{"title":"instance-methods","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#instance-methods","content":" Instance-methods are defined collection-wide. They can be called on the RxDocuments of the collection. ","version":"Next","tagName":"h2"},{"title":"Add instance-methods to a collection","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#add-instance-methods-to-a-collection","content":" const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, methods: { scream: function(){ return 'AAAH!!'; } } } }); const doc = await heroes.findOne().exec(); console.log(doc.scream()); // 'AAAH!!' 
Here you can also use the this-keyword: const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, methods: { whoAmI: function(){ return 'I am ' + this.name + '!!'; } } } }); await heroes.insert({ name: 'Skeletor' }); const doc = await heroes.findOne().exec(); console.log(doc.whoAmI()); // 'I am Skeletor!!' ","version":"Next","tagName":"h3"},{"title":"attachment-methods","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#attachment-methods","content":" Attachment-methods are defined collection-wide. They can be called on the RxAttachments of the RxDocuments of the collection. const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, attachments: { scream: function(){ return 'AAAH!!'; } } } }); const doc = await heroes.findOne().exec(); const attachment = await doc.putAttachment({ id: 'cat.txt', data: 'meow I am a kitty', type: 'text/plain' }); console.log(attachment.scream()); // 'AAAH!!' ","version":"Next","tagName":"h2"},{"title":"RxDB Documentation","type":0,"sectionRef":"#","url":"/overview.html","content":"","keywords":"","version":"Next"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"Overview Quickstart Install Dev-mode Typescript ","version":"Next","tagName":"h2"},{"title":"Core Entities","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"RxDatabase RxSchema RxCollection RxDocument RxQuery ","version":"Next","tagName":"h2"},{"title":"💾 Storages","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"RxStorage Overview Dexie.js IndexedDB 👑 OPFS 👑 Memory SQLite 👑 Filesystem Node 👑 MongoDB DenoKV FoundationDB ","version":"Next","tagName":"h2"},{"title":"Storage Wrappers","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"Schema Validation Encryption Key Compression Logger 👑 Remote RxStorage Worker RxStorage 👑 SharedWorker RxStorage 👑 Memory Mapped RxStorage 👑 Sharding 👑 Localstorage Meta Optimizer 👑 Electron ","version":"Next","tagName":"h2"},{"title":"🔄 Replication","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"Replication HTTP Replication RxServer Replication GraphQL Replication WebSocket Replication CouchDB Replication WebRTC P2P Replication Firestore Replication NATS Replication ","version":"Next","tagName":"h2"},{"title":"Server","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"RxServer RxServer Scaling ","version":"Next","tagName":"h2"},{"title":"How RxDB works","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"Transactions Conflicts Revisions Query Cache Creating Plugins Errors ","version":"Next","tagName":"h2"},{"title":"Advanced Features","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"Migration Attachments RxPipelines Custom Reactivity RxState Local Documents Cleanup Backup Leader Election Middleware CRDT Population ORM Fulltext Search 👑 Vector Database Query Optimizer 👑 Third Party Plugins ","version":"Next","tagName":"h2"},{"title":"Performance","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"RxStorage Performance NoSQL Performance Tips Slow IndexedDB LocalStorage vs. IndexedDB vs. Cookies vs. OPFS vs. 
WASM-SQLite ","version":"Next","tagName":"h2"},{"title":"Releases","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"16.0.0 15.0.0 14.0.0 13.0.0 12.0.0 11.0.0 10.0.0 9.0.0 8.0.0 ","version":"Next","tagName":"h2"},{"title":"Contact","type":1,"pageTitle":"RxDB Documentation","url":"/overview.html##","content":"Consulting Discord LinkedIn ","version":"Next","tagName":"h2"},{"title":"Performance tips for RxDB and other NoSQL databases","type":0,"sectionRef":"#","url":"/nosql-performance-tips.html","content":"","keywords":"","version":"Next"},{"title":"Use bulk operations","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#use-bulk-operations","content":" When you run write operations on multiple documents, make sure you use bulk operations instead of single document operations. // wrong ❌ for(const docData of dataAr){ await myCollection.insert(docData); } // right ✔️ await myCollection.bulkInsert(dataAr); ","version":"Next","tagName":"h2"},{"title":"Help the query planner by adding operators that better restrict the index range","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#help-the-query-planner-by-adding-operators-that-better-restrict-the-index-range","content":" Often on complex queries, RxDB (and other databases) does not pick the optimal index range when querying a result set. You can add additional restrictive operators to ensure the query runs over a smaller index space and has better performance. Let's see some examples for different query types. /** * Adding a restrictive operator for an $or query * so that it better limits the index space for the time-field. */ const orQuery = { selector: { $or: [ { time: { $gt: 1234 }, }, { time: { $eq: 1234 }, user: { $gt: 'foobar' } }, ], time: { $gte: 1234 } // <- add restrictive operator } } /** * Adding a restrictive operator for a $regex query * so that it better limits the index space for the user-field. * We know that all matching fields start with 'foo' so we can * tell the query to use that as lower constraint for the index. */ const regexQuery = { selector: { user: { $regex: '^foo(.*)0-9$', // a complex regex with a ^ in the beginning $gte: 'foo' // <- add restrictive operator } } } /** * Adding a restrictive operator for a query on an enum field * so that it better limits the index space for the status-field. */ const enumQuery = { selector: { /** * Here let's assume our status field has the enum type ['idle', 'in-progress', 'done'] * so our restrictive operator can exclude all documents with 'done' as status. */ status: { $in: [ 'idle', 'in-progress' ], $gt: 'done' // <- add restrictive operator on status } } } ","version":"Next","tagName":"h2"},{"title":"Set a specific index","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#set-a-specific-index","content":" Sometimes the query planner of the database itself has no chance of picking the best index out of the possible given indexes. For queries where performance is very important, you might want to explicitly specify which index must be used. const myQuery = myCollection.find({ selector: { /* ... 
*/ }, // explicitly specify index index: [ 'fieldA', 'fieldB' ] }); ","version":"Next","tagName":"h2"},{"title":"Try different ordering of index fields","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#try-different-ordering-of-index-fields","content":" The order of the fields in a compound index is very important for performance. When optimizing index usage, you should try out different orders on the index fields and measure which runs faster. For that it is very important to run tests on real-world data where the distribution of the data is the same as in production. For example when there is a query on a user collection with an age and a gender field, it depends on the distribution of data whether the index ['gender', 'age'] performs better than ['age', 'gender']: const query = myCollection .findOne({ selector: { age: { $gt: 18 }, gender: { $eq: 'm' } }, /** * Because the developer knows that 50% of the documents are 'male', * but only 20% are below age 18, * it makes sense to enforce using the ['gender', 'age'] index to improve performance. * This could not be known by the query planner which might have chosen ['age', 'gender'] instead. */ index: ['gender', 'age'] }); Notice that RxDB has the Query Optimizer Plugin that can be used to automatically find the best indexes. ","version":"Next","tagName":"h2"},{"title":"Make a Query \"hot\" to reduce load","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#make-a-query-hot-to-reduce-load","content":" If you have a query whose up-to-date result set is needed more than once, you might want to make the query "hot" by permanently subscribing to it. This ensures that the query result is kept up to date by RxDB and the EventReduce algorithm at any time so that at the moment you need the current results, it has them already. For example when you use RxDB in Node.js for a webserver, you should use an outer "hot" query instead of running the same query again on every request to a route. // wrong ❌ app.get('/list', async (req, res) => { const result = await myCollection.find({/* ... */}).exec(); res.send(JSON.stringify(result)); }); // right ✔️ const query = myCollection.find({/* ... */}); query.$.subscribe(); // <- make it hot app.get('/list', async (req, res) => { const result = await query.exec(); res.send(JSON.stringify(result)); }); ","version":"Next","tagName":"h2"},{"title":"Store parts of your document data as attachment","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#store-parts-of-your-document-data-as-attachment","content":" For in-app databases like RxDB, it does not make sense to partially parse the JSON of a document. Instead, the whole document JSON is always parsed and handled. This has better performance because JSON.parse() in JavaScript directly calls a C++ binding which can parse really fast compared to a partial parsing in JavaScript itself. Also by always having the full document, RxDB can de-duplicate memory caches of documents across multiple queries. The downside is that very big documents with a complex structure can increase query time significantly. Document fields with complex structures that are mostly not in use can be moved into an attachment. This leads RxDB to not fetch the attachment data each time the document is loaded from disk; instead it is only fetched when explicitly asked for. const myDocument = await myCollection.insert({/* ... 
*/}); const attachment = await myDocument.putAttachment( { id: 'otherStuff.json', data: createBlob(JSON.stringify({/* ... */}), 'application/json'), type: 'application/json' } ); ","version":"Next","tagName":"h2"},{"title":"Process queries in a worker process","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#process-queries-in-a-worker-process","content":" Moving database storage into a WebWorker can significantly improve performance in web applications that use RxDB or similar NoSQL databases. When database operations are executed in the main JavaScript thread, they can block or slow down the User Interface, especially during heavy or complex data operations. By offloading these operations to a WebWorker, you effectively separate the data processing workload from the UI thread. This means the main thread remains free to handle user interactions and render updates without delay, leading to a smoother and more responsive user experience. Additionally, WebWorkers allow for parallel data processing, which can expedite tasks like querying and indexing. This approach not only enhances UI responsiveness but also optimizes overall application performance by leveraging the multi-threading capabilities of modern browsers. With RxDB you can use the Worker and SharedWorker plugin to move the query processing away from the main thread. ","version":"Next","tagName":"h2"},{"title":"Use less plugins and hooks","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#use-less-plugins-and-hooks","content":" Utilizing fewer hooks and plugins in RxDB or similar NoSQL database systems can lead to markedly better performance. Each additional hook or plugin introduces extra layers of processing and potential overhead, which can cumulatively slow down database operations. These extensions often execute additional code or enforce extra checks with each operation, such as insertions, updates, or deletions. While they can provide valuable functionalities or custom behaviors, their overuse can inadvertently increase the complexity and execution time of basic database operations. By minimizing their use and only employing essential hooks and plugins, the system can operate more efficiently. This streamlined approach reduces the computational burden on each transaction, leading to faster response times and a more efficient overall data handling process, especially critical in high-load or real-time applications where performance is paramount. ","version":"Next","tagName":"h2"},{"title":"Creating Plugins","type":0,"sectionRef":"#","url":"/plugins.html","content":"","keywords":"","version":"Next"},{"title":"rxdb","type":1,"pageTitle":"Creating Plugins","url":"/plugins.html#rxdb","content":" The rxdb-property signals that this plugin is an rxdb-plugin. The value should always be true. ","version":"Next","tagName":"h2"},{"title":"prototypes","type":1,"pageTitle":"Creating Plugins","url":"/plugins.html#prototypes","content":" The prototypes-property contains a function for each of RxDB's internal prototype that you want to manipulate. Each function gets the prototype-object of the corresponding class as parameter and then can modify it. You can see a list of all available prototypes here ","version":"Next","tagName":"h2"},{"title":"overwritable","type":1,"pageTitle":"Creating Plugins","url":"/plugins.html#overwritable","content":" Some of RxDB's functions are not inside of a class-prototype but are static. 
You can set and overwrite them with the overwritable-object. You can see a list of all overwritables here. hooks Sometimes you don't want to overwrite an existing RxDB-method, but extend it. You can do this by adding hooks which will be called each time the code jumps into the hook's corresponding call. You can find a list of all hooks here. options RxDatabase and RxCollection have an additional options-parameter, which can be filled with any data required by the plugin. const collections = await myDatabase.addCollections({ foo: { schema: mySchema, options: { // anything can be passed into the options foo: ()=>'bar' } } }) // Afterwards you can use these options in your plugin. collections.foo.options.foo(); // 'bar' ","version":"Next","tagName":"h2"},{"title":"QueryCache","type":0,"sectionRef":"#","url":"/query-cache.html","content":"","keywords":"","version":"Next"},{"title":"Cache Replacement Policy","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#cache-replacement-policy","content":" To not let RxDB fill up all the memory, a cache replacement policy is defined that clears up the cached queries. This is implemented as a function which runs regularly, depending on when queries are created and the database is idle. The default policy should be good enough for most use cases but defining custom ones can also make sense. ","version":"Next","tagName":"h2"},{"title":"The default policy","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#the-default-policy","content":" The default policy starts cleaning up queries depending on how many queries are in the cache and how much document data they contain. It will never uncache queries that have subscribers to their results. It tries to always have fewer than 100 queries without subscriptions in the cache. It prefers to uncache queries that have never been executed and are older than 30 seconds. It prefers to uncache queries that have not been used for a longer time. ","version":"Next","tagName":"h2"},{"title":"Other references to queries","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#other-references-to-queries","content":" With JavaScript, it is not possible to count references to variables. Therefore it might happen that an uncached RxQuery is still referenced by the user's code and used to get results. This should never be a problem; uncached queries still work. Creating the same query again, however, will result in having two RxQuery instances instead of one. ","version":"Next","tagName":"h2"},{"title":"Using a custom policy","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#using-a-custom-policy","content":" A cache replacement policy is a normal JavaScript function according to the type RxCacheReplacementPolicy. It gets the RxCollection as first parameter and the QueryCache as second. Then it iterates over the cached RxQuery instances and uncaches the desired ones with uncacheRxQuery(rxQuery). When you create your custom policy, you should have a look at the default. To apply a custom policy to an RxCollection, add the function as attribute cacheReplacementPolicy. const collection = await myDatabase.addCollections({ humans: { schema: mySchema, cacheReplacementPolicy: function(){ /* ... 
*/ } } }); ","version":"Next","tagName":"h2"},{"title":"Query Optimizer","type":0,"sectionRef":"#","url":"/query-optimizer.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Query Optimizer","url":"/query-optimizer.html#usage","content":" import { findBestIndex } from 'rxdb-premium/plugins/query-optimizer'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/indexeddb'; const bestIndexes = await findBestIndex({ schema: myRxJsonSchema, /** * In this example we use the IndexedDB RxStorage, * but any other storage can be used for testing. */ storage: getRxStorageIndexedDB(), /** * Multiple queries can be optimized at the same time * which decreases the overall runtime. */ queries: { /** * Queries can be mapped by a query id, * here we use myFirstQuery as query id. */ myFirstQuery: { selector: { age: { $gt: 10 } }, }, mySecondQuery: { selector: { age: { $gt: 10 }, lastName: { $eq: 'Nakamoto' } }, } }, testData: [/** data for the documents. **/] }); ","version":"Next","tagName":"h2"},{"title":"Important details","type":1,"pageTitle":"Query Optimizer","url":"/query-optimizer.html#important-details","content":" This is a build time tool. You should use it to find the best indexes for your queries during build time. Then you store these results and your application can use the best indexes during run time. It makes no sense to run the optimization with a different RxStorage (+settings) than what you use in production. The result of the query optimizer is heavily dependent on the RxStorage and JavaScript runtime. For example it makes no sense to run the optimization in Node.js and then use the optimized indexes in the browser. It is very important that you use production-like testData. Finding the best index heavily depends on data distribution and amount of stored/queried documents. For example if you store and query users with an age field, it makes no sense to just use a random number for the age because in production the age of your users is not equally distributed. The higher you set runs, the more test cycles will be performed and the more significant will be the time measurements, which leads to a better index selection. ","version":"Next","tagName":"h2"},{"title":"Population","type":0,"sectionRef":"#","url":"/population.html","content":"","keywords":"","version":"Next"},{"title":"Schema with ref","type":1,"pageTitle":"Population","url":"/population.html#schema-with-ref","content":" The ref-keyword in properties describes to which collection the field-value belongs (has a relationship). export const refHuman = { title: 'human related to other human', version: 0, primaryKey: 'name', type: 'object', properties: { name: { type: 'string' }, bestFriend: { ref: 'human', // refers to collection human type: 'string' // ref-values must always be string or ['string','null'] (primary of foreign RxDocument) } } }; You can also have a one-to-many reference by using a string-array. export const schemaWithOneToManyReference = { version: 0, primaryKey: 'name', type: 'object', properties: { name: { type: 'string' }, friends: { type: 'array', ref: 'human', items: { type: 'string' } } } }; ","version":"Next","tagName":"h2"},{"title":"populate()","type":1,"pageTitle":"Population","url":"/population.html#populate","content":" ","version":"Next","tagName":"h2"},{"title":"via method","type":1,"pageTitle":"Population","url":"/population.html#via-method","content":" To get the referred RxDocument, you can use the populate()-method. 
It takes the field-path as attribute and returns a Promise which resolves to the foreign document or null if not found. await humansCollection.insert({ name: 'Alice', bestFriend: 'Carol' }); await humansCollection.insert({ name: 'Bob', bestFriend: 'Alice' }); const doc = await humansCollection.findOne('Bob').exec(); const bestFriend = await doc.populate('bestFriend'); console.dir(bestFriend); //> RxDocument[Alice] ","version":"Next","tagName":"h3"},{"title":"via getter","type":1,"pageTitle":"Population","url":"/population.html#via-getter","content":" You can also get the populated RxDocument with the direct getter. Therefore you have to add an underscore suffix _ to the fieldname. This also works on nested values. await humansCollection.insert({ name: 'Alice', bestFriend: 'Carol' }); await humansCollection.insert({ name: 'Bob', bestFriend: 'Alice' }); const doc = await humansCollection.findOne('Bob').exec(); const bestFriend = await doc.bestFriend_; // notice the underscore_ console.dir(bestFriend); //> RxDocument[Alice] ","version":"Next","tagName":"h3"},{"title":"Example with nested reference","type":1,"pageTitle":"Population","url":"/population.html#example-with-nested-reference","content":" const myCollection = await myDatabase.addCollections({ human: { schema: { version: 0, type: 'object', properties: { name: { type: 'string' }, family: { type: 'object', properties: { mother: { type: 'string', ref: 'human' } } } } } } }); const mother = await myDocument.family.mother_; console.dir(mother); //> RxDocument ","version":"Next","tagName":"h2"},{"title":"Example with array","type":1,"pageTitle":"Population","url":"/population.html#example-with-array","content":" const myCollection = await myDatabase.addCollections({ human: { schema: { version: 0, type: 'object', properties: { name: { type: 'string' }, friends: { type: 'array', ref: 'human', items: { type: 'string' } } } } } }); //[insert other humans here] await myCollection.insert({ name: 'Alice', friends: [ 'Bob', 'Carol', 'Dave' ] }); const doc = await humansCollection.findOne('Alice').exec(); const friends = await doc.friends_; console.dir(friends); //> Array.<RxDocument> ","version":"Next","tagName":"h2"},{"title":"RxDB Quickstart","type":0,"sectionRef":"#","url":"/quickstart.html","content":"","keywords":"","version":"Next"},{"title":"Next steps","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#next-steps","content":" You are now ready to dive deeper into RxDB. Start reading the full documentation here.There is a full implementation of the quickstart guide so you can clone that repository and play with the code.For frameworks and runtimes like Angular, React Native and others, check out the list of example implementations.Also please continue reading the documentation, join the community on our Discord chat, and star the GitHub repo.If you are using RxDB in a production environment and are able to support its continued development, please take a look at the 👑 Premium package which includes additional plugins and utilities. ","version":"Next","tagName":"h2"},{"title":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","type":0,"sectionRef":"#","url":"/reactivity.html","content":"","keywords":"","version":"Next"},{"title":"Adding a custom reactivity factory (in angular projects)","type":1,"pageTitle":"Signals & Co. 
- Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#adding-a-custom-reactivity-factory-in-angular-projects","content":" If you have an angular project, to get custom reactivity objects out of RxDB, you have to pass an RxReactivityFactory during database creation. The RxReactivityFactory has the fromObservable() method that creates your custom reactivity object based on an observable and an initial value. For example to use signals in angular, you can use the angular toSignal function: import { RxReactivityFactory } from 'rxdb/plugins/core'; import { Injector, Signal, untracked } from '@angular/core'; import { toSignal } from '@angular/core/rxjs-interop'; export function createReactivityFactory(injector: Injector): RxReactivityFactory<Signal<any>> { return { fromObservable(observable$, initialValue: any) { return untracked(() => toSignal(observable$, { initialValue, injector, rejectErrors: true }) ); } }; } Then you can pass this factory when you create the RxDatabase: import { createRxDatabase } from 'rxdb/plugins/core'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: createReactivityFactory(inject(Injector)) }); An example of how signals are used in angular with RxDB can be found at the RxDB Angular Example ","version":"Next","tagName":"h2"},{"title":"Adding reactivity for other Frameworks","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#adding-reactivity-for-other-frameworks","content":" When adding custom reactivity for other JavaScript frameworks or libraries, make sure to correctly unsubscribe whenever you call observable.subscribe() in the fromObservable() method. There are also some 👑 Premium Plugins that can be used with other (non-angular) frameworks: ","version":"Next","tagName":"h2"},{"title":"Vue Shallow Refs","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#vue-shallow-refs","content":" // npm install vue --save import { VueRxReactivityFactory } from 'rxdb-premium/plugins/reactivity-vue'; import { createRxDatabase } from 'rxdb/plugins/core'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: VueRxReactivityFactory }); ","version":"Next","tagName":"h3"},{"title":"Preact Signals","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#preact-signals","content":" // npm install @preact/signals-core --save import { PreactSignalsRxReactivityFactory } from 'rxdb-premium/plugins/reactivity-preact-signals'; import { createRxDatabase } from 'rxdb/plugins/core'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: PreactSignalsRxReactivityFactory }); ","version":"Next","tagName":"h3"},{"title":"Accessing custom reactivity objects","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#accessing-custom-reactivity-objects","content":" All observable data in RxDB is marked by the single dollar sign $ like RxCollection.$ for events or RxDocument.myField$ to get the observable for a document field. To make custom reactivity objects distinguishable, they are marked with double-dollar signs $$ instead. 
Here are some examples of how to get custom reactivity objects from RxDB specific instances: // RxDocument const signal = myRxDocument.get$$('foobar'); // get signal that represents the document field 'foobar' const signal = myRxDocument.foobar$$; // same as above const signal = myRxDocument.$$; // get signal that represents whole document over time const signal = myRxDocument.deleted$$; // get signal that represents the deleted state of the document // RxQuery const signal = collection.find().$$; // get signal that represents the query result set over time const signal = collection.findOne().$$; // get signal that represents the query result set over time // RxLocalDocument const signal = myRxLocalDocument.$$; // get signal that represents the whole local document state const signal = myRxLocalDocument.get$$('foobar'); // get signal that represents the foobar field ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#limitations","content":" TypeScript typings are not fully implemented, make a PR if something is missing or not working for you.Currently not all observable things in RxDB are implemented to work with custom reactivity. Please make a PR if you have the need for any missing one. ","version":"Next","tagName":"h2"},{"title":"React Native Database","type":0,"sectionRef":"#","url":"/react-native-database.html","content":"","keywords":"","version":"Next"},{"title":"Database Solutions for React-Native","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#database-solutions-for-react-native","content":" There are multiple database solutions that can be used with React Native. While I would recommend using RxDB for most use cases, it is still helpful to learn about other alternatives. ","version":"Next","tagName":"h2"},{"title":"AsyncStorage","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#asyncstorage","content":" AsyncStorage is a key->value storage solution that works similarly to the browser's localStorage API. The big difference is that access to the AsyncStorage is not a blocking operation but instead everything is Promise based. This is a big benefit because long running writes and reads will not block your JavaScript process which would cause a laggy user interface. /** * Because it is Promise-based, * you have to 'await' the call to getItem() */ await AsyncStorage.setItem('myKey', 'myValue'); const value = await AsyncStorage.getItem('myKey'); AsyncStorage was originally included in React Native itself. But it was deprecated by the React Native Team which recommends using a community based package instead. There is a community fork of AsyncStorage that is actively maintained and open source. AsyncStorage is fine when only a small amount of data needs to be stored and when no query capabilities besides the key-access are required. Complex queries or features are not supported which makes AsyncStorage not suitable for anything more than storing simple user settings data. ","version":"Next","tagName":"h3"},{"title":"SQLite","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#sqlite","content":" SQLite is a SQL based relational database written in C that was crafted to be embedded inside of applications. Operations are written in the SQL query language and SQLite generally follows the PostgreSQL syntax. 
To use SQLite in React Native, you first have to include the SQLite library itself as a plugin. There are different projects out there that can be used, but I would recommend using the react-native-quick-sqlite project. First you have to install the library into your React Native project via npm install react-native-quick-sqlite. In your code you can then import the library and create a database connection: import {open} from 'react-native-quick-sqlite'; const db = open('myDb.sqlite'); Notice that SQLite is a file-based database where all data is stored directly in the filesystem of the OS. Therefore to create a connection, you have to provide a filename. With the open connection you can then run SQL queries: let { rows } = db.execute('SELECT somevalue FROM sometable'); If that does not work for you, you might want to try the react-native-sqlite-storage project instead which is also very popular. Downsides of Using SQLite in UI-Based Apps While SQLite is reliable and well-tested, it has several shortcomings when it comes to using it directly in UI-based React Native applications: Lack of Observability: Out of the box, SQLite does not offer a straightforward way to observe queries or document fields. This means that implementing real-time data updates in your UI requires additional layers or libraries.Bridging Overhead: Each query or data operation must go through the React Native bridge to access the native SQLite module. This can introduce performance bottlenecks or responsiveness issues, especially for large or complex data operations.No Built-In Replication: SQLite on its own is not designed for syncing data across multiple devices or with a backend. If your app requires multi-device data syncing or offline-first features, additional tools or a custom solution are necessary.Version Management: Handling schema changes often requires a custom migration process. If your data structure evolves frequently, managing these migrations can be cumbersome. Overall, SQLite can be a good solution for straightforward, local-only data storage where complex real-time features or synchronization are not needed. For more advanced requirements, like reactive UI updates or multi-client data replication, you'll likely want a more feature-rich solution. ","version":"Next","tagName":"h3"},{"title":"PouchDB","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#pouchdb","content":" PouchDB is a JavaScript NoSQL database that follows the API of the Apache CouchDB server database. The core feature of PouchDB is the ability to do a two-way replication with any CouchDB compliant endpoint. While PouchDB is pretty mature, it has some drawbacks that block it from being used in a client-side React Native application. For example it has to store all document states over time which is required to replicate with CouchDB. Also it is not easily possible to fully purge documents and so it will fill up disk space over time. A big problem is also that PouchDB is not really maintained and major bugs like wrong query results are not fixed anymore. The performance of PouchDB is a general bottleneck which is caused by how it has to store and fetch documents while being compliant with CouchDB. The only real reason to use PouchDB in React Native is when you want to replicate with a CouchDB or Couchbase server. 
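If you go that route, a minimal sketch of such a two-way replication (assuming a CouchDB endpoint at http://localhost:5984/mydb; the URL is only an example) could look like this: const sync = db.sync('http://localhost:5984/mydb', { live: true, retry: true }); sync.on('error', err => console.error(err)); // keep replicating and resume after connectivity losses The live and retry options keep the replication running continuously instead of doing a single one-time sync. 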
Because PouchDB is based on an adapter system for storage, there are two options to use it with React Native: Either use the pouchdb-adapter-react-native-sqlite adapter or the pouchdb-adapter-asyncstorage adapter. Because the asyncstorage adapter is no longer maintained, it is recommended to use the native-sqlite adapter: First you have to install the adapter and other dependencies via npm install pouchdb-adapter-react-native-sqlite react-native-quick-sqlite react-native-quick-websql. Then you have to craft a custom PouchDB class that combines these plugins: import 'react-native-get-random-values'; import PouchDB from 'pouchdb-core'; import HttpPouch from 'pouchdb-adapter-http'; import replication from 'pouchdb-replication'; import mapreduce from 'pouchdb-mapreduce'; import SQLiteAdapterFactory from 'pouchdb-adapter-react-native-sqlite'; import WebSQLite from 'react-native-quick-websql'; const SQLiteAdapter = SQLiteAdapterFactory(WebSQLite); export default PouchDB.plugin(HttpPouch) .plugin(replication) .plugin(mapreduce) .plugin(SQLiteAdapter); This can then be used to create a PouchDB database instance which can store and query documents: const db = new PouchDB('mydb.db', { adapter: 'react-native-sqlite' }); ","version":"Next","tagName":"h3"},{"title":"RxDB","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#rxdb","content":" RxDB is a local-first NoSQL database for JavaScript applications. It is reactive which means that you can not only query the current state, but subscribe to all state changes like the result of a query or even a single field of a document. This makes it easy to develop the kind of UI-based realtime applications you need in React Native. Key benefits of RxDB include: Observability and Real-Time Queries: Automatic UI updates when underlying data changes, making it much simpler to build responsive, reactive apps.Offline-First and Sync: Built-in support for syncing with CouchDB, or via GraphQL replication, allowing your app to work offline and seamlessly sync when online. It is easy to make the RxDB replication compatible with anything that supports HTTP.Encryption and Data Security: RxDB supports field-level encryption and a robust plugin ecosystem for compression and attachments.Data Modeling and Ease of Use: Offers a schema-based approach that helps catch invalid data early and ensures consistency.Performance: Optimized for storing and querying large amounts of data on mobile devices. There are multiple ways to use RxDB in React Native: Use the memory RxStorage that stores the data inside of the JavaScript memory without persistence. Use the SQLite RxStorage with the react-native-quick-sqlite plugin. It is recommended to use the SQLite RxStorage because it has the best performance and is the easiest to set up. However it is part of the 👑 Premium Plugins which must be purchased, so to try out RxDB with React Native, you might want to use the memory storage first. Later you can replace it with the SQLite storage by just changing two lines of configuration. First you have to install all dependencies via npm install rxdb rxjs rxdb-premium react-native-quick-sqlite. 
Then you can assemble the RxStorage and create a database with it: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsQuickSQLite } from 'rxdb-premium/plugins/storage-sqlite'; import { open } from 'react-native-quick-sqlite'; // create database const myRxDatabase = await createRxDatabase({ // Instead of a simple name, // you can use a folder path to determine the database location name: 'exampledb', multiInstance: false, // <- Set this to false when using RxDB in React Native storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsQuickSQLite(open) }) }); // create collections const collections = await myRxDatabase.addCollections({ humans: { /* ... */ } }); // insert document await collections.humans.insert({id: 'foo', name: 'bar'}); // run a query const result = await collections.humans.find({ selector: { name: 'bar' } }).exec(); // observe a query collections.humans.find({ selector: { name: 'bar' } }).$.subscribe(result => {/* ... */}); Using the SQLite RxStorage is pretty fast, which is shown in the performance comparison. To learn more about using RxDB with React Native, you might want to check out this example project. Also RxDB provides many other features like encryption or compression. You can even store binary data as attachments or use RxDB as an ORM in React Native. ","version":"Next","tagName":"h3"},{"title":"Realm","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#realm","content":" Realm is another database solution that is particularly popular in the mobile world. Originally an independent project, Realm is now owned by MongoDB and has seen deeper integration with MongoDB services over time. Pros: Fast, object-based database approach with an easy data model.Historically known for good performance and a simple, user-friendly API. Downsides: Forced MongoDB Cloud Usage: Realm Sync is now tightly coupled with MongoDB Realm Cloud. If you want to use their full sync or advanced features, you are essentially locked into MongoDB's ecosystem, which can be a downside if you need on-premise or custom hosting.Missing or Limited Features: While Realm covers many basic needs, some developers find that advanced queries or certain offline-first features are not as robust or flexible as other solutions.Vendor Lock-In: If you rely heavily on Realm Sync, migrating away from MongoDB's cloud can be difficult because the sync logic and data format are tightly integrated.Community Concerns: Since the MongoDB acquisition, some worry about Realm's open-source future and whether large changes or new features will remain community-friendly. Although Realm can be a good solution when used purely as a local database, if you plan on syncing data across clients or want to avoid cloud vendor lock-in, you should consider carefully how MongoDB's ownership might affect your long-term plans. ","version":"Next","tagName":"h3"},{"title":"Firebase / Firestore","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#firebase--firestore","content":" Firestore is a cloud-based database technology that stores data on client devices and replicates it with the Firebase cloud service that is run by Google. It has many features like observability and authentication. The main lacking feature is the incomplete offline-first support: clients cannot start the application while being offline because the authentication does not work. After they are authenticated, being offline is no longer a problem. 
Also using firestore creates a vendor lock-in because it is not possible to replicate with a custom self hosted backend. To get started with Firestore in React Native, it is recommended to use the React Native Firebase open-source project. ","version":"Next","tagName":"h3"},{"title":"Follow up","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#follow-up","content":" A good way to learn using RxDB database with React Native is to check out the RxDB React Native example and use that as a tutorial.If you haven't done so yet, you should start learning about RxDB with the Quickstart Tutorial.There is a followup list of other client side database alternatives that might work with React Native. ","version":"Next","tagName":"h2"},{"title":"Replication with Firestore from Firebase","type":0,"sectionRef":"#","url":"/replication-firestore.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#usage","content":" First initialize your Firestore database like you would do without RxDB. import * as firebase from 'firebase/app'; import { getFirestore, collection } from 'firebase/firestore'; const projectId = 'my-project-id'; const app = firebase.initializeApp({ projectId, databaseURL: 'http://localhost:8080?ns=' + projectId, /* ... */ }); const firestoreDatabase = getFirestore(app); const firestoreCollection = collection(firestoreDatabase, 'my-collection-name'); Then you can start the replication by calling replicateFirestore() on your RxCollection. const replicationState = replicateFirestore( { replicationIdentifier: `https://firestore.googleapis.com/${projectId}`, collection: myRxCollection, firestore: { projectId, database: firestoreDatabase, collection: firestoreCollection }, pull: {}, push: {}, /** * Either do a live or a one-time replication * [default=true] */ live: true, /** * (optional) likely you should just use the default. * * In firestore it is not possible to read out * the internally used write timestamp of a document. * Even if we could read it out, it is not indexed which * is required for fetch 'changes-since-x'. * So instead we have to rely on a custom user defined field * that contains the server time which is set by firestore via serverTimestamp() * Notice that the serverTimestampField MUST NOT be part of the collections RxJsonSchema! * [default='serverTimestamp'] */ serverTimestampField: 'serverTimestamp' } ); To observe and cancel the replication, you can use any other methods from the ReplicationState like error$, cancel() and awaitInitialReplication(). ","version":"Next","tagName":"h2"},{"title":"Handling deletes","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#handling-deletes","content":" RxDB requires you to never fully delete documents. This is needed to be able to replicate the deletion state of a document to other instances. The firestore replication will set a boolean _deleted field to all documents to indicate the deletion state. You can change this by setting a different deletedField in the sync options. ","version":"Next","tagName":"h2"},{"title":"Do not set enableIndexedDbPersistence()","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#do-not-set-enableindexeddbpersistence","content":" Firestore has the enableIndexedDbPersistence() feature which caches document states locally to IndexedDB. 
This is not needed when you replicate your Firestore with RxDB because RxDB itself will store the data locally already. ","version":"Next","tagName":"h2"},{"title":"Using the replication with an already existing Firestore Database State","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#using-the-replication-with-an-already-existing-firestore-database-state","content":" If you have not used RxDB before and you already have documents inside of your Firestore database, you have to manually set the _deleted field to false and the serverTimestamp to all existing documents. import { getDocs, query, serverTimestamp } from 'firebase/firestore'; const allDocsResult = await getDocs(query(firestoreCollection)); allDocsResult.forEach(doc => { doc.update({ _deleted: false, serverTimestamp: serverTimestamp() }) }); Also notice that if you do writes from non-RxDB applications, you have to keep these fields in sync. It is recommended to use the Firestore triggers to ensure that. ","version":"Next","tagName":"h2"},{"title":"Filtered Replication","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#filtered-replication","content":" You might need to replicate only a subset of your collection, either to or from Firestore. You can achieve this using push.filter and pull.filter options. const replicationState = replicateFirestore( { collection: myRxCollection, firestore: { projectId, database: firestoreDatabase, collection: firestoreCollection }, pull: { filter: [ where('ownerId', '==', userId) ] }, push: { filter: (item) => item.syncEnabled === true } } ); Keep in mind that you can not use inequality operators (<, <=, !=, not-in, >, or >=) in pull.filter since that would cause a conflict with ordering by serverTimestamp. ","version":"Next","tagName":"h2"},{"title":"Replication with CouchDB","type":0,"sectionRef":"#","url":"/replication-couchdb.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#pros","content":" Faster initial replication.Works with any RxStorage, not just PouchDB.Easier conflict handling because conflicts are handled during replication and not afterwards.Does not have to store all document revisions on the client, only stores the newest version. ","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#cons","content":" Does not support the replication of attachments.Like all CouchDB replication plugins, this one is also limited to replicating 6 collections in parallel. Read this for workarounds ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#usage","content":" Start the replication via replicateCouchDB(). import { replicateCouchDB } from 'rxdb/plugins/replication-couchdb'; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, // url to the CouchDB endpoint (required) url: 'http://example.com/db/humans', /** * true for live replication, * false for a one-time replication. * [default=true] */ live: true, /** * A custom fetch() method can be provided * to add authentication or credentials. * Can be swapped out dynamically * by running 'replicationState.fetch = newFetchMethod;'. 
* (optional) */ fetch: myCustomFetchMethod, pull: { /** * Amount of documents to be fetched in one HTTP request * (optional) */ batchSize: 60, /** * Custom modifier to mutate pulled documents * before storing them in RxDB. * (optional) */ modifier: docData => {/* ... */}, /** * Heartbeat time in milliseconds * for the long polling of the changestream. * @link https://docs.couchdb.org/en/3.2.2-docs/api/database/changes.html * (optional, default=60000) */ heartbeat: 60000 }, push: { /** * How many local changes to process at once. * (optional) */ batchSize: 60, /** * Custom modifier to mutate documents * before sending them to the CouchDB endpoint. * (optional) */ modifier: docData => {/* ... */} } } ); When you call replicateCouchDB() it returns a RxCouchDBReplicationState which can be used to subscribe to events, for debugging or other functions. It extends the RxReplicationState so any other method that can be used there can also be used on the CouchDB replication state. ","version":"Next","tagName":"h2"},{"title":"Conflict handling","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#conflict-handling","content":" When conflicts appear during replication, the conflictHandler of the RxCollection is used, the same as with the other replication plugins. Read more about conflict handling here. ","version":"Next","tagName":"h2"},{"title":"Auth example","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#auth-example","content":" Let's say for authentication you need to add a bearer token as HTTP header to each request. You can achieve that by crafting a custom fetch() method that adds the header field. const myCustomFetch = (url, options) => { // flat clone the given options to not mutate the input const optionsWithAuth = Object.assign({}, options); // ensure the headers property exists if(!optionsWithAuth.headers) { optionsWithAuth.headers = {}; } // add the auth token to the headers optionsWithAuth.headers['Authorization'] = 'Basic S0VLU0UhIExFQ0...'; // call the original fetch function with our custom options. return fetch( url, optionsWithAuth ); }; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, url: 'http://example.com/db/humans', /** * Add the custom fetch function here. */ fetch: myCustomFetch, pull: {}, push: {} } ); Also when your bearer token changes over time, you can set a new custom fetch method while the replication is running: replicationState.fetch = newCustomFetchMethod; Also there is a helper method getFetchWithCouchDBAuthorization() to create a fetch handler with authorization: import { replicateCouchDB, getFetchWithCouchDBAuthorization } from 'rxdb/plugins/replication-couchdb'; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, url: 'http://example.com/db/humans', /** * Add the custom fetch function here. */ fetch: getFetchWithCouchDBAuthorization('myUsername', 'myPassword'), pull: {}, push: {} } ); ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#limitations","content":" Since CouchDB only allows synchronization through HTTP1.1 long polling requests, there is a limitation of 6 active synchronization connections before the browser prevents sending any further request. 
This limitation is at the level of browser per tab per domain (some browsers, especially older ones, might have a different limit, see here). Since this limitation is at the browser level, there are several solutions: Use only a single database for all entities and set a "type" field for each of the documents. Create multiple subdomains for CouchDB and use a max of 6 active synchronizations (or less) for each. Use a proxy (ex: HAProxy) between the browser and CouchDB and configure it to use HTTP2.0, since HTTP2.0 does not have this connection limit. If you use nginx in front of your CouchDB, you can use these settings to enable http2-proxying to prevent the connection limit problem: server { http2 on; location /db { rewrite /db/(.*) /$1 break; proxy_pass http://172.0.0.1:5984; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Connection "keep-alive"; } } ","version":"Next","tagName":"h2"},{"title":"Known problems","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#known-problems","content":" ","version":"Next","tagName":"h2"},{"title":"Database missing","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#database-missing","content":" In contrast to PouchDB, this plugin does NOT automatically create missing CouchDB databases. If your CouchDB server does not have a database yet, you have to create it by yourself by running a PUT request to the database name url: // create a 'humans' CouchDB database on the server const remoteDatabaseName = 'humans'; await fetch( 'http://example.com/db/' + remoteDatabaseName, { method: 'PUT' } ); ","version":"Next","tagName":"h3"},{"title":"React Native","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#react-native","content":" React Native does not have a global fetch method. You have to import the fetch method from the cross-fetch package: import crossFetch from 'cross-fetch'; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, url: 'http://example.com/db/humans', fetch: crossFetch, pull: {}, push: {} } ); ","version":"Next","tagName":"h2"},{"title":"The RxDB Plugin replication-p2p has been renamed to replication-webrtc","type":0,"sectionRef":"#","url":"/replication-p2p.html","content":"The RxDB Plugin replication-p2p has been renamed to replication-webrtc The new documentation page has been moved to here","keywords":"","version":"Next"},{"title":"Replication with NATS","type":0,"sectionRef":"#","url":"/replication-nats.html","content":"","keywords":"","version":"Next"},{"title":"Precondition","type":1,"pageTitle":"Replication with NATS","url":"/replication-nats.html#precondition","content":" For the replication endpoint, the NATS cluster must have JetStream enabled and store all message data as structured JSON. The easiest way to start a compatible NATS server is to use the official docker image: docker run --rm --name rxdb-nats -p 4222:4222 nats:2.9.17 -js ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Replication with NATS","url":"/replication-nats.html#usage","content":" To start the replication, import the replicateNats() method from the RxDB plugin and call it with the collection that must be replicated. The replication runs per RxCollection; you can replicate multiple RxCollections by starting a new replication for each of them. 
import { replicateNats } from 'rxdb/plugins/replication-nats'; const replicationState = replicateNats({ collection: myRxCollection, replicationIdentifier: 'my-nats-replication-collection-A', // in NATS, each stream needs a name streamName: 'stream-for-replication-A', /** * The subject prefix determines how the documents are stored in NATS. * For example the document with id 'alice' will have the subject 'foobar.alice' */ subjectPrefix: 'foobar', connection: { servers: 'localhost:4222' }, live: true, pull: { batchSize: 30 }, push: { batchSize: 30 } }); ","version":"Next","tagName":"h2"},{"title":"Handling deletes","type":1,"pageTitle":"Replication with NATS","url":"/replication-nats.html#handling-deletes","content":" RxDB requires you to never fully delete documents. This is needed to be able to replicate the deletion state of a document to other instances. The NATS replication will set a boolean _deleted field to all documents to indicate the deletion state. You can change this by setting a different deletedField in the sync options. ","version":"Next","tagName":"h2"},{"title":"RxDB Server Replication","type":0,"sectionRef":"#","url":"/replication-server.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server.html#usage","content":" The replication server plugin is imported from the rxdb-server npm package. Then you start the replication with a given collection and endpoint url by calling replicateServer(). import { replicateServer } from 'rxdb-server/plugins/replication-server'; const replicationState = await replicateServer({ collection: usersCollection, replicationIdentifier: 'my-server-replication', url: 'http://localhost:80/users/0', // endpoint url with the server's collection schema version at the end headers: { Authorization: 'Bearer S0VLU0UhI...' }, push: {}, pull: {}, live: true }); ","version":"Next","tagName":"h2"},{"title":"outdatedClient$","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server.html#outdatedclient","content":" When you update your schema at the server and run a migration, you end up with a different replication url that has a new schema version number at the end. Your clients might still be running an old version of your application that will no longer be compatible with the endpoint. Therefore when the client tries to call a server endpoint with an outdated schema version, the outdatedClient$ observable emits to tell your client that the application must be updated. With that event you can tell the client to update the application. In a browser application you might want to just reload the page on that event: replicationState.outdatedClient$.subscribe(() => { location.reload(); }); ","version":"Next","tagName":"h2"},{"title":"unauthorized$","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server.html#unauthorized","content":" When your client's auth data is not valid (or no longer valid), the server will no longer accept any requests from your client and will inform the client that the auth headers must be updated. The unauthorized$ observable will emit and expects you to update the headers accordingly so that following requests will be accepted again. replicationState.unauthorized$.subscribe(() => { replicationState.setHeaders({ Authorization: 'Bearer S0VLU0UhI...' 
}); }); ","version":"Next","tagName":"h2"},{"title":"forbidden$","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server.html#forbidden","content":" When your client misbehaves in any way, like updating non-allowed values or changing documents that it is not allowed to change, the server will drop the connection and the replication state will emit on the forbidden$ observable. It will also automatically stop the replication so that your client does not accidentally DOS attack the server. replicationState.forbidden$.subscribe(() => { console.log('Client is behaving wrong'); }); ","version":"Next","tagName":"h2"},{"title":"Custom EventSource implementation","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server.html#custom-eventsource-implementation","content":" For the server-sent events, the eventsource npm package is used instead of the native EventSource API. We need this because the native browser API does not support sending headers with the request, which is required by the server to parse the auth data. If the eventsource package does not work for you, you can set your own implementation when creating the replication. const replicationState = await replicateServer({ /* ... */ eventSource: MyEventSourceConstructor /* ... */ }); ","version":"Next","tagName":"h2"},{"title":"Websocket Replication","type":0,"sectionRef":"#","url":"/replication-websocket.html","content":"","keywords":"","version":"Next"},{"title":"Starting the Websocket Server","type":1,"pageTitle":"Websocket Replication","url":"/replication-websocket.html#starting-the-websocket-server","content":" import { createRxDatabase } from 'rxdb'; import { startWebsocketServer } from 'rxdb/plugins/replication-websocket'; // create a RxDatabase like normal const myDatabase = await createRxDatabase({/* ... */}); // start a websocket server const serverState = await startWebsocketServer({ database: myDatabase, port: 1337, path: '/socket' }); // stop the server await serverState.close(); ","version":"Next","tagName":"h2"},{"title":"Connect to the Websocket Server","type":1,"pageTitle":"Websocket Replication","url":"/replication-websocket.html#connect-to-the-websocket-server","content":" The replication has to be started once for each collection that you want to replicate. import { replicateWithWebsocketServer } from 'rxdb/plugins/replication-websocket'; // start the replication const replicationState = await replicateWithWebsocketServer({ /** * To make the replication work, * the client collection name must be equal * to the server collection name. */ collection: myRxCollection, url: 'ws://localhost:1337/socket' }); // stop the replication await replicationState.cancel(); ","version":"Next","tagName":"h2"},{"title":"Customize","type":1,"pageTitle":"Websocket Replication","url":"/replication-websocket.html#customize","content":" We use the ws npm library, so you can use all optional configuration provided by it. This is especially important to improve performance by opting in to some optional settings. ","version":"Next","tagName":"h2"},{"title":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","type":0,"sectionRef":"#","url":"/replication-webrtc.html","content":"","keywords":"","version":"Next"},{"title":"What is WebRTC?","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#what-is-webrtc","content":" WebRTC stands for Web Real-Time Communication. 
It is an open standard that enables browsers and native apps to exchange audio, video, or arbitrary data directly between peers, bypassing a central server after the initial connection is established. WebRTC uses NAT traversal techniques like ICE (Interactive Connectivity Establishment) to punch through firewalls and establish direct links. This peer-to-peer nature drastically reduces latency while maintaining high security and end-to-end encryption capabilities. For a deeper look at comparing WebRTC with WebSockets and WebTransport, you can read our comprehensive overview. While WebSockets or WebTransport often work in client-server contexts, WebRTC offers direct peer-to-peer connections ideal for fully decentralized data flows. ","version":"Next","tagName":"h2"},{"title":"Benefits of P2P Sync with WebRTC Compared to Client-Server Architecture","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#benefits-of-p2p-sync-with-webrtc-compared-to-client-server-architecture","content":" Reduced Latency - By skipping a central server hop, data travels directly from one client to another, minimizing round-trip times and improving responsiveness.Scalability - New peers can join without overloading a central infrastructure. The sync overhead increases linearly with the number of connections rather than requiring a massive server cluster.Privacy & Ownership - Data stays within the user’s devices, avoiding risks tied to storing data on third-party servers. This design aligns well with local-first or "zero-latency" apps.Resilience - In some scenarios, if the central server is unreachable, P2P connections remain operational (assuming a functioning signaling path). Apps can still replicate data among local networks like when they are in the same Wifi or LAN.Cost Savings - Reducing the reliance on a high-bandwidth server can cut hosting and bandwidth expenses, particularly in high-traffic or IoT-style use cases. ","version":"Next","tagName":"h2"},{"title":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#peer-to-peer-p2p-webrtc-replication-with-the-rxdb-javascript-database","content":" Traditionally, real-time data synchronization depends on centralized servers to manage and distribute updates. In contrast, RxDB’s WebRTC P2P replication allows data to flow directly among clients, removing the server as a data store. This approach is live and fully decentralized, requiring only a signaling server for initial discovery: No master-slave concept - each peer hosts its own local RxDB.Clients (browsers, devices) connect to each other via WebRTC data channels.The RxDB replication protocol then handles pushing/pulling document changes across peers. Because RxDB is a NoSQL database and the replication protocol is straightforward, setting up robust P2P sync is far easier than orchestrating a complex client-server database architecture. ","version":"Next","tagName":"h2"},{"title":"Using RxDB with the WebRTC Replication Plugin","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#using-rxdb-with-the-webrtc-replication-plugin","content":" Before you use this plugin, make sure that you understand how WebRTC works. 
Here we build a todo-app that replicates todo-entries between clients: You can find a fully built example of this at the RxDB Quickstart Repository which you can also try out online. First you create the database and then you can configure the replication: 1. Create the Database and Collection Here we create a database with the Dexie-based storage that stores data inside of IndexedDB in a browser. RxDB has a wide range of storages for other JavaScript runtimes. import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'myTodoDB', storage: getRxStorageDexie() }); await db.addCollections({ todos: { schema: { title: 'todo schema', version: 0, type: 'object', primaryKey: 'id', properties: { id: { type: 'string', maxLength: 100 }, title: { type: 'string' }, done: { type: 'boolean', default: false }, created: { type: 'string', format: 'date-time' } }, required: ['id', 'title', 'done'] } } }); // insert an example document await db.todos.insert({ id: 'todo-1', title: 'P2P demo task', done: false, created: new Date().toISOString() }); 2. Import the WebRTC replication plugin import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; 3. Start the P2P replication To start the replication you have to call replicateWebRTC on the collection. As options you have to provide a topic and a connection handler function that implements the P2PConnectionHandlerCreator interface. By default you should start with the getConnectionHandlerSimplePeer method which uses the simple-peer library and comes shipped with RxDB. const replicationPool = await replicateWebRTC( { // Start the replication for a single collection collection: db.todos, // The topic is like a 'room-name'. All clients with the same topic // will replicate with each other. In most cases you want to use // a different topic string per user. Also you should prefix the topic with // a unique identifier for your app, to ensure you do not let your users connect // with other apps that also use the RxDB P2P Replication. topic: 'my-users-pool', /** * You need a connection handler to be able to create WebRTC connections. * Here we use the simple peer handler which uses the 'simple-peer' npm library. * To learn how to create a custom connection handler, read the source code, * it is pretty simple. */ connectionHandlerCreator: getConnectionHandlerSimplePeer({ // Set the signaling server url. // You can use the server provided by RxDB for tryouts, // but in production you should use your own server instead. signalingServerUrl: 'wss://signaling.rxdb.info/', // only in Node.js, we need the wrtc library // because Node.js does not contain the WebRTC API. wrtc: require('node-datachannel/polyfill'), // only in Node.js, we need the WebSocket library // because Node.js does not contain the WebSocket API. webSocketConstructor: require('ws').WebSocket }), pull: {}, push: {} } ); Notice that in contrast to the other replication plugins, the WebRTC replication returns a replicationPool instead of a single RxReplicationState. The replicationPool contains all replication states of the connected peers in the P2P network. 4. Observe Errors To ensure we log out potential errors, observe the error$ observable of the pool. replicationPool.error$.subscribe(err => console.error('WebRTC Error:', err)); 5. Stop the Replication You can also dynamically stop the replication. 
replicationPool.cancel(); ","version":"Next","tagName":"h2"},{"title":"Live replications","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#live-replications","content":" The WebRTC replication is always live because there cannot be a one-time sync when it is always possible that new peers join the connection pool. Therefore you cannot set the live: false option like in the other replication plugins. ","version":"Next","tagName":"h2"},{"title":"Signaling Server","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#signaling-server","content":" For P2P replication to work with the RxDB WebRTC Replication Plugin, a signaling server is required. The signaling server helps peers discover each other and establish connections. RxDB ships with a default signaling server that can be used with the simple-peer connection handler. This server is made for demonstration purposes and tryouts. It is not reliable and might be offline at any time. In production you must always use your own signaling server instead! Creating a basic signaling server is straightforward. The provided example uses 'socket.io' for WebSocket communication. However, in production, you'd want to create a more robust signaling server with authentication and additional logic to suit your application's needs. Here is a quick example implementation of a signaling server that can be used with the connection handler from getConnectionHandlerSimplePeer(): import { startSignalingServerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const serverState = await startSignalingServerSimplePeer({ port: 8080 // <- port }); For custom signaling servers with more complex logic, you can check the source code of the default one. ","version":"Next","tagName":"h2"},{"title":"Peer Validation","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#peer-validation","content":" By default the replication will replicate with every peer the signaling server tells it about. You can prevent invalid peers from replicating by passing a custom isPeerValid() function that returns true for valid peers and false for invalid peers. const replicationPool = await replicateWebRTC( { /* ... */ isPeerValid: async (peer) => { return true; }, pull: {}, push: {} /* ... */ } ); ","version":"Next","tagName":"h2"},{"title":"Conflict detection in WebRTC replication","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#conflict-detection-in-webrtc-replication","content":" RxDB's conflict handling works by detecting and resolving conflicts that may arise when multiple clients in a decentralized database system attempt to modify the same data concurrently. A custom conflict handler can be set up, which is a plain JavaScript function. The conflict handler is run on each replicated document write and resolves the conflict if required. 
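As a rough sketch, a custom conflict handler can look like the following. The exact input and output shape depends on your RxDB version; this assumes the older functional handler form (pre-v16) and uses todoSchema as a placeholder for the schema shown above:
// assumption: the handler receives the conflicting states and returns the winning document data
const myConflictHandler = (input, context) => {
    // if both document states are equal, there is no conflict to resolve
    if (JSON.stringify(input.newDocumentState) === JSON.stringify(input.realMasterState)) {
        return Promise.resolve({ isEqual: true });
    }
    // otherwise resolve the conflict by letting the current master state win
    return Promise.resolve({ isEqual: false, documentData: input.realMasterState });
};
// the handler is registered per collection:
await db.addCollections({
    todos: {
        schema: todoSchema, // <- placeholder for the todo schema shown above
        conflictHandler: myConflictHandler
    }
});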
Find out more about RxDB conflict handling here ","version":"Next","tagName":"h2"},{"title":"Known problems","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#known-problems","content":" ","version":"Next","tagName":"h2"},{"title":"SimplePeer requires to have process.nextTick()","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#simplepeer-requires-to-have-processnexttick","content":" In the browser you might not have a process variable or process.nextTick() method. But simple-peer uses it, so you have to polyfill it. In webpack you can use the process/browser package to polyfill it: const plugins = [ /* ... */ new webpack.ProvidePlugin({ process: 'process/browser', }) /* ... */ ]; In Angular or other frameworks you can add the polyfill manually: window.process = { nextTick: (fn, ...args) => setTimeout(() => fn(...args)), }; ","version":"Next","tagName":"h3"},{"title":"Polyfill the WebSocket and WebRTC API in Node.js","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#polyfill-the-websocket-and-webrtc-api-in-nodejs","content":" While all modern browsers support the WebRTC and WebSocket APIs, they are missing in Node.js, which will throw the error No WebRTC support: Specify opts.wrtc option in this environment. Therefore you have to polyfill it with a compatible WebRTC and WebSocket polyfill. It is recommended to use the node-datachannel package for WebRTC, which does not come with RxDB but has to be installed beforehand via npm install node-datachannel --save. For the Websocket API use the ws package that is included with RxDB. import nodeDatachannelPolyfill from 'node-datachannel/polyfill'; import { WebSocket } from 'ws'; const replicationPool = await replicateWebRTC( { /* ... */ connectionHandlerCreator: getConnectionHandlerSimplePeer({ signalingServerUrl: 'wss://example.com:8080', wrtc: nodeDatachannelPolyfill, webSocketConstructor: WebSocket }), pull: {}, push: {} /* ... */ } ); ","version":"Next","tagName":"h3"},{"title":"Storing replicated data encrypted on client device","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#storing-replicated-data-encrypted-on-client-device","content":" Storing replicated data encrypted on client devices using the RxDB Encryption Plugin is a pivotal step towards bolstering data security and user privacy. The WebRTC replication plugin seamlessly integrates with the RxDB encryption plugins, providing a robust solution for encrypting sensitive information before it's stored locally. By doing so, it ensures that even if unauthorized access to the device occurs, the data remains protected and unintelligible without the encryption key (or password). This approach is particularly vital in scenarios where user-generated content or confidential data is replicated across devices, as it empowers users with control over their own data while adhering to stringent security standards. Read more about the encryption plugins here. 
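As a minimal sketch of how this fits together (assuming the crypto-js based key-encryption plugin and the Dexie storage; the database name, password and field names are placeholders), the encryption is configured on the storage, the database password and the schema:
import { createRxDatabase } from 'rxdb';
import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js';
import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie';
// wrap the normal storage with the encryption layer
const encryptedStorage = wrappedKeyEncryptionCryptoJsStorage({
    storage: getRxStorageDexie()
});
// the password is used to encrypt/decrypt the fields listed as 'encrypted' in the schema
const encryptedDb = await createRxDatabase({
    name: 'myEncryptedTodoDB',
    storage: encryptedStorage,
    password: 'myLongAndSecretPassword'
});
await encryptedDb.addCollections({
    todos: {
        schema: {
            title: 'encrypted todo schema',
            version: 0,
            type: 'object',
            primaryKey: 'id',
            properties: {
                id: { type: 'string', maxLength: 100 },
                title: { type: 'string' },
                secretNote: { type: 'string' } // <- stored encrypted on disk
            },
            required: ['id', 'title'],
            encrypted: ['secretNote']
        }
    }
});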
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"P2P WebRTC Replication with RxDB - Sync Data between Browsers and Devices in JavaScript","url":"/replication-webrtc.html#follow-up","content":" Check out the RxDB Quickstart to see how to set up your first RxDB database.Explore advanced features like Custom Conflict Handling or Offline-First Performance.Try an example at RxDB Quickstart GitHub to see a working P2P Sync setup.Join the RxDB Community on GitHub or Discord if you have questions or want to share your P2P WebRTC experiences. ","version":"Next","tagName":"h2"},{"title":"Attachments","type":0,"sectionRef":"#","url":"/rx-attachment.html","content":"","keywords":"","version":"Next"},{"title":"Add the attachments plugin","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#add-the-attachments-plugin","content":" To enable the attachments, you have to add the attachments plugin. import { addRxPlugin } from 'rxdb'; import { RxDBAttachmentsPlugin } from 'rxdb/plugins/attachments'; addRxPlugin(RxDBAttachmentsPlugin); ","version":"Next","tagName":"h2"},{"title":"Enable attachments in the schema","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#enable-attachments-in-the-schema","content":" Before you can use attachments, you have to ensure that the attachments-object is set in the schema of your RxCollection. const mySchema = { version: 0, type: 'object', properties: { // . // . // . }, attachments: { encrypted: true // if true, the attachment-data will be encrypted with the db-password } }; const myCollection = await myDatabase.addCollections({ humans: { schema: mySchema } }); ","version":"Next","tagName":"h2"},{"title":"putAttachment()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#putattachment","content":" Adds an attachment to a RxDocument. Returns a Promise with the new attachment. import { createBlob } from 'rxdb'; const attachment = await myDocument.putAttachment( { id: 'cat.txt', // (string) name of the attachment data: createBlob('meowmeow', 'text/plain'), // (string|Blob) data of the attachment type: 'text/plain' // (string) type of the attachment-data like 'image/jpeg' } ); ","version":"Next","tagName":"h2"},{"title":"getAttachment()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#getattachment","content":" Returns an RxAttachment by its id. Returns null when the attachment does not exist. const attachment = myDocument.getAttachment('cat.jpg'); ","version":"Next","tagName":"h2"},{"title":"allAttachments()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#allattachments","content":" Returns an array of all attachments of the RxDocument. const attachments = myDocument.allAttachments(); ","version":"Next","tagName":"h2"},{"title":"allAttachments$","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#allattachments-1","content":" Gets an Observable which emits a stream of all attachments from the document. Re-emits each time an attachment gets added or removed from the RxDocument. const all = []; myDocument.allAttachments$.subscribe( attachments => all = attachments ); ","version":"Next","tagName":"h2"},{"title":"RxAttachment","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#rxattachment","content":" The attachments of RxDB are represented by the type RxAttachment which has the following attributes/methods. 
","version":"Next","tagName":"h2"},{"title":"doc","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#doc","content":" The RxDocument which the attachment is assigned to. ","version":"Next","tagName":"h3"},{"title":"id","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#id","content":" The id as string of the attachment. ","version":"Next","tagName":"h3"},{"title":"type","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#type","content":" The type as string of the attachment. ","version":"Next","tagName":"h3"},{"title":"length","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#length","content":" The length of the data of the attachment as number. ","version":"Next","tagName":"h3"},{"title":"digest","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#digest","content":" The hash of the attachments data as string. note The digest is NOT calculated by RxDB, instead it is calculated by the RxStorage. The only guarantee is that the digest will change when the attachments data changes. ","version":"Next","tagName":"h3"},{"title":"rev","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#rev","content":" The revision-number of the attachment as number. ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#remove","content":" Removes the attachment. Returns a Promise that resolves when done. const attachment = myDocument.getAttachment('cat.jpg'); await attachment.remove(); ","version":"Next","tagName":"h3"},{"title":"getData()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#getdata","content":" Returns a Promise which resolves the attachment's data as Blob. (async) const attachment = myDocument.getAttachment('cat.jpg'); const blob = await attachment.getData(); ","version":"Next","tagName":"h2"},{"title":"getStringData()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#getstringdata","content":" Returns a Promise which resolves the attachment's data as string. const attachment = await myDocument.getAttachment('cat.jpg'); const data = await attachment.getStringData(); Attachment compression Storing many attachments can be a problem when the disc space of the device is exceeded. Therefore it can make sense to compress the attachments before storing them in the RxStorage. With the attachments-compression plugin you can compress the attachments data on write and decompress it on reads. This happens internally and will now change on how you use the api. The compression is run with the Compression Streams API which is only supported on newer browsers. import { wrappedAttachmentsCompressionStorage } from 'rxdb/plugins/attachments-compression'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; // create a wrapped storage with attachment-compression. const storageWithAttachmentsCompression = wrappedAttachmentsCompressionStorage({ storage: getRxStorageIndexedDB() }); const db = await createRxDatabase({ name: 'mydatabase', storage: storageWithAttachmentsCompression }); // set the compression mode at the schema level const mySchema = { version: 0, type: 'object', properties: { // . // . // . }, attachments: { compression: 'deflate' // <- Specify the compression mode here. OneOf ['deflate', 'gzip'] } }; /* ... create your collections as usual and store attachments in them. 
*/ ","version":"Next","tagName":"h2"},{"title":"Replication with GraphQL","type":0,"sectionRef":"#","url":"/replication-graphql.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#usage","content":" Before you use the GraphQL replication, make sure you've learned how the RxDB replication works. ","version":"Next","tagName":"h2"},{"title":"Creating a compatible GraphQL Server","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#creating-a-compatible-graphql-server","content":" At the server-side, there must exist an endpoint which returns newer rows when the last checkpoint is used as input. For example lets say you create a QuerypullHuman which returns a list of document writes that happened after the given checkpoint. For the push-replication, you also need a MutationpushHuman which lets RxDB update data of documents by sending the previous document state and the new client document state. Also for being able to stream all ongoing events, we need a Subscription called streamHuman. input HumanInput { id: ID!, name: String!, lastName: String!, updatedAt: Float!, deleted: Boolean! } type Human { id: ID!, name: String!, lastName: String!, updatedAt: Float!, deleted: Boolean! } input Checkpoint { id: String!, updatedAt: Float! } type HumanPullBulk { documents: [Human]! checkpoint: Checkpoint } type Query { pullHuman(checkpoint: Checkpoint, limit: Int!): HumanPullBulk! } input HumanInputPushRow { assumedMasterState: HeroInputPushRowT0AssumedMasterStateT0 newDocumentState: HeroInputPushRowT0NewDocumentStateT0! } type Mutation { # Returns a list of all conflicts # If no document write caused a conflict, return an empty list. pushHuman(rows: [HumanInputPushRow!]): [Human] } # headers are used to authenticate the subscriptions # over websockets. input Headers { AUTH_TOKEN: String!; } type Subscription { streamHuman(headers: Headers): HumanPullBulk! } The GraphQL resolver for the pullHuman would then look like: const rootValue = { pullHuman: args => { const minId = args.checkpoint ? args.checkpoint.id : ''; const minUpdatedAt = args.checkpoint ? args.checkpoint.updatedAt : 0; // sorted by updatedAt first and the id as second const sortedDocuments = documents.sort((a, b) => { if (a.updatedAt > b.updatedAt) return 1; if (a.updatedAt < b.updatedAt) return -1; if (a.updatedAt === b.updatedAt) { if (a.id > b.id) return 1; if (a.id < b.id) return -1; else return 0; } }); // only return documents newer than the input document const filterForMinUpdatedAtAndId = sortedDocuments.filter(doc => { if (doc.updatedAt < minUpdatedAt) return false; if (doc.updatedAt > minUpdatedAt) return true; if (doc.updatedAt === minUpdatedAt) { // if updatedAt is equal, compare by id if (doc.id > minId) return true; else return false; } }); // only return some documents in one batch const limitedDocs = filterForMinUpdatedAtAndId.slice(0, args.limit); // use the last document for the checkpoint const lastDoc = limitedDocs[limitedDocs.length - 1]; const retCheckpoint = { id: lastDoc.id, updatedAt: lastDoc.updatedAt } return { documents: limitedDocs, checkpoint: retCheckpoint } return limited; } } For examples for the other resolvers, consult the GraphQL Example Project. ","version":"Next","tagName":"h3"},{"title":"RxDB Client","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#rxdb-client","content":" Pull replication For the pull-replication, you first need a pullQueryBuilder. 
This is a function that gets the last replication checkpoint and a limit as input and returns an object with a GraphQL-query and its variables (or a promise that resolves to the same object). RxDB will use the query builder to construct what is later sent to the GraphQL endpoint. const pullQueryBuilder = (checkpoint, limit) => { /** * The first pull does not have a checkpoint * so we fill it up with defaults */ if (!checkpoint) { checkpoint = { id: '', updatedAt: 0 }; } const query = `query PullHuman($checkpoint: CheckpointInput, $limit: Int!) { pullHuman(checkpoint: $checkpoint, limit: $limit) { documents { id name age updatedAt deleted } checkpoint { id updatedAt } } }`; return { query, operationName: 'PullHuman', variables: { checkpoint, limit } }; }; With the queryBuilder, you can then set up the pull-replication. import { replicateGraphQL } from 'rxdb/plugins/replication-graphql'; const replicationState = replicateGraphQL( { collection: myRxCollection, // urls to the GraphQL endpoints url: { http: 'http://example.com/graphql' }, pull: { queryBuilder: pullQueryBuilder, // the queryBuilder from above modifier: doc => doc, // (optional) modifies all pulled documents before they are handled by RxDB dataPath: undefined, // (optional) specifies the object path to access the document(s). Otherwise, the first result of the response data is used. /** * Amount of documents that the remote will send in one request. * If the response contains less than [batchSize] documents, * RxDB will assume there are no more changes on the backend * that are not replicated. * This value is the same as the limit in the pullHuman() schema. * [default=100] */ batchSize: 50 }, // headers which will be used in http requests against the server. headers: { Authorization: 'Bearer abcde...' }, /** * Options that have been inherited from the RxReplication */ deletedField: 'deleted', live: true, retryTime: 1000 * 5, waitForLeadership: true, autoStart: true, } ); Push replication For the push-replication, you also need a queryBuilder. Here, the builder receives the changed document rows as input, which have to be sent to the server. It also returns a GraphQL-Query and its data. const pushQueryBuilder = rows => { const query = ` mutation PushHuman($writeRows: [HumanInputPushRow!]) { pushHuman(writeRows: $writeRows) { id name age updatedAt deleted } } `; const variables = { writeRows: rows }; return { query, operationName: 'PushHuman', variables }; }; With the queryBuilder, you can then set up the push-replication. const replicationState = replicateGraphQL( { collection: myRxCollection, // urls to the GraphQL endpoints url: { http: 'http://example.com/graphql' }, push: { queryBuilder: pushQueryBuilder, // the queryBuilder from above /** * batchSize (optional) * Amount of documents that will be pushed to the server in a single request. */ batchSize: 5, /** * modifier (optional) * Modifies all pushed documents before they are sent to the GraphQL endpoint. * Returning null will skip the document. */ modifier: doc => doc }, headers: { Authorization: 'Bearer abcde...' }, pull: { /* ... */ }, /* ... */ } ); Pull Stream To create a realtime replication, you need to create a pull stream that pulls ongoing writes from the server. The pull stream gets the headers of the RxReplicationState as input, so that it can be authenticated on the backend. 
const pullStreamQueryBuilder = (headers) => { const query = `subscription onStream($headers: Headers) { streamHuman(headers: $headers) { documents { id, name, age, updatedAt, deleted }, checkpoint { id updatedAt } } }`; return { query, variables: { headers } }; }; With the pullStreamQueryBuilder you can then start a realtime replication. const replicationState = replicateGraphQL( { collection: myRxCollection, // urls to the GraphQL endpoints url: { http: 'http://example.com/graphql', ws: 'ws://example.com/subscriptions' // <- The websocket has to use a different url. }, push: { batchSize: 100, queryBuilder: pushQueryBuilder }, headers: { Authorization: 'Bearer abcde...' }, pull: { batchSize: 100, queryBuilder: pullQueryBuilder, streamQueryBuilder: pullStreamQueryBuilder, includeWsHeaders: false, // Includes headers as connection parameter to Websocket. // Websocket options that can be passed as a parameter to initialize the subscription // Anything from the graphql-ws ClientOptions can be applied - https://the-guild.dev/graphql/ws/docs/interfaces/client.ClientOptions // Except these parameters: 'url', 'shouldRetry', 'webSocketImpl' - locked for internal usage // Note: if you provide connectionParams as a wsOption, make sure it returns any necessary headers (e.g. authorization) // because providing your own connectionParams prevents headers from being included automatically wsOptions: { retryAttempts: 10, } }, deletedField: 'deleted' } ); note If it is not possible to create a websocket server on your backend, you can use any other method to pull out the ongoing events from the backend and then send them into RxReplicationState.emitEvent(). ","version":"Next","tagName":"h3"},{"title":"Transforming null to undefined in optional fields","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#transforming-null-to-undefined-in-optional-fields","content":" GraphQL fills up non-existent optional values with null while RxDB requires them to be undefined. Therefore, if your schema contains optional properties, you have to transform the pulled data to switch out null for undefined: const replicationState: RxGraphQLReplicationState<RxDocType> = replicateGraphQL( { collection: myRxCollection, url: {/* ... */}, headers: {/* ... */}, push: {/* ... */}, pull: { queryBuilder: pullQueryBuilder, modifier: (doc => { // We have to remove optional non-existent field values // they are set as null by GraphQL but should be undefined Object.entries(doc).forEach(([k, v]) => { if (v === null) { delete doc[k]; } }); return doc; }) }, /* ... */ } ); ","version":"Next","tagName":"h3"},{"title":"pull.responseModifier","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#pullresponsemodifier","content":" With the pull.responseModifier you can modify the whole response from the GraphQL endpoint before it is processed by RxDB. For example if your endpoint is not capable of returning a valid checkpoint, but instead only returns the plain document array, you can use the responseModifier to aggregate the checkpoint from the returned documents. import { lastOfArray } from 'rxdb'; const replicationState: RxGraphQLReplicationState<RxDocType> = replicateGraphQL( { collection: myRxCollection, url: {/* ... */}, headers: {/* ... */}, push: {/* ... 
*/}, pull: { responseModifier: async function( plainResponse, // the exact response that was returned from the server origin, // either 'handler' if plainResponse came from the pull.handler, or 'stream' if it came from the pull.stream requestCheckpoint // if origin==='handler', the requestCheckpoint contains the checkpoint that was sent to the backend ) { /** * In this example we aggregate the checkpoint from the documents array * that was returned from the graphql endpoint. */ const docs = plainResponse; return { documents: docs, checkpoint: docs.length === 0 ? requestCheckpoint : { name: lastOfArray(docs).name, updatedAt: lastOfArray(docs).updatedAt } }; } }, /* ... */ } ); ","version":"Next","tagName":"h3"},{"title":"push.responseModifier","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#pushresponsemodifier","content":" It's also possible to modify the response of a push mutation. For example, if your server returns more than just the conflicting docs: type PushResponse { conflicts: [Human] conflictMessages: [ReplicationConflictMessage] } type Mutation { # Returns a PushResponse type that contains the conflicts along with other information pushHuman(rows: [HumanInputPushRow!]): PushResponse! } const replicationState: RxGraphQLReplicationState<RxDocType> = replicateGraphQL( { collection: myRxCollection, url: {/* ... */}, headers: {/* ... */}, push: { responseModifier: async function (plainResponse) { /** * In this example we aggregate the conflicting documents from a response object */ return plainResponse.conflicts; }, }, pull: {/* ... */}, /* ... */ } ); Helper Functions RxDB provides the helper functions graphQLSchemaFromRxSchema(), pullQueryBuilderFromRxSchema(), pullStreamBuilderFromRxSchema() and pushQueryBuilderFromRxSchema() that can be used to generate handlers and schemas from the RxJsonSchema. To learn how to use them, please inspect the GraphQL Example. ","version":"Next","tagName":"h3"},{"title":"RxGraphQLReplicationState","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#rxgraphqlreplicationstate","content":" When you call myCollection.syncGraphQL() it returns a RxGraphQLReplicationState which can be used to subscribe to events, for debugging or other functions. It extends the RxReplicationState with some GraphQL-specific methods. .setHeaders() Changes the headers for the replication after it has been set up. replicationState.setHeaders({ Authorization: `...` }); Sending Cookies The underlying fetch framework uses a same-origin policy for credentials by default. That means cookies and session data are only shared if your backend and frontend run on the same domain and port. Pass the credentials parameter to include cookies in requests to servers from different origins via: replicationState.setCredentials('include'); or directly pass it in the replicateGraphQL function: replicateGraphQL( { collection: myRxCollection, /* ... */ credentials: 'include', /* ... */ } ); See the fetch spec for more information about available options. 
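Because the RxGraphQLReplicationState extends the RxReplicationState, the generic replication observables and methods can also be used for debugging. A small sketch (what you do in the subscription handler is up to you):
// log all replication errors
replicationState.error$.subscribe(err => console.error('replication error:', err));
// wait until the initial pull/push cycle has finished
await replicationState.awaitInitialReplication();
// stop the replication when it is no longer needed
await replicationState.cancel();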
note To play around, check out the full example of the RxDB GraphQL replication with server and client ","version":"Next","tagName":"h3"},{"title":"HTTP Replication from a custom server to RxDB clients","type":0,"sectionRef":"#","url":"/replication-http.html","content":"","keywords":"","version":"Next"},{"title":"Setup","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#setup","content":" RxDB does not have a specific HTTP-replication plugin because the replication primitives plugin is simple enough to start an HTTP replication on top of it. We import the replicateRxCollection function and start the replication from there for a single RxCollection. // > client.ts import { replicateRxCollection } from 'rxdb/plugins/replication'; const replicationState = await replicateRxCollection({ collection: myRxCollection, replicationIdentifier: 'my-http-replication', push: { /* add settings from below */ }, pull: { /* add settings from below */ } }); On the server side, we start an express server that has a MongoDB connection and serves the HTTP requests of the client. // > server.ts import { MongoClient } from 'mongodb'; import express from 'express'; const mongoClient = new MongoClient('mongodb://localhost:27017/'); const mongoConnection = await mongoClient.connect(); const mongoDatabase = mongoConnection.db('myDatabase'); const mongoCollection = await mongoDatabase.collection('myDocs'); const app = express(); app.use(express.json()); /* ... add routes from below */ app.listen(80, () => { console.log(`Example app listening on port 80`) }); ","version":"Next","tagName":"h2"},{"title":"Pull from the server to the client","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#pull-from-the-server-to-the-client","content":" First we need to implement the pull handler. This is used by the RxDB replication to fetch all document writes that happened after a given checkpoint. The checkpoint format is not determined by RxDB, instead the server can use any type of checkpoint that can be used to iterate across document writes. Here we will just use a unix timestamp updatedAt and a string id. On the client we add the pull.handler to the replication setting. The handler requests the correct server URL and fetches the documents. // > client.ts const replicationState = await replicateRxCollection({ /* ... */ pull: { async handler(checkpointOrNull, batchSize){ const updatedAt = checkpointOrNull ? checkpointOrNull.updatedAt : 0; const id = checkpointOrNull ? checkpointOrNull.id : ''; const response = await fetch(`https://localhost/pull?updatedAt=${updatedAt}&id=${id}&limit=${batchSize}`); const data = await response.json(); return { documents: data.documents, checkpoint: data.checkpoint }; } } /* ... */ }); The server responds with an array of document data based on the given checkpoint and a new checkpoint. The server also has to respect the batchSize so that RxDB knows there are no more new documents when the server returns a non-full array. // > server.ts import { lastOfArray } from 'rxdb/plugins/core'; app.get('/pull', async (req, res) => { const id = req.query.id; const updatedAt = parseFloat(req.query.updatedAt); const documents = await mongoCollection.find({ $or: [ /** * Notice that we have to compare the updatedAt AND the id field * because the updatedAt field is not unique and when two documents have * the same updatedAt, we can still "sort" them by their id.
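* Combined with the sort below, this $or condition walks the documents in (updatedAt, id) order * and only returns writes that happened strictly after the given checkpoint.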
*/ { updatedAt: { $gt: updatedAt } }, { updatedAt: { $eq: updatedAt }, id: { $gt: id } } ] }).sort({ updatedAt: 1, id: 1 }).limit(parseInt(req.query.limit, 10)).toArray(); const newCheckpoint = documents.length === 0 ? { id, updatedAt } : { id: lastOfArray(documents).id, updatedAt: lastOfArray(documents).updatedAt }; res.setHeader('Content-Type', 'application/json'); res.end(JSON.stringify({ documents, checkpoint: newCheckpoint })); }); ","version":"Next","tagName":"h2"},{"title":"Push from the Client to the Server","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#push-from-the-client-to-the-server","content":" To send client side writes to the server, we have to implement the push.handler. It gets an array of change rows as input and has to return only the conflicting documents that have not been written to the server. Each change row contains a newDocumentState and an optional assumedMasterState. // > client.ts const replicationState = await replicateRxCollection({ /* ... */ push: { async handler(changeRows){ const rawResponse = await fetch('https://localhost/push', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify(changeRows) }); const conflictsArray = await rawResponse.json(); return conflictsArray; } } /* ... */ }); On the server we first have to detect if the assumedMasterState is correct for each row. If yes, we have to write the new document state to the database, otherwise we have to return the "real" master state in the conflict array. note For simplicity in this tutorial, we do not use transactions. In reality you should run the full push function inside of a MongoDB transaction to ensure that no other process can mix up the document state while the writes are processed. Also you should call batch operations on MongoDB instead of running the operations for each change row. The server also creates an event that is emitted to the pullStream$ which is later used in the pull.stream$. // > server.ts import { lastOfArray } from 'rxdb/plugins/core'; import { Subject } from 'rxjs'; // used in the pull.stream$ below let lastEventId = 0; const pullStream$ = new Subject(); app.post('/push', async (req, res) => { const changeRows = req.body; const conflicts = []; const event = { id: lastEventId++, documents: [], checkpoint: null }; for(const changeRow of changeRows){ const realMasterState = await mongoCollection.findOne({id: changeRow.newDocumentState.id}); if( realMasterState && !changeRow.assumedMasterState || ( realMasterState && changeRow.assumedMasterState && /* * For simplicity we detect conflicts on the server by only comparing the updatedAt value. * In reality you might want to do a more complex check or a deep-equal comparison.
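* A hedged sketch of such a deep-equal check (the deepEqual() helper is an assumption, e.g. from the 'fast-deep-equal' package): * const assumedStateMatches = changeRow.assumedMasterState && deepEqual(realMasterState, changeRow.assumedMasterState);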
*/ realMasterState.updatedAt !== changeRow.assumedMasterState.updatedAt ) ) { // we have a conflict conflicts.push(realMasterState); } else { // no conflict -> write the document await mongoCollection.replaceOne( {id: changeRow.newDocumentState.id}, changeRow.newDocumentState, { upsert: true } ); event.documents.push(changeRow.newDocumentState); event.checkpoint = { id: changeRow.newDocumentState.id, updatedAt: changeRow.newDocumentState.updatedAt }; } } if(event.documents.length > 0){ pullStream$.next(event); } res.setHeader('Content-Type', 'application/json'); res.end(JSON.stringify(conflicts)); }); ","version":"Next","tagName":"h2"},{"title":"pullStream$ for ongoing changes","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#pullstream-for-ongoing-changes","content":" While the normal pull handler is used when the replication is in iteration mode, we also need a stream of ongoing changes when the replication is in event observation mode. The pull.stream$ is implemented with server-sent events that are sent from the server to the client. The client connects to a URL and receives server-sent events that contain all ongoing writes. // > client.ts import { Subject } from 'rxjs'; const myPullStream$ = new Subject(); const eventSource = new EventSource('http://localhost/pullStream', { withCredentials: true }); eventSource.onmessage = event => { const eventData = JSON.parse(event.data); myPullStream$.next({ documents: eventData.documents, checkpoint: eventData.checkpoint }); }; const replicationState = await replicateRxCollection({ /* ... */ pull: { /* ... */ stream$: myPullStream$.asObservable() } /* ... */ }); On the server we have to implement the pullStream route and emit the events. We use the pullStream$ observable from above to fetch all ongoing events and send them to the client. // > server.ts app.get('/pullStream', (req, res) => { res.writeHead(200, { 'Content-Type': 'text/event-stream', 'Connection': 'keep-alive', 'Cache-Control': 'no-cache' }); const subscription = pullStream$.subscribe(event => res.write('data: ' + JSON.stringify(event) + '\\n\\n')); req.on('close', () => subscription.unsubscribe()); }); ","version":"Next","tagName":"h2"},{"title":"pullStream$ RESYNC flag","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#pullstream-resync-flag","content":" In case the client loses the connection, the EventSource will automatically reconnect, but some changes might have been missed in the meantime. The replication has to be informed that it might have missed events by emitting a RESYNC flag from the pull.stream$. The replication will then catch up by switching to the iteration mode until it is in sync with the server again. // > client.ts eventSource.onerror = () => myPullStream$.next('RESYNC'); The purpose of the RESYNC flag is to tell the client that "something might have changed" and then the client can react to that information without having to run operations in an interval. If your backend is not capable of emitting the actual documents and checkpoint in the pull stream, you could just map all events to the RESYNC flag.
This would make the replication work with a slight performance drawback: // > client.ts import { Subject } from 'rxjs'; const myPullStream$ = new Subject(); const eventSource = new EventSource('http://localhost/pullStream', { withCredentials: true }); eventSource.onmessage = () => myPullStream$.next('RESYNC'); const replicationState = await replicateRxCollection({ pull: { stream$: myPullStream$.asObservable() } }); ","version":"Next","tagName":"h3"},{"title":"Missing implementation details","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#missing-implementation-details","content":" Here we only covered the basics of doing a HTTP replication between RxDB clients and a server. We did not cover the following aspects of the implementation: Authentication: To authenticate the client on the server, you might want to send authentication headers with the HTTP requestsSkip events on the pull.stream$ for the client that caused the changes to improve performance. ","version":"Next","tagName":"h2"},{"title":"RxDB Database Replication Protocol","type":0,"sectionRef":"#","url":"/replication.html","content":"","keywords":"","version":"Next"},{"title":"Design Decisions","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#design-decisions","content":" In contrast to other database replication protocols, the RxDB replication protocol is optimized for client side apps. Backend-Agnostic: Because it relies on a clearly defined interface and simple methods, the RxDB replication protocol can integrate with any existing backend infrastructure. There's no need to set up a special server or database solution - just implement the pullHandler, pushHandler, and pullStream according to your existing architecture. Optimized for Browser Environments: The protocol takes advantage of bulk document handling in one go, which drastically reduces network overhead. By grouping updates and fetches into batches, client apps can push and pull large sets of changes efficiently, keeping your UI responsive. Straightforward Implementation: The RxDB replication logic mirrors a "git-like" approach for merging and conflict resolution, making it easy to understand. The steps are clear: you pull updates, push local changes, and handle conflicts if both sides have updated the same document. Offline-First Support: By incorporating conflict handling at the client side, the protocol fully supports offline-first apps. Users can continue making changes while offline, and those updates will sync seamlessly once a connection is reestablished - all without risking data loss or having undefined behavior. ","version":"Next","tagName":"h2"},{"title":"Replication protocol on the document level","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#replication-protocol-on-the-document-level","content":" On the RxDocument level, the replication works like git, where the fork/client contains all new writes and must be merged with the master/server before it can push its new state to the master/server. 
A---B-----------D master/server state \\ / B---C---D fork/client state The client pulls the latest state B from the master.The client does some changes C+D.The client pushes these changes to the master by sending the latest known master state B and the new client state D of the document.If the master state is equal to the latest master B state of the client, the new client state D is set as the latest master state.If the master also had changes and so the latest master change is different then the one that the client assumes, we have a conflict that has to be resolved on the client. ","version":"Next","tagName":"h2"},{"title":"Replication protocol on the transfer level","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#replication-protocol-on-the-transfer-level","content":" When document states are transferred, all handlers use batches of documents for better performance. The server must implement the following methods to be compatible with the replication: pullHandler Get the last checkpoint (or null) as input. Returns all documents that have been written after the given checkpoint. Also returns the checkpoint of the latest written returned document.pushHandler a method that can be called by the client to send client side writes to the master. It gets an array with the assumedMasterState and the newForkState of each document write as input. It must return an array that contains the master document states of all conflicts. If there are no conflicts, it must return an empty array.pullStream an observable that emits batches of all master writes and the latest checkpoint of the write batches. +--------+ +--------+ | | pullHandler() | | | |---------------------> | | | | | | | | | | | Client | pushHandler() | Server | | |---------------------> | | | | | | | | pullStream$ | | | | <-------------------------| | +--------+ +--------+ The replication runs in two different modes: ","version":"Next","tagName":"h2"},{"title":"Checkpoint iteration","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#checkpoint-iteration","content":" On first initial replication, or when the client comes online again, a checkpoint based iteration is used to catch up with the server state. A checkpoint is a subset of the fields of the last pulled document. When the checkpoint is send to the backend via pullHandler(), the backend must be able to respond with all documents that have been written after the given checkpoint. For example if your documents contain an id and an updatedAt field, these two can be used as checkpoint. When the checkpoint iteration reaches the last checkpoint, where the backend returns an empty array because there are no newer documents, the replication will automatically switch to the event observation mode. ","version":"Next","tagName":"h3"},{"title":"Event observation","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#event-observation","content":" While the client is connected to the backend, the events from the backend are observed via pullStream$ and persisted to the client. If your backend for any reason is not able to provide a full pullStream$ that contains all events and the checkpoint, you can instead only emit RESYNC events that tell RxDB that anything unknown has changed on the server and it should run the pull replication via checkpoint iteration. When the client goes offline and online again, it might happen that the pullStream$ has missed out some events. 
Therefore the pullStream$ should also emit a RESYNC event each time the client reconnects, so that the client can become in sync with the backend via the checkpoint iteration mode. ","version":"Next","tagName":"h3"},{"title":"Data layout on the server","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#data-layout-on-the-server","content":" To use the replication you first have to ensure that: documents are deterministic sortable by their last write time deterministic means that even if two documents have the same last write time, they have a predictable sort order. This is most often ensured by using the primaryKey as second sort parameter as part of the checkpoint. documents are never deleted, instead the _deleted field is set to true. This is needed so that the deletion state of a document exists in the database and can be replicated with other instances. If your backend uses a different field to mark deleted documents, you have to transform the data in the push/pull handlers or with the modifiers. For example if your documents look like this: const docData = { "id": "foobar", "name": "Alice", "lastName": "Wilson", /** * Contains the last write timestamp * so all documents writes can be sorted by that value * when they are fetched from the remote instance. */ "updatedAt": 1564483474, /** * Instead of physically deleting documents, * a deleted document gets replicated. */ "_deleted": false } Then your data is always sortable by updatedAt. This ensures that when RxDB fetches 'new' changes via pullHandler(), it can send the latest updatedAt+id checkpoint to the remote endpoint and then receive all newer documents. By default, the field is _deleted. If your remote endpoint uses a different field to mark deleted documents, you can set the deletedField in the replication options which will automatically map the field on all pull and push requests. ","version":"Next","tagName":"h2"},{"title":"Conflict handling","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#conflict-handling","content":" When multiple clients (or the server) modify the same document at the same time (or when they are offline), it can happen that a conflict arises during the replication. A---B1---C1---X master/server state \\ / B1---C2 fork/client state In the case above, the client would tell the master to move the document state from B1 to C2 by calling pushHandler(). But because the actual master state is C1 and not B1, the master would reject the write by sending back the actual master state C1.RxDB resolves all conflicts on the client so it would call the conflict handler of the RxCollection and create a new document state D that can then be written to the master. A---B1---C1---X---D master/server state \\ / \\ / B1---C2---D fork/client state The default conflict handler will always drop the fork state and use the master state. This ensures that clients that are offline for a very long time, do not accidentally overwrite other peoples changes when they go online again. You can specify a custom conflict handler by setting the property conflictHandler when calling addCollection(). Learn how to create a custom conflict handler. 
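As an illustration, a last-write-wins handler could keep whichever state has the newer updatedAt field. This is only a hedged sketch: it assumes the object-style conflictHandler interface ({ isEqual, resolve }) of recent RxDB versions and an updatedAt field in your schema, so check the conflict handling docs of your RxDB version for the exact signature: const lastWriteWinsConflictHandler = { // documents count as equal when their update timestamps match (assumes an updatedAt field) isEqual: (a, b) => a.updatedAt === b.updatedAt, // on a real conflict, keep whichever state was written last resolve: (input) => input.newDocumentState.updatedAt > input.realMasterState.updatedAt ? input.newDocumentState : input.realMasterState }; await myDatabase.addCollections({ humans: { schema: mySchema, conflictHandler: lastWriteWinsConflictHandler } });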
","version":"Next","tagName":"h2"},{"title":"replicateRxCollection()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#replicaterxcollection","content":" You can start the replication of a single RxCollection by calling replicateRxCollection() like in the following: import { replicateRxCollection } from 'rxdb/plugins/replication'; import { lastOfArray } from 'rxdb'; const replicationState = await replicateRxCollection({ collection: myRxCollection, /** * An id for the replication to identify it * and so that RxDB is able to resume the replication on app reload. * If you replicate with a remote server, it is recommended to put the * server url into the replicationIdentifier. */ replicationIdentifier: 'my-rest-replication-to-https://example.com/api/sync', /** * By default it will do an ongoing realtime replication. * By settings live: false the replication will run once until the local state * is in sync with the remote state, then it will cancel itself. * (optional), default is true. */ live: true, /** * Time in milliseconds after when a failed backend request * has to be retried. * This time will be skipped if a offline->online switch is detected * via navigator.onLine * (optional), default is 5 seconds. */ retryTime: 5 * 1000, /** * When multiInstance is true, like when you use RxDB in multiple browser tabs, * the replication should always run in only one of the open browser tabs. * If waitForLeadership is true, it will wait until the current instance is leader. * If waitForLeadership is false, it will start replicating, even if it is not leader. * [default=true] */ waitForLeadership: true, /** * If this is set to false, * the replication will not start automatically * but will wait for replicationState.start() being called. * (optional), default is true */ autoStart: true, /** * Custom deleted field, the boolean property of the document data that * marks a document as being deleted. * If your backend uses a different fieldname then '_deleted', set the fieldname here. * RxDB will still store the documents internally with '_deleted', setting this field * only maps the data on the data layer. * * If a custom deleted field contains a non-boolean value, the deleted state * of the documents depends on if the value is truthy or not. So instead of providing a boolean * * deleted value, you could also work with using a 'deletedAt' timestamp instead. * * [default='_deleted'] */ deletedField: 'deleted', /** * Optional, * only needed when you want to replicate local changes to the remote instance. */ push: { /** * Push handler */ async handler(docs) { /** * Push the local documents to a remote REST server. */ const rawResponse = await fetch('https://example.com/api/sync/push', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify({ docs }) }); /** * Contains an array with all conflicts that appeared during this push. * If there were no conflicts, return an empty array. */ const response = await rawResponse.json(); return response; }, /** * Batch size, optional * Defines how many documents will be given to the push handler at once. */ batchSize: 5, /** * Modifies all documents before they are given to the push handler. * Can be used to swap out a custom deleted flag instead of the '_deleted' field. * If the push modifier return null, the document will be skipped and not send to the remote. * Notice that the modifier can be called multiple times and should not contain any side effects. 
* (optional) */ modifier: d => d }, /** * Optional, * only needed when you want to replicate remote changes to the local state. */ pull: { /** * Pull handler */ async handler(lastCheckpoint, batchSize) { const minTimestamp = lastCheckpoint ? lastCheckpoint.updatedAt : 0; /** * In this example we replicate with a remote REST server */ const response = await fetch( `https://example.com/api/sync/?minUpdatedAt=${minTimestamp}&limit=${batchSize}` ); const documentsFromRemote = await response.json(); return { /** * Contains the pulled documents from the remote. * Not that if documentsFromRemote.length < batchSize, * then RxDB assumes that there are no more un-replicated documents * on the backend, so the replication will switch to 'Event observation' mode. */ documents: documentsFromRemote, /** * The last checkpoint of the returned documents. * On the next call to the pull handler, * this checkpoint will be passed as 'lastCheckpoint' */ checkpoint: documentsFromRemote.length === 0 ? lastCheckpoint : { id: lastOfArray(documentsFromRemote).id, updatedAt: lastOfArray(documentsFromRemote).updatedAt } }; }, batchSize: 10, /** * Modifies all documents after they have been pulled * but before they are used by RxDB. * Notice that the modifier can be called multiple times and should not contain any side effects. * (optional) */ modifier: d => d, /** * Stream of the backend document writes. * See below. * You only need a stream$ when you have set live=true */ stream$: pullStream$.asObservable() }, }); /** * Creating the pull stream for realtime replication. * Here we use a websocket but any other way of sending data to the client can be used, * like long polling or server-sent events. */ const pullStream$ = new Subject<RxReplicationPullStreamItem<any, any>>(); let firstOpen = true; function connectSocket() { const socket = new WebSocket('wss://example.com/api/sync/stream'); /** * When the backend sends a new batch of documents+checkpoint, * emit it into the stream$. * * event.data must look like this * { * documents: [ * { * id: 'foobar', * _deleted: false, * updatedAt: 1234 * } * ], * checkpoint: { * id: 'foobar', * updatedAt: 1234 * } * } */ socket.onmessage = event => pullStream$.next(event.data); /** * Automatically reconnect the socket on close and error. */ socket.onclose = () => connectSocket(); socket.onerror = () => socket.close(); socket.onopen = () => { if(firstOpen) { firstOpen = false; } else { /** * When the client is offline and goes online again, * it might have missed out events that happened on the server. * So we have to emit a RESYNC so that the replication goes * into 'Checkpoint iteration' mode until the client is in sync * and then it will go back into 'Event observation' mode again. */ pullStream$.next('RESYNC'); } } } ","version":"Next","tagName":"h2"},{"title":"Multi Tab support","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#multi-tab-support","content":" For better performance, the replication runs only in one instance when RxDB is used in multiple browser tabs or Node.js processes. By setting waitForLeadership: false you can enforce that each tab runs its own replication cycles. If used in a multi instance setting, so when at database creation multiInstance: false was not set, you need to import the leader election plugin so that RxDB can know how many instances exist and which browser tab should run the replication. 
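For example, a minimal sketch of wiring up the leader election plugin before starting the replication (import path as used by RxDB's plugin system): import { addRxPlugin } from 'rxdb'; import { RxDBLeaderElectionPlugin } from 'rxdb/plugins/leader-election'; addRxPlugin(RxDBLeaderElectionPlugin); // with the plugin added, a multiInstance database elects one leader tab and the replication // runs there, unless waitForLeadership: false is set in replicateRxCollection()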
","version":"Next","tagName":"h2"},{"title":"Error handling","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#error-handling","content":" When sending a document to the remote fails for any reason, RxDB will send it again in a later point in time. This happens for all errors. The document write could have already reached the remote instance and be processed, while only the answering fails. The remote instance must be designed to handle this properly and to not crash on duplicate data transmissions. Depending on your use case, it might be ok to just write the duplicate document data again. But for a more resilient error handling you could compare the last write timestamps or add a unique write id field to the document. This field can then be used to detect duplicates and ignore re-send data. Also the replication has an .error$ stream that emits all RxError objects that arise during replication. Notice that these errors contain an inner .parameters.errors field that contains the original error. Also they contain a .parameters.direction field that indicates if the error was thrown during pull or push. You can use these to properly handle errors. For example when the client is outdated, the server might respond with a 426 Upgrade Required error code that can then be used to force a page reload. replicationState.error$.subscribe((error) => { if( error.parameters.errors && error.parameters.errors[0] && error.parameters.errors[0].code === 426 ) { // client is outdated -> enforce a page reload location.reload(); } }); ","version":"Next","tagName":"h2"},{"title":"Security","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#security","content":" Be aware that client side clocks can never be trusted. When you have a client-backend replication, the backend should overwrite the updatedAt timestamp or use another field, when it receives the change from the client. ","version":"Next","tagName":"h2"},{"title":"RxReplicationState","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#rxreplicationstate","content":" The function replicateRxCollection() returns a RxReplicationState that can be used to manage and observe the replication. ","version":"Next","tagName":"h2"},{"title":"Observable","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#observable","content":" To observe the replication, the RxReplicationState has some Observable properties: // emits each document that was received from the remote myRxReplicationState.received$.subscribe(doc => console.dir(doc)); // emits each document that was send to the remote myRxReplicationState.sent$.subscribe(doc => console.dir(doc)); // emits all errors that happen when running the push- & pull-handlers. myRxReplicationState.error$.subscribe(error => console.dir(error)); // emits true when the replication was canceled, false when not. myRxReplicationState.canceled$.subscribe(bool => console.dir(bool)); // emits true when a replication cycle is running, false when not. myRxReplicationState.active$.subscribe(bool => console.dir(bool)); ","version":"Next","tagName":"h3"},{"title":"awaitInitialReplication()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#awaitinitialreplication","content":" With awaitInitialReplication() you can await the initial replication that is done when a full replication cycle was successful finished for the first time. 
The returned promise will never resolve if you cancel the replication before the initial replication can be done. await myRxReplicationState.awaitInitialReplication(); ","version":"Next","tagName":"h3"},{"title":"awaitInSync()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#awaitinsync","content":" Returns a Promise that resolves when: awaitInitialReplication() has emitted.All local data is replicated with the remote.No replication cycle is running or in retry-state. warning When multiInstance: true and waitForLeadership: true and another tab is already running the replication, awaitInSync() will not resolve until the other tab is closed and the replication starts in this tab. await myRxReplicationState.awaitInSync(); warning awaitInitialReplication() and awaitInSync() should not be used to block the application A common mistake in RxDB usage is when developers want to block the app usage until the application is in sync. Often they just await the promise of awaitInitialReplication() or awaitInSync() and show a loading spinner until they resolve. This is dangerous and should not be done because: When multiInstance: true and waitForLeadership: true (default) and another tab is already running the replication, awaitInitialReplication() will not resolve until the other tab is closed and the replication starts in this tab.Your app can no longer be started when the device is offline because there the awaitInitialReplication() will never resolve and the app cannot be used. Instead you should store the last in-sync time in a local document and observe its value on all instances. For example if you want to block clients from using the app if they have not been in sync for the last 24 hours, you could use this code: // update last-in-sync-flag each time replication is in sync await myCollection.insertLocal('last-in-sync', { time: 0 }).catch(); // ensure flag exists myReplicationState.active$.pipe( mergeMap(async() => { await myReplicationState.awaitInSync(); await myCollection.upsertLocal('last-in-sync', { time: Date.now() }) }) ); // observe the flag and toggle loading spinner await showLoadingSpinner(); const oneDay = 1000 * 60 * 60 * 24; await firstValueFrom( myCollection.getLocal$('last-in-sync').pipe( filter(d => d.get('time') > (Date.now() - oneDay)) ) ); await hideLoadingSpinner(); ","version":"Next","tagName":"h3"},{"title":"reSync()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#resync","content":" Triggers a RESYNC cycle where the replication goes into checkpoint iteration until the client is in sync with the backend. Used in unit tests or when no proper pull.stream$ can be implemented so that the client only knows that something has been changed but not what. myRxReplicationState.reSync(); If your backend is not capable of sending events to the client at all, you could run reSync() in an interval so that the client will automatically fetch server changes after some time at least. // trigger RESYNC each 10 seconds. setInterval(() => myRxReplicationState.reSync(), 10 * 1000); ","version":"Next","tagName":"h3"},{"title":"cancel()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#cancel","content":" Cancels the replication. Returns a promise that resolved when everything has been cleaned up. 
await myRxReplicationState.cancel(); ","version":"Next","tagName":"h3"},{"title":"pause()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#pause","content":" Pauses a running replication. The replication can later be resumed with RxReplicationState.start(). await myRxReplicationState.pause(); await myRxReplicationState.start(); // restart ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#remove","content":" Cancels the replication and deletes the metadata of the replication state. This can be used to restart the replication "from scratch". Calling .remove() will only delete the replication metadata, it will NOT delete the documents from the collection of the replication. await myRxReplicationState.remove(); ","version":"Next","tagName":"h3"},{"title":"isStopped()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#isstopped","content":" Returns true if the replication is stopped. This can be if a non-live replication is finished or a replication got canceled. replicationState.isStopped(); // true/false ","version":"Next","tagName":"h3"},{"title":"isPaused()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#ispaused","content":" Returns true if the replication is paused. replicationState.isPaused(); // true/false ","version":"Next","tagName":"h3"},{"title":"Setting a custom initialCheckpoint","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#setting-a-custom-initialcheckpoint","content":" By default, the push replication will start from the beginning of time and push all documents from there to the remote. By setting a custom push.initialCheckpoint, you can tell the replication to only push writes that are newer than the given checkpoint. // store the latest checkpoint of a collection let lastLocalCheckpoint: any; myCollection.checkpoint$.subscribe(checkpoint => lastLocalCheckpoint = checkpoint); // start the replication but only push documents that are newer than the lastLocalCheckpoint const replicationState = replicateRxCollection({ collection: myCollection, replicationIdentifier: 'my-custom-replication-with-init-checkpoint', /* ... */ push: { handler: /* ... */, initialCheckpoint: lastLocalCheckpoint } }); The same can be done for the other direction by setting a pull.initialCheckpoint. Notice that here we need the remote checkpoint from the backend instead of the one from the RxDB storage. // get the last pull checkpoint from the server const lastRemoteCheckpoint = await (await fetch('http://example.com/pull-checkpoint')).json(); // start the replication but only pull documents that are newer than the lastRemoteCheckpoint const replicationState = replicateRxCollection({ collection: myCollection, replicationIdentifier: 'my-custom-replication-with-init-checkpoint', /* ... */ pull: { handler: /* ... */, initialCheckpoint: lastRemoteCheckpoint } }); ","version":"Next","tagName":"h3"},{"title":"toggleOnDocumentVisible","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#toggleondocumentvisible","content":" (experimental) Set this to true to ensure the replication also runs if the tab is currently visbile. This fixes problem in browsers where the replicating leader-elected tab becomes stale or hibernated by the browser to save battery life. 
If the tab loses visibility, the replication will be paused automatically and then restarted when the tab either becomes leader or becomes visible again. const replicationState = replicateRxCollection({ toggleOnDocumentVisible: true, /* ... */ }); ","version":"Next","tagName":"h3"},{"title":"Attachment replication (beta)","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#attachment-replication-beta","content":" Attachment replication is supported in the RxDB replication protocol itself. However, not all replication plugins support it. If you start the replication with a collection that has RxAttachments enabled, the attachment data will be added to all pushed and pulled data. The pushed documents will contain an _attachments object which contains: The attachment meta data (id, length, digest) of all attachments that have not changed. The full attachment data of all attachments that have been updated or added on the client. Deleted attachments are left out of the pushed document. With this data, the backend can decide which attachments must be deleted, added or overwritten. Accordingly, the pulled document must contain the same kind of data if the backend has a new document state with updated attachments. ","version":"Next","tagName":"h3"},{"title":"FAQ","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#faq","content":" I have infinite loops in my replication, how to debug? When you have infinite loops in your replication or random re-runs of http requests after some time, the reason is likely that your pull-handler is crashing. To debug this, add a log to the error$ stream: myRxReplicationState.error$.subscribe(err => console.log('error$', err)); ","version":"Next","tagName":"h2"},{"title":"RxDatabase","type":0,"sectionRef":"#","url":"/rx-database.html","content":"","keywords":"","version":"Next"},{"title":"Creation","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#creation","content":" The database is created by the asynchronous .createRxDatabase() function of the core RxDB module. It has the following parameters: import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'heroesdb', // <- name storage: getRxStorageDexie(), // <- RxStorage /* Optional parameters: */ password: 'myPassword', // <- password (optional) multiInstance: true, // <- multiInstance (optional, default: true) eventReduce: true, // <- eventReduce (optional, default: false) cleanupPolicy: {} // <- custom cleanup policy (optional) }); ","version":"Next","tagName":"h2"},{"title":"name","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#name","content":" The database-name is a string which uniquely identifies the database. When two RxDatabases have the same name and use the same RxStorage, their data can be assumed to be equal and they will share events between each other. Depending on the storage or adapter this can also be used to define the filesystem folder of your data. ","version":"Next","tagName":"h3"},{"title":"storage","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#storage","content":" RxDB works on top of an implementation of the RxStorage interface. This interface is an abstraction that allows you to use different underlying databases that actually handle the documents. Depending on your use case you might use a different storage with different tradeoffs in performance, bundle size or supported runtimes.
There are many RxStorage implementations that can be used depending on the JavaScript environment and performance requirements. For example you can use the Dexie RxStorage in the browser or use the MongoDB RxStorage in Node.js. List of RxStorage implementations // use the Dexie.js RxStorage that stores data in IndexedDB. import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const dbDexie = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageDexie() }); // ...or use the MongoDB RxStorage in Node.js. import { getRxStorageMongoDB } from 'rxdb/plugins/storage-mongodb'; const dbMongo = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageMongoDB({ connection: 'mongodb://localhost:27017,localhost:27018,localhost:27019' }) }); ","version":"Next","tagName":"h3"},{"title":"password","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#password","content":" (optional)If you want to use encrypted fields in the collections of a database, you have to set a password for it. The password must be a string with at least 12 characters. Read more about encryption here. ","version":"Next","tagName":"h3"},{"title":"multiInstance","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#multiinstance","content":" (optional=true)When you create more than one instance of the same database in a single javascript-runtime, you should set multiInstance to true. This will enable the event sharing between the two instances. For example when the user has opened multiple browser windows, events will be shared between them so that both windows react to the same changes.multiInstance should be set to false when you have single-instances like a single Node.js-process, a react-native-app, a cordova-app or a single-window electron app which can decrease the startup time because no instance coordination has to be done. ","version":"Next","tagName":"h3"},{"title":"eventReduce","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#eventreduce","content":" (optional=false) One big benefit of having a realtime database is that big performance optimizations can be done when the database knows a query is observed and the updated results are needed continuously. RxDB uses the EventReduce Algorithm to optimize observer or recurring queries. For better performance, you should always set eventReduce: true. This will also be the default in the next major RxDB version. ","version":"Next","tagName":"h3"},{"title":"ignoreDuplicate","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#ignoreduplicate","content":" (optional=false)If you create multiple RxDatabase-instances with the same name and same adapter, it's very likely that you have done something wrong. To prevent this common mistake, RxDB will throw an error when you do this. In some rare cases like unit-tests, you want to do this intentional by setting ignoreDuplicate to true. Because setting ignoreDuplicate: true in production will decrease the performance by having multiple instances of the same database, ignoreDuplicate is only allowed to be set in dev-mode. 
const db1 = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), ignoreDuplicate: true }); const db2 = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), ignoreDuplicate: true // this create-call will not throw because you explicitly allow it }); ","version":"Next","tagName":"h3"},{"title":"hashFunction","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#hashfunction","content":" By default, RxDB will use crypto.subtle.digest('SHA-256', data) for hashing. If you need a different hash function or the crypto.subtle API is not supported in your JavaScript runtime, you can provide an own hash function instead. A hash function gets a string as input and returns a Promise that resolves a string. // example hash function that runs in plain JavaScript import { sha256 } from 'ohash'; function myOwnHashFunction(input: string) { return Promise.resolve(sha256(input)); } const db = await createRxDatabase({ hashFunction: myOwnHashFunction /* ... */ }); If you get the error message TypeError: Cannot read properties of undefined (reading 'digest') this likely means that you are neither running on localhost nor on https which is why your browser might not allow access to crypto.subtle.digest. ","version":"Next","tagName":"h3"},{"title":"Methods","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#methods","content":" ","version":"Next","tagName":"h2"},{"title":"Observe with $","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#observe-with-","content":" Calling this will return an rxjs-Observable which streams all write events of the RxDatabase. myDb.$.subscribe(changeEvent => console.dir(changeEvent)); ","version":"Next","tagName":"h3"},{"title":"exportJSON()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#exportjson","content":" Use this function to create a json-export from every piece of data in every collection of this database. You can pass true as a parameter to decrypt the encrypted data-fields of your document. Before exportJSON() and importJSON() can be used, you have to add the json-dump plugin. import { addRxPlugin } from 'rxdb'; import { RxDBJsonDumpPlugin } from 'rxdb/plugins/json-dump'; addRxPlugin(RxDBJsonDumpPlugin); myDatabase.exportJSON() .then(json => console.dir(json)); ","version":"Next","tagName":"h3"},{"title":"importJSON()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#importjson","content":" To import the json-dumps into your database, use this function. // import the dump to the database emptyDatabase.importJSON(json) .then(() => console.log('done')); ","version":"Next","tagName":"h3"},{"title":"backup()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#backup","content":" Writes the current (or ongoing) database state to the filesystem. Read more ","version":"Next","tagName":"h3"},{"title":"waitForLeadership()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#waitforleadership","content":" Returns a Promise which resolves when the RxDatabase becomes elected leader. ","version":"Next","tagName":"h3"},{"title":"requestIdlePromise()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#requestidlepromise","content":" Returns a promise which resolves when the database is in idle. This works similar to requestIdleCallback but tracks the idle-ness of the database instead of the CPU. Use this for semi-important tasks like cleanups which should not affect the speed of important tasks. 
myDatabase.requestIdlePromise().then(() => { // this will run at the moment the database has nothing else to do myCollection.customCleanupFunction(); }); // with timeout myDatabase.requestIdlePromise(1000 /* time in ms */).then(() => { // this will run at the moment the database has nothing else to do // or the timeout has passed myCollection.customCleanupFunction(); }); ","version":"Next","tagName":"h3"},{"title":"close()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#close","content":" Closes the databases object-instance. This is to free up memory and stop all observers and replications. Returns a Promise that resolves when the database is closed. Closing a database will not remove the databases data. When you create the database again with createRxDatabase(), all data will still be there. await myDatabase.close(); ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#remove","content":" Wipes all documents from the storage. Use this to free up disc space. await myDatabase.remove(); // database instance is now gone // You can also clear a database without removing its instance import { removeRxDatabase } from 'rxdb'; removeRxDatabase('mydatabasename', 'localstorage'); ","version":"Next","tagName":"h3"},{"title":"isRxDatabase","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#isrxdatabase","content":" Returns true if the given object is an instance of RxDatabase. Returns false if not. import { isRxDatabase } from 'rxdb'; const is = isRxDatabase(myObj); ","version":"Next","tagName":"h3"},{"title":"RxDocument","type":0,"sectionRef":"#","url":"/rx-document.html","content":"","keywords":"","version":"Next"},{"title":"insert","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#insert","content":" To insert a document into a collection, you have to call the collection's .insert()-function. await myCollection.insert({ name: 'foo', lastname: 'bar' }); ","version":"Next","tagName":"h2"},{"title":"find","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#find","content":" To find documents in a collection, you have to call the collection's .find()-function. See RxQuery. const docs = await myCollection.find().exec(); // <- find all documents ","version":"Next","tagName":"h2"},{"title":"Functions","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#functions","content":" ","version":"Next","tagName":"h2"},{"title":"get()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#get","content":" This will get a single field of the document. If the field is encrypted, it will be automatically decrypted before returning. const name = myDocument.get('name'); // returns the name // OR const name = myDocument.name; ","version":"Next","tagName":"h3"},{"title":"get$()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#get-1","content":" This function returns an observable of the given paths-value. The current value of this path will be emitted each time the document changes. 
// get the live-updating value of 'name' var isName; myDocument.get$('name') .subscribe(newName => { isName = newName; }); await myDocument.incrementalPatch({name: 'foobar2'}); console.dir(isName); // isName is now 'foobar2' // OR myDocument.name$ .subscribe(newName => { isName = newName; }); ","version":"Next","tagName":"h3"},{"title":"proxy-get","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#proxy-get","content":" All properties of a RxDocument are assigned as getters so you can also directly access values instead of using the get()-function. // Identical to myDocument.get('name'); var name = myDocument.name; // Can also get nested values. var nestedValue = myDocument.whatever.nestedfield; // Also usable with observables: myDocument.firstName$.subscribe(newName => console.log('name is: ' + newName)); // > 'name is: Stefe' await myDocument.incrementalPatch({firstName: 'Steve'}); // > 'name is: Steve' ","version":"Next","tagName":"h3"},{"title":"update()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#update","content":" Updates the document based on the mongo-update-syntax, based on the mingo library. /** * If not done before, you have to add the update plugin. */ import { addRxPlugin } from 'rxdb'; import { RxDBUpdatePlugin } from 'rxdb/plugins/update'; addRxPlugin(RxDBUpdatePlugin); await myDocument.update({ $inc: { age: 1 // increases age by 1 }, $set: { firstName: 'foobar' // sets firstName to foobar } }); ","version":"Next","tagName":"h3"},{"title":"modify()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#modify","content":" Updates a documents data based on a function that mutates the current data and returns the new value. const changeFunction = (oldData) => { oldData.age = oldData.age + 1; oldData.name = 'foooobarNew'; return oldData; } await myDocument.modify(changeFunction); console.log(myDocument.name); // 'foooobarNew' ","version":"Next","tagName":"h3"},{"title":"patch()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#patch","content":" Overwrites the given attributes over the documents data. await myDocument.patch({ name: 'Steve', age: undefined // setting an attribute to undefined will remove it }); console.log(myDocument.name); // 'Steve' ","version":"Next","tagName":"h3"},{"title":"Prevent conflicts with the incremental methods","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#prevent-conflicts-with-the-incremental-methods","content":" Making a normal change to the non-latest version of a RxDocument will lead to a 409 CONFLICT error because RxDB uses revision checks instead of transactions. To make a change to a document, no matter what the current state is, you can use the incremental methods: // update await myDocument.incrementalUpdate({ $inc: { age: 1 // increases age by 1 } }); // modify await myDocument.incrementalModify(docData => { docData.age = docData.age + 1; return docData; }); // patch await myDocument.incrementalPatch({ age: 100 }); // remove await myDocument.incrementalRemove({ age: 100 }); ","version":"Next","tagName":"h3"},{"title":"getLatest()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#getlatest","content":" Returns the latest known state of the RxDocument. 
const myDocument = await myCollection.findOne('foobar').exec(); const docAfterEdit = await myDocument.incrementalPatch({ age: 10 }); const latestDoc = myDocument.getLatest(); console.log(docAfterEdit === latestDoc); // > true ","version":"Next","tagName":"h3"},{"title":"Observe $","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#observe-","content":" Calling this will return an rxjs-Observable which the current newest state of the RxDocument. // get all changeEvents myDocument.$ .subscribe(currentRxDocument => console.dir(currentRxDocument)); ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#remove","content":" This removes the document from the collection. Notice that this will not purge the document from the store but set _deleted:true so that it will be no longer returned on queries. To fully purge a document, use the cleanup plugin. myDocument.remove(); ","version":"Next","tagName":"h3"},{"title":"deleted$","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#deleted","content":" Emits a boolean value, depending on whether the RxDocument is deleted or not. let lastState = null; myDocument.deleted$.subscribe(state => lastState = state); console.log(lastState); // false await myDocument.remove(); console.log(lastState); // true ","version":"Next","tagName":"h3"},{"title":"get deleted","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#get-deleted","content":" A getter to get the current value of deleted$. console.log(myDocument.deleted); // false await myDocument.remove(); console.log(myDocument.deleted); // true ","version":"Next","tagName":"h3"},{"title":"toJSON()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#tojson","content":" Returns the document's data as plain json object. This will return an immutable object. To get something that can be modified, use toMutableJSON() instead. const json = myDocument.toJSON(); console.dir(json); /* { passportId: 'h1rg9ugdd30o', firstName: 'Carolina', lastName: 'Gibson', age: 33 ... */ You can also set withMetaFields: true to get additional meta fields like the revision, attachments or the deleted flag. const json = myDocument.toJSON(true); console.dir(json); /* { passportId: 'h1rg9ugdd30o', firstName: 'Carolina', lastName: 'Gibson', _deleted: false, _attachments: { ... }, _rev: '1-aklsdjfhaklsdjhf...' */ ","version":"Next","tagName":"h3"},{"title":"toMutableJSON()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#tomutablejson","content":" Same as toJSON() but returns a deep cloned object that can be mutated afterwards. Remember that deep cloning is performance expensive and should only be done when necessary. const json = myDocument.toMutableJSON(); json.firstName = 'Alice'; // The returned document can be mutated All methods of RxDocument are bound to the instance When you get a method from a RxDocument, the method is automatically bound to the documents instance. This means you do not have to use things like myMethod.bind(myDocument) like you would do in jsx. ","version":"Next","tagName":"h3"},{"title":"isRxDocument","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#isrxdocument","content":" Returns true if the given object is an instance of RxDocument. Returns false if not. 
const is = isRxDocument(myObj); ","version":"Next","tagName":"h3"},{"title":"RxCollection","type":0,"sectionRef":"#","url":"/rx-collection.html","content":"","keywords":"","version":"Next"},{"title":"Creating a Collection","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#creating-a-collection","content":" To create one or more collections you need a RxDatabase object which has the .addCollections()-method. Every collection needs a collection name and a valid RxJsonSchema. Other attributes are optional. const myCollections = await myDatabase.addCollections({ // key = collectionName humans: { schema: mySchema, statics: {}, // (optional) ORM-functions for this collection methods: {}, // (optional) ORM-functions for documents attachments: {}, // (optional) ORM-functions for attachments options: {}, // (optional) Custom parameters that might be used in plugins migrationStrategies: {}, // (optional) autoMigrate: true, // (optional) [default=true] cacheReplacementPolicy: function(){}, // (optional) custom cache replacement policy conflictHandler: function(){} // (optional) a custom conflict handler can be used }, // you can create multiple collections at once animals: { // ... } }); ","version":"Next","tagName":"h2"},{"title":"name","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#name","content":" The name uniquely identifies the collection and should be used to refine the collection in the database. Two different collections in the same database can never have the same name. Collection names must match the following regex: ^[a-z][a-z0-9]*$. ","version":"Next","tagName":"h3"},{"title":"schema","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#schema","content":" The schema defines how the documents of the collection are structured. RxDB uses a schema format, similar to JSON schema. Read more about the RxDB schema format here. ","version":"Next","tagName":"h3"},{"title":"ORM-functions","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#orm-functions","content":" With the parameters statics, methods and attachments, you can define ORM-functions that are applied to each of these objects that belong to this collection. See ORM/DRM. ","version":"Next","tagName":"h3"},{"title":"Migration","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#migration","content":" With the parameters migrationStrategies and autoMigrate you can specify how migration between different schema-versions should be done. See Migration. ","version":"Next","tagName":"h3"},{"title":"Get a collection from the database","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#get-a-collection-from-the-database","content":" To get an existing collection from the database, call the collection name directly on the database: // newly created collection const collections = await db.addCollections({ heroes: { schema: mySchema } }); const collection2 = db.heroes; console.log(collections.heroes === collection2); //> true ","version":"Next","tagName":"h2"},{"title":"Functions","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#functions","content":" ","version":"Next","tagName":"h2"},{"title":"Observe $","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#observe-","content":" Calling this will return an rxjs-Observable which streams every change to data of this collection. 
myCollection.$.subscribe(changeEvent => console.dir(changeEvent)); // you can also observe single event-types with insert$ update$ remove$ myCollection.insert$.subscribe(changeEvent => console.dir(changeEvent)); myCollection.update$.subscribe(changeEvent => console.dir(changeEvent)); myCollection.remove$.subscribe(changeEvent => console.dir(changeEvent)); ","version":"Next","tagName":"h3"},{"title":"insert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#insert","content":" Use this to insert new documents into the database. The collection will validate the schema and automatically encrypt any encrypted fields. Returns the new RxDocument. const doc = await myCollection.insert({ name: 'foo', lastname: 'bar' }); ","version":"Next","tagName":"h3"},{"title":"insertIfNotExists()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#insertifnotexists","content":" The insertIfNotExists() method attempts to insert a new document into the collection only if a document with the same primary key does not already exist. This is useful for ensuring uniqueness without having to manually check for existing records before inserting or handling conflicts. Returns either the newly added RxDocument or the previous existing document. const doc = await myCollection.insertIfNotExists({ name: 'foo', lastname: 'bar' }); ","version":"Next","tagName":"h3"},{"title":"bulkInsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#bulkinsert","content":" When you have to insert many documents at once, use bulk insert. This is much faster than calling .insert() multiple times. Returns an object with a success- and error-array. const result = await myCollection.bulkInsert([{ name: 'foo1', lastname: 'bar1' }, { name: 'foo2', lastname: 'bar2' }]); // > { // success: [RxDocument, RxDocument], // error: [] // } note bulkInsert will not fail on update conflicts and you cannot expect that on failure the other documents are not inserted. Also the call to bulkInsert() it will not throw if a single document errors because of validation errors. Instead it will return the error in the .error property of the returned object. ","version":"Next","tagName":"h3"},{"title":"bulkRemove()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#bulkremove","content":" When you want to remove many documents at once, use bulk remove. Returns an object with a success- and error-array. const result = await myCollection.bulkRemove([ 'primary1', 'primary2' ]); // > { // success: [RxDocument, RxDocument], // error: [] // } Instead of providing the document ids, you can also use the RxDocument instances. This can have better performance if your code knows them already at the moment of removing them: const result = await myCollection.bulkRemove([ myRxDocument1, myRxDocument2, /* ... */ ]); ","version":"Next","tagName":"h3"},{"title":"upsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#upsert","content":" Inserts the document if it does not exist within the collection, otherwise it will overwrite it. Returns the new or overwritten RxDocument. const doc = await myCollection.upsert({ name: 'foo', lastname: 'bar2' }); ","version":"Next","tagName":"h3"},{"title":"bulkUpsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#bulkupsert","content":" Same as upsert() but runs over multiple documents. Improves performance compared to running many upsert() calls. Returns an error and a success array. 
const docs = await myCollection.bulkUpsert([ { name: 'foo', lastname: 'bar2' }, { name: 'bar', lastname: 'foo2' } ]); /** * { * success: [RxDocument, RxDocument] * error: [], * } */ ","version":"Next","tagName":"h3"},{"title":"incrementalUpsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#incrementalupsert","content":" When you run many upsert operations on the same RxDocument in a very short timespan, you might get a 409 Conflict error. This means that you tried to run a .upsert() on the document, while the previous upsert operation was still running. To prevent these types of errors, you can run incremental upsert operations. The behavior is similar to RxDocument.incrementalModify. const docData = { name: 'Bob', // primary lastName: 'Kelso' }; myCollection.upsert(docData); myCollection.upsert(docData); // -> throws because of parallel update to the same document myCollection.incrementalUpsert(docData); myCollection.incrementalUpsert(docData); myCollection.incrementalUpsert(docData); // wait until last upsert finished await myCollection.incrementalUpsert(docData); // -> works ","version":"Next","tagName":"h3"},{"title":"find()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#find","content":" To find documents in your collection, use this method. See RxQuery.find(). // find all that are older than 18 const olderDocuments = await myCollection .find() .where('age') .gt(18) .exec(); // execute ","version":"Next","tagName":"h3"},{"title":"findOne()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#findone","content":" This does basically what find() does, but it returns only a single document. You can pass a primary value to find a single document more easily. To find documents in your collection, use this method. See RxQuery.find(). // get document with name:foobar myCollection.findOne({ selector: { name: 'foo' } }).exec().then(doc => console.dir(doc)); // get document by primary, functionally identical to above query myCollection.findOne('foo') .exec().then(doc => console.dir(doc)); ","version":"Next","tagName":"h3"},{"title":"findByIds()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#findbyids","content":" Find many documents by their id (primary value). This has a way better performance than running multiple findOne() or a find() with a big $or selector. Returns a Map where the primary key of the document is mapped to the document. Documents that do not exist or are deleted, will not be inside of the returned Map. const ids = [ 'alice', 'bob', /* ... */ ]; const docsMap = await myCollection.findByIds(ids); console.dir(docsMap); // Map(2) note The Map returned by findByIds is not guaranteed to return elements in the same order as the list of ids passed to it. ","version":"Next","tagName":"h3"},{"title":"exportJSON()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#exportjson","content":" Use this function to create a json export from every document in the collection. Before exportJSON() and importJSON() can be used, you have to add the json-dump plugin. import { addRxPlugin } from 'rxdb'; import { RxDBJsonDumpPlugin } from 'rxdb/plugins/json-dump'; addRxPlugin(RxDBJsonDumpPlugin); myCollection.exportJSON() .then(json => console.dir(json)); ","version":"Next","tagName":"h3"},{"title":"importJSON()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#importjson","content":" To import the json dump into your collection, use this function. 
// import the dump to the database myCollection.importJSON(json) .then(() => console.log('done')); Note that importing will fire events for each inserted document. ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#remove","content":" Removes all known data of the collection and its previous versions. This removes the documents, the schemas, and older schemaVersions. await myCollection.remove(); // collection is now removed and can be re-created ","version":"Next","tagName":"h3"},{"title":"close()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#close","content":" Removes the collection's object instance from the RxDatabase. This is to free up memory and stop all observers and replications. It will not delete the collections data. When you create the collection again with database.addCollections(), the newly added collection will still have all data. await myCollection.close(); ","version":"Next","tagName":"h3"},{"title":"onClose / onRemove()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#onclose--onremove","content":" With these you can add a function that is run when the collection was closed or removed. This works even across multiple browser tabs so you can detect when another tab removes the collection and you application can behave accordingly. await myCollection.onClose(() => console.log('I am closed')); await myCollection.onRemove(() => console.log('I am removed')); ","version":"Next","tagName":"h3"},{"title":"isRxCollection","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#isrxcollection","content":" Returns true if the given object is an instance of RxCollection. Returns false if not. const is = isRxCollection(myObj); ","version":"Next","tagName":"h3"},{"title":"FAQ","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#faq","content":" When I reload the browser window, will my collections still be in the database? No, the javascript instance of the collections will not automatically load into the database on page reloads. You have to call the addCollections() method each time you create your database. This will create the JavaScript object instance of the RxCollection so that you can use it in the RxDatabase. The persisted data will be automatically in your RxCollection each time you create it. How to remove the limit of 16 collections? In the open-source version of RxDB, the amount of RxCollections that can exist in parallel is limited to 16. To remove this limit, you can purchase the Premium Plugins and call the setPremiumFlag() function before creating a database: import { setPremiumFlag } from 'rxdb-premium/plugins/shared'; setPremiumFlag(); ","version":"Next","tagName":"h2"},{"title":"Local Documents","type":0,"sectionRef":"#","url":"/rx-local-document.html","content":"","keywords":"","version":"Next"},{"title":"Add the local documents plugin","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#add-the-local-documents-plugin","content":" To enable the local documents, you have to add the local-documents plugin. 
import { addRxPlugin } from 'rxdb'; import { RxDBLocalDocumentsPlugin } from 'rxdb/plugins/local-documents'; addRxPlugin(RxDBLocalDocumentsPlugin); ","version":"Next","tagName":"h2"},{"title":"Activate the plugin for a RxDatabase or RxCollection","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#activate-the-plugin-for-a-rxdatabase-or-rxcollection","content":" For better performance, the local document plugin does not create a storage for every database or collection that is created. Instead you have to set localDocuments: true when you want to store local documents in the instance. // activate local documents on a RxDatabase const myDatabase = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageDexie(), localDocuments: true // <- activate this to store local documents in the database }); myDatabase.addCollections({ messages: { schema: messageSchema, localDocuments: true // <- activate this to store local documents in the collection } }); note If you want to store local documents in a RxCollection but NOT in the RxDatabase, you MUST NOT set localDocuments: true in the RxDatabase because it will only slow down the initial database creation. ","version":"Next","tagName":"h2"},{"title":"insertLocal()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#insertlocal","content":" Creates a local document for the database or collection. Throws if a local document with the same id already exists. Returns a Promise which resolves to the new RxLocalDocument. const localDoc = await myCollection.insertLocal( 'foobar', // id { // data foo: 'bar' } ); // you can also use local-documents on a database const localDoc = await myDatabase.insertLocal( 'foobar', // id { // data foo: 'bar' } ); ","version":"Next","tagName":"h2"},{"title":"upsertLocal()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#upsertlocal","content":" Creates a local document for the database or collection if it does not exist. Overwrites it if it exists. Returns a Promise which resolves to the RxLocalDocument. const localDoc = await myCollection.upsertLocal( 'foobar', // id { // data foo: 'bar' } ); ","version":"Next","tagName":"h2"},{"title":"getLocal()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#getlocal","content":" Find a RxLocalDocument by its id. Returns a Promise which resolves to the RxLocalDocument, or null if it does not exist. const localDoc = await myCollection.getLocal('foobar'); ","version":"Next","tagName":"h2"},{"title":"getLocal$()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#getlocal-1","content":" Like getLocal() but returns an Observable that emits the document, or null if it does not exist. const subscription = myCollection.getLocal$('foobar').subscribe(documentOrNull => { console.dir(documentOrNull); // > RxLocalDocument or null }); ","version":"Next","tagName":"h2"},{"title":"RxLocalDocument","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#rxlocaldocument","content":" A RxLocalDocument behaves like a normal RxDocument. const localDoc = await myCollection.getLocal('foobar'); // access data const foo = localDoc.get('foo'); // change data localDoc.set('foo', 'bar2'); await localDoc.save(); // observe data localDoc.get$('foo').subscribe(value => { /* .. */ }); // remove it await localDoc.remove(); note Because the local document does not have a schema, accessing the document's data-fields via the pseudo-proxy will not work. const foo = localDoc.foo; // undefined const foo = localDoc.get('foo'); // works! 
localDoc.foo = 'bar'; // does not work! localDoc.set('foo', 'bar'); // works For usage with TypeScript, you can access the typed data of the document via toJSON(): declare type MyLocalDocumentType = { foo: string } const localDoc = await myCollection.upsertLocal<MyLocalDocumentType>( 'foobar', // id { // data foo: 'bar' } ); // typescript will know that foo is a string const foo: string = localDoc.toJSON().foo; ","version":"Next","tagName":"h2"},{"title":"Scaling the RxServer","type":0,"sectionRef":"#","url":"/rx-server-scaling.html","content":"","keywords":"","version":"Next"},{"title":"Vertical Scaling","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#vertical-scaling","content":" Vertical scaling, aka "scaling up", has the goal of getting more power out of a single server by utilizing more of the server's compute. Vertical scaling should be the first step when you decide it is time to scale. ","version":"Next","tagName":"h2"},{"title":"Run multiple JavaScript processes","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#run-multiple-javascript-processes","content":" To utilize more compute power of your server, the first step is to scale vertically by running the RxDB server on multiple processes in parallel. RxDB itself is already built to support multiInstance usage on the client, like when the user has opened multiple browser tabs at once. The same method also works on the server side in Node.js. You can spawn multiple JavaScript processes that use the same RxDatabase and the instances will automatically communicate with each other and distribute their data and events with the BroadcastChannel. By default the multiInstance param is set to true when calling createRxDatabase(), so you do not have to change anything. To make all processes accessible through the same endpoint, you can put a load-balancer like nginx in front of them.
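A minimal sketch of what each of these processes could look like (the storage choice, the file path and the process/webserver setup are assumptions for illustration, not a prescribed setup): // server-process.js - start this file multiple times, for example with a process manager
import { createRxDatabase } from 'rxdb';
import { getRxStorageFilesystemNode } from 'rxdb-premium/plugins/storage-filesystem-node';

const db = await createRxDatabase({
    name: 'serverdb',
    // every process points at the same storage location
    storage: getRxStorageFilesystemNode({ basePath: './server-db-folder' }),
    // default is true: data and events are shared between the processes
    multiInstance: true
});
// ...add collections and start the webserver of this process here,
// then let a load balancer (like nginx) distribute the incoming requests across the processes.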
 ","version":"Next","tagName":"h3"},{"title":"Using workers to split up the load","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#using-workers-to-split-up-the-load","content":" Another way to increase the server capacity is to put the storage into a Worker thread so that the "main" thread with the webserver can handle more requests. This might be easier to set up compared to using multiple JavaScript processes and a load balancer. ","version":"Next","tagName":"h3"},{"title":"Use an in-memory storage at the user facing level","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#use-an-in-memory-storage-at-the-user-facing-level","content":" Another way to serve more requests to your end users is to use an in-memory storage, which has the best read and write performance. It outperforms persistent storages by a factor of 10x. So instead of directly serving requests from the persistence layer, you add an in-memory layer on top of that. You can either run a replication from your memory database to the persistent one, or use the memory-mapped storage which has this built in. 
import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; import { replicateRxCollection } from 'rxdb/plugins/replication'; import { getRxStorageFilesystemNode } from 'rxdb-premium/plugins/storage-filesystem-node'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; const myRxDatabase = await createRxDatabase({ name: 'mydb', storage: getMemoryMappedRxStorage({ storage: getRxStorageFilesystemNode({ basePath: path.join(__dirname, 'my-database-folder') }) }) }); await myRxDatabase.addCollections({/* ... */}); const myServer = await startRxServer({ database: myRxDatabase, port: 443 }); But notice that you have to check your persistence requirements. When a write happens to the memory layer and the server crashes while it has not persisted, in rare cases the write operation might get lost. You can remove that risk by setting awaitWritePersistence: true on the memory mapped storage settings. ","version":"Next","tagName":"h3"},{"title":"Horizontal Scaling","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#horizontal-scaling","content":" To scale the RxDB Server beyond a single physical hardware unit, there are different solutions; which one to pick depends on the exact use case. ","version":"Next","tagName":"h2"},{"title":"Single Datastore with multiple branches","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#single-datastore-with-multiple-branches","content":" The most common way to use multiple servers with RxDB is to split up the server into a tree with a root "datastore" and multiple "branches". The datastore contains the persisted data and only serves as a replication endpoint for the branches. The branches themselves replicate data to and from the datastore and serve requests to the end users. This is mostly useful for read-heavy applications because reads will run directly on the branches without ever reaching the main datastore, and you can always add more branches to scale up. Even adding additional layers of "datastores" is possible so the tree can grow (or shrink) with the demand. ","version":"Next","tagName":"h3"},{"title":"Moving the branches to \"the edge\"","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#moving-the-branches-to-the-edge","content":" Instead of running the "branches" of the tree in the same physical location as the datastore, it often makes sense to move the branches into a datacenter near the end users. Because the RxDB replication algorithm is made to work with slow and even partially offline users, using it for physically separated servers works the same way. Latency is not that important because writes and reads will not decrease performance by blocking each other, and the replication can run in the background without blocking other servers during transactions. ","version":"Next","tagName":"h3"},{"title":"Replicate Databases for Microservices","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#replicate-databases-for-microservices","content":" If your application is built with a microservice architecture and your microservices are also built in Node.js, you can scale the database horizontally by moving the database into the microservices and using the RxDB replication to do a realtime sync between the microservices and a main "datastore" server. The "datastore" server would then only handle the replication requests or do some additional things like logging or backups. The compute for reads and writes will then mainly be done on the microservices themselves. This simplifies setting up more and more microservices without decreasing the performance of the whole system.
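As a rough sketch of such a setup (the endpoint URLs, the response shape and the 'orders' collection are assumptions for illustration, not prescribed by RxDB), each microservice can use the replication plugin to keep its local collection in sync with the central datastore server: import { replicateRxCollection } from 'rxdb/plugins/replication';

// inside one microservice: keep the local 'orders' collection in sync with the datastore
const replicationState = await replicateRxCollection({
    collection: microserviceDatabase.orders, // hypothetical local collection
    replicationIdentifier: 'orders-to-datastore',
    live: true,
    retryTime: 5000,
    pull: {
        // fetch documents that changed on the datastore since the last checkpoint
        handler: async (lastCheckpoint, batchSize) => {
            const response = await fetch(
                'https://datastore.internal/pull' + // hypothetical endpoint
                '?checkpoint=' + encodeURIComponent(JSON.stringify(lastCheckpoint || {})) +
                '&limit=' + batchSize
            );
            const { documents, checkpoint } = await response.json();
            return { documents, checkpoint };
        }
    },
    push: {
        // send local writes to the datastore; it responds with the conflicting documents (if any)
        handler: async (changeRows) => {
            const response = await fetch('https://datastore.internal/push', { // hypothetical endpoint
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify(changeRows)
            });
            return await response.json();
        }
    }
});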
 ","version":"Next","tagName":"h3"},{"title":"Use a self-scaling RxStorage","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#use-a-self-scaling-rxstorage","content":" As an alternative to scaling up the RxDB servers themselves, you can also switch to an RxStorage that scales internally. For example, the FoundationDB storage or MongoDB can work on top of a cluster that handles increased load by adding more servers to itself. With that you can always add more Node.js RxDB processes that connect to the same cluster and serve requests from it. ","version":"Next","tagName":"h3"},{"title":"RxPipeline (beta)","type":0,"sectionRef":"#","url":"/rx-pipeline.html","content":"","keywords":"","version":"Next"},{"title":"Creating a RxPipeline","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#creating-a-rxpipeline","content":" Pipelines are created on top of a source RxCollection and have another RxCollection as destination. An identifier is used to identify the state of the pipeline so that different pipelines have a different processing checkpoint state. A plain JavaScript handler function is used to process the data of the source collection writes. const pipeline = await mySourceCollection.addPipeline({ identifier: 'my-pipeline', destination: myDestinationCollection, handler: async (docs) => { /** * Here you can process the documents and do writes to * the destination collection. */ for (const doc of docs) { await myDestinationCollection.insert({ id: doc.primary, category: doc.category }); } } }); beta The pipeline plugin is in beta mode and the API might be changed without a major RxDB release. ","version":"Next","tagName":"h2"},{"title":"Pipeline handlers must be idempotent","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#pipeline-handlers-must-be-idempotent","content":" Because a JavaScript process can exit at any time, like when the user closes a browser tab, the pipeline handler function must be idempotent. This means that when it only runs partially and is started again with the same input, it must still end up with the correct results. ","version":"Next","tagName":"h2"},{"title":"Pipeline handlers must not throw","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#pipeline-handlers-must-not-throw","content":" Pipeline handlers must never throw. If you run operations inside of the handler that might cause errors, you must wrap the handler's code in a try/catch yourself and also handle retries (a sketch of such a handler is shown further below). If your handler throws, your pipeline will be stuck and no longer be usable, which should never happen. ","version":"Next","tagName":"h2"},{"title":"Be careful when doing http requests in the handler","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#be-careful-when-doing-http-requests-in-the-handler","content":" When you run http requests inside of your handler, you no longer have an offline-first application because reads to the destination collection will be blocked until all handlers have finished. When your client is offline, the collection is therefore blocked for reads and writes.
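A minimal sketch of such a defensive handler that catches errors and retries instead of throwing (the retry count and delay are arbitrary choices for illustration, not an RxDB API): const robustPipeline = await mySourceCollection.addPipeline({
    identifier: 'my-robust-pipeline',
    destination: myDestinationCollection,
    handler: async (docs) => {
        for (const doc of docs) {
            // retry a few times, but never let an error escape the handler
            for (let attempt = 0; attempt < 5; attempt++) {
                try {
                    // any step in here might fail (validation errors, storage errors, ...)
                    await myDestinationCollection.upsert({
                        id: doc.primary,
                        category: doc.category
                    });
                    break; // success, continue with the next document
                } catch (err) {
                    console.error('pipeline handler attempt ' + attempt + ' failed', err);
                    // simple backoff before the next attempt
                    await new Promise(res => setTimeout(res, 500 * (attempt + 1)));
                }
            }
        }
    }
});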
","version":"Next","tagName":"h2"},{"title":"Use Cases for RxPipeline","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#use-cases-for-rxpipeline","content":" The RxPipeline is a handy building block for different features and plugins. You can use it to aggregate data or restructure local data. ","version":"Next","tagName":"h2"},{"title":"UseCase: Re-Index data that comes from replication","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#usecase-re-index-data-that-comes-from-replication","content":" Sometimes you want to replicate atomic documents over the wire but locally you want to split these documents for better indexing. For example you replicate email documents that have multiple receivers in a string-array. While string-arrays cannot be indexes, locally you need a way to query for all emails of a given receiver. To handle this case you can set up a RxPipeline that writes the mapping into a separate collection: const pipeline = await emailCollection.addPipeline({ identifier: 'map-email-receivers', destination: emailByReceiverCollection, handler: async (docs) => { for (const doc of docs) { // remove previous mapping await emailByReceiverCollection.find({emailId: doc.primary}).remove(); // add new mapping if(!doc.deleted) { await emailByReceiverCollection.bulkInsert( doc.receivers.map(receiver => ({ emailId: doc.primary, receiver: receiver })) ); } } } }); With this you can efficiently query for "all emails that a person received" by running: const mailIds = await emailByReceiverCollection.find({receiver: '[email protected]'}).exec(); ","version":"Next","tagName":"h3"},{"title":"UseCase: Fulltext Search","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#usecase-fulltext-search","content":" You can utilize the pipeline plugin to index text data for efficient fulltext search. const pipeline = await emailCollection.addPipeline({ identifier: 'email-fulltext-search', destination: mailByWordCollection, handler: async (docs) => { for (const doc of docs) { // remove previous mapping await mailByWordCollection.find({emailId: doc.primary}).remove(); // add new mapping if(!doc.deleted) { const words = doc.text.split(' '); await mailByWordCollection.bulkInsert( words.map(word => ({ emailId: doc.primary, word: word })) ); } } } }); With this you can efficiently query for "all emails that contain a given word" by running: const mailIds = await emailByReceiverCollection.find({word: 'foobar'}).exec(); ","version":"Next","tagName":"h3"},{"title":"UseCase: Download data based on source documents","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#usecase-download-data-based-on-source-documents","content":" When you have to fetch data for each document of a collection from a server, you can use the pipeline to ensure all documents have their data downloaded and no document is missed out. 
const pipeline = await emailCollection.addPipeline({ identifier: 'download-data', destination: serverDataCollection, handler: async (docs) => { for (const doc of docs) { const response = await fetch('https://example.com/doc/' + doc.primary); const serverData = await response.json(); await serverDataCollection.upsert({ id: doc.primary, data: serverData }); } } }); ","version":"Next","tagName":"h3"},{"title":"RxPipeline method","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#rxpipeline-method","content":" ","version":"Next","tagName":"h2"},{"title":"awaitIdle()","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#awaitidle","content":" You can await the idleness of a pipeline with await myRxPipeline.awaitIdle(). This will await a promise that resolved when the pipeline has processed all documents and is not running anymore. ","version":"Next","tagName":"h3"},{"title":"close()","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#close","content":" await myRxPipeline.close() stops the pipeline so that is no longer doing stuff. This is automatically called when the RxCollection or RxDatabase of the pipeline is closed. ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxPipeline (beta)","url":"/rx-pipeline.html#remove","content":" await myRxPipeline.remove() removes the pipeline and all metadata which it has stored. Recreating the pipeline afterwards will start processing all source document from scratch. ","version":"Next","tagName":"h3"},{"title":"RxSchema","type":0,"sectionRef":"#","url":"/rx-schema.html","content":"","keywords":"","version":"Next"},{"title":"Example","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#example","content":" In this example-schema we define a hero-collection with the following settings: the version-number of the schema is 0the name-property is the primaryKey. This means its a unique, indexed, required string which can be used to definitely find a single document.the color-field is required for every documentthe healthpoints-field must be a number between 0 and 100the secret-field stores an encrypted valuethe birthyear-field is final which means it is required and cannot be changedthe skills-attribute must be an array with objects which contain the name and the damage-attribute. 
There is a maximum of 5 skills per hero.Allows adding attachments and store them encrypted { "title": "hero schema", "version": 0, "description": "describes a simple hero", "primaryKey": "name", "type": "object", "properties": { "name": { "type": "string", "maxLength": 100 // <- the primary key must have set maxLength }, "color": { "type": "string" }, "healthpoints": { "type": "number", "minimum": 0, "maximum": 100 }, "secret": { "type": "string" }, "birthyear": { "type": "number", "final": true, "minimum": 1900, "maximum": 2050 }, "skills": { "type": "array", "maxItems": 5, "uniqueItems": true, "items": { "type": "object", "properties": { "name": { "type": "string" }, "damage": { "type": "number" } } } } }, "required": [ "name", "color" ], "encrypted": ["secret"], "attachments": { "encrypted": true } } ","version":"Next","tagName":"h2"},{"title":"Create a collection with the schema","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#create-a-collection-with-the-schema","content":" await myDatabase.addCollections({ heroes: { schema: myHeroSchema } }); console.dir(myDatabase.heroes.name); // heroes ","version":"Next","tagName":"h2"},{"title":"version","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#version","content":" The version field is a number, starting with 0. When the version is greater than 0, you have to provide the migrationStrategies to create a collection with this schema. ","version":"Next","tagName":"h2"},{"title":"primaryKey","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#primarykey","content":" The primaryKey field contains the fieldname of the property that will be used as primary key for the whole collection. The value of the primary key of the document must be a string, unique, final and is required. ","version":"Next","tagName":"h2"},{"title":"composite primary key","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#composite-primary-key","content":" You can define a composite primary key which gets composed from multiple properties of the document data. const mySchema = { keyCompression: true, // set this to true, to enable the keyCompression version: 0, title: 'human schema with composite primary', primaryKey: { // where should the composed string be stored key: 'id', // fields that will be used to create the composed key fields: [ 'firstName', 'lastName' ], // separator which is used to concat the fields values. separator: '|' }, type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' } }, required: [ 'id', 'firstName', 'lastName' ] }; You can then find a document by using the relevant parts to create the composite primaryKey: // inserting with composite primary await myRxCollection.insert({ // id, <- do not set the id, it will be filled by RxDB firstName: 'foo', lastName: 'bar' }); // find by composite primary const id = myRxCollection.schema.getPrimaryOfDocumentData({ firstName: 'foo', lastName: 'bar' }); const myRxDocument = myRxCollection.findOne(id).exec(); ","version":"Next","tagName":"h3"},{"title":"Indexes","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#indexes","content":" RxDB supports secondary indexes which are defined at the schema-level of the collection. Index is only allowed on field types string, integer and number. Some RxStorages allow to use boolean fields as index. Depending on the field type, you must have set some meta attributes like maxLength or minimum. 
This is required so that RxDB is able to know the maximum string representation length of a field, which is needed to craft custom indexes on several RxStorage implementations. note RxDB will always append the primaryKey to all indexes to ensure a deterministic sort order of query results. You do not have to add the primaryKey to any index. ","version":"Next","tagName":"h2"},{"title":"Index-example","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#index-example","content":" const schemaWithIndexes = { version: 0, title: 'human schema with indexes', keyCompression: true, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string', maxLength: 100 // <- string-fields that are used as an index, must have set maxLength. }, lastName: { type: 'string' }, active: { type: 'boolean' }, familyName: { type: 'string' }, balance: { type: 'number', // number fields that are used in an index, must have set minimum, maximum and multipleOf minimum: 0, maximum: 100000, multipleOf: 0.01 }, creditCards: { type: 'array', items: { type: 'object', properties: { cvc: { type: 'number' } } } } }, required: [ 'id', 'active' // <- boolean fields that are used in an index, must be required. ], indexes: [ 'firstName', // <- this will create a simple index for the `firstName` field ['active', 'firstName'], // <- this will create a compound-index for these two fields 'active' ] }; internalIndexes When you use RxDB on the server-side, you might want to use internalIndexes to speed up internal queries. Read more ","version":"Next","tagName":"h3"},{"title":"attachments","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#attachments","content":" To use attachments in the collection, you have to add the attachments-attribute to the schema. See RxAttachment. ","version":"Next","tagName":"h2"},{"title":"default","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#default","content":" Default values can only be defined for first-level fields. Whenever you insert a document unset fields will be filled with default-values. const schemaWithDefaultAge = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer', default: 20 // <- default will be used } }, required: ['id'] }; ","version":"Next","tagName":"h2"},{"title":"final","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#final","content":" By setting a field to final, you make sure it cannot be modified later. Final fields are always required. Final fields cannot be observed because they will not change. Advantages: With final fields you can ensure that no-one accidentally modifies the data.When you enable the eventReduce algorithm, some performance-improvements are done. const schemaWithFinalAge = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer', final: true } }, required: ['id'] }; Not everything within the jsonschema-spec is allowed The schema is not only used to validate objects before they are written into the database, but also used to map getters to observe and populate single fieldnames, keycompression and other things. 
Therefore you can not use every schema which would be valid for the spec of json-schema.org. For example, fieldnames must match the regex ^[a-zA-Z][[a-zA-Z0-9_]*]?[a-zA-Z0-9]$ and additionalProperties is always set to false. But don't worry, RxDB will instantly throw an error when you pass an invalid schema into it. ","version":"Next","tagName":"h2"},{"title":"FAQ","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#faq","content":" How can I store a Date? With RxDB you can only store plain JSON data inside of a document. You cannot store a JavaScript new Date() instance directly. This is for performance reasons and because Date() is a mutable thing where changing it at any time might cause strange problem that are hard to debug. To store a date in RxDB, you have to define a string field with a format attribute: { "type": "string", "format": "date-time" } When storing the data you have to first transform your Date object into a string Date.toISOString(). Because the date-time is sortable, you can do whatever query operations on that field and even use it as an index. How to store schemaless data? By design, RxDB requires that every collection has a schema. This means you cannot create a truly "schema-less" collection where top-level fields are unknown at schema creation time. RxDB must know about all fields of a document at the top level to perform validation, index creation, and other internal optimizations. However, there is a way to store data of arbitrary structure at sub-fields. To do this, define a property with type: "object" in your schema. For example: { "version": 0, "primaryKey": "id", "type": "object", "properties": { "id": { "type": "string", "maxLength": 100 }, "myDynamicData": { "type": "object" // Here you can store any JSON data // because it's an open object. } }, "required": ["id"] } Why does RxDB automatically set additionalProperties: false at the top level RxDB automatically sets additionalProperties: false at the top level of a schema to ensure that all top-level fields are known in advance. This design choice offers several benefits: Prevents collisions with RxDocument class properties: RxDB documents have built-in class methods (e.g., .toJSON, .save) at the top level. By forbidding unknown top-level properties, we avoid accidental naming collisions with these built-in methods. Avoids conflicts with user-defined ORM functions: Developers can add custom ORM methods to RxDocuments. If top-level properties were unbounded, a property name could accidentally conflict with a method name, leading to unexpected behavior. Improves TypeScript typings: If RxDB didn't know about all top-level fields, the document type would effectively become any. That means a simple typo like myDocument.toJOSN() would only be caught at runtime, not at build time. By disallowing unknown properties, TypeScript can provide strict typing and catch errors sooner. Can't change the schema of a collection When you make changes to the schema of a collection, you sometimes can get an error likeError: addCollections(): another instance created this collection with a different schema. This means you have created a collection before and added document-data to it. When you now just change the schema, it is likely that the new schema does not match the saved documents inside of the collection. This would cause strange bugs and would be hard to debug, so RxDB check's if your schema has changed and throws an error. 
To change the schema in production-mode, do the following steps: Increase the version by 1. Add the appropriate migrationStrategies so the saved data will be modified to match the new schema. In development-mode, the schema-change can be simplified by one of these strategies: Use the memory-storage so your db resets on restart and your schema is not saved permanently. Call removeRxDatabase('mydatabasename', RxStorage); before creating a new RxDatabase-instance. Add a timestamp as suffix to the database-name to create a new one on each run, like name: 'heroesDB' + new Date().getTime() ","version":"Next","tagName":"h2"},{"title":"RxState","type":0,"sectionRef":"#","url":"/rx-state.html","content":"","keywords":"","version":"Next"},{"title":"Creating a RxState","type":1,"pageTitle":"RxState","url":"/rx-state.html#creating-a-rxstate","content":" A RxState instance is created on top of a RxDatabase. The state will automatically be persisted with the storage that was used when setting up the RxDatabase. To use it you first have to import the RxDBStatePlugin and add it to RxDB with addRxPlugin(). To create a state call the addState() method on the database instance. Calling addState multiple times will automatically be de-duplicated and only create a single RxState object. import { createRxDatabase, addRxPlugin } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // first add the RxState plugin to RxDB import { RxDBStatePlugin } from 'rxdb/plugins/state'; addRxPlugin(RxDBStatePlugin); const database = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), }); // create a state instance const myState = await database.addState(); // you can also create states with a given namespace const myChildState = await database.addState('myNamespace'); ","version":"Next","tagName":"h2"},{"title":"Writing data and Persistence","type":1,"pageTitle":"RxState","url":"/rx-state.html#writing-data-and-persistence","content":" Writing data to the state happens via a so-called modifier. It is a simple JavaScript function that gets the current value as input and returns the new, modified value. For example, to increase the value of myField by one, you would use a modifier that increases the current value: // initially set value to zero await myState.set('myField', v => 0); // increase value by one await myState.set('myField', v => v + 1); // update value to be 42 await myState.set('myField', v => 42); The modifier is used instead of a direct assignment to ensure correct behavior when other JavaScript realms write to the state at the same time, like other browser tabs or webworkers. On conflicts, the modifier will just be run again to ensure deterministic and correct behavior. Because of this, mutation is async; you have to await the call to the set function when you care about the moment when the change actually happened. ","version":"Next","tagName":"h2"},{"title":"Get State Data","type":1,"pageTitle":"RxState","url":"/rx-state.html#get-state-data","content":" The state stored inside of a RxState instance can be seen as a big single JSON object that contains all data. You can fetch the whole object or partially get single properties or nested ones. Fetching data can either happen with the .get() method or by accessing the field directly like myRxState.myField. 
// get root state data const val = myState.get(); // get single property const val = myState.get('myField'); const val = myState.myField; // get nested property const val = myState.get('myField.childfield'); const val = myState.myField.childfield; // get nested array property const val = myState.get('myArrayField[0].foobar'); const val = myState.myArrayField[0].foobar; ","version":"Next","tagName":"h2"},{"title":"Observability","type":1,"pageTitle":"RxState","url":"/rx-state.html#observability","content":" Instead of fetching the state once, you can also observe the state with either rxjs observables or custom reactivity handlers like signals or hooks. Rxjs observables can be created by either using the .get$() method or by accessing the top level property suffixed with a dollar sign like myState.myField$. const observable = myState.get$('myField'); const observable = myState.myField$; // then you can subscribe to that observable observable.subscribe(newValue => { // update the UI }); Subscription works across multiple JavaScript realms like browser tabs or Webworkers. ","version":"Next","tagName":"h2"},{"title":"RxState with signals and hooks","type":1,"pageTitle":"RxState","url":"/rx-state.html#rxstate-with-signals-and-hooks","content":" With the double-dollar sign you can also access custom reactivity instances like signals or hooks. These are easier to use compared to rxjs, depending on which JavaScript framework you are using. For example in angular to use signals, you would first add a reactivity factory to your database and then access the signals of the RxState: import { RxReactivityFactory, createRxDatabase } from 'rxdb/plugins/core'; import { toSignal } from '@angular/core/rxjs-interop'; const reactivityFactory: RxReactivityFactory<ReactivityType> = { fromObservable(obs, initialValue) { return toSignal(obs, { initialValue }); } }; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: reactivityFactory }); const myState = await database.addState(); const mySignal = myState.get$$('myField'); const mySignal = myState.myField$$; ","version":"Next","tagName":"h2"},{"title":"Cleanup RxState operations","type":1,"pageTitle":"RxState","url":"/rx-state.html#cleanup-rxstate-operations","content":" For faster writes, changes to the state are only written as list of operations to disc. After some time you might have too many operations written which would delay the initial state creation. To automatically merge the state operations into a single operation and clear the old operations, you should add the Cleanup Plugin before creating the RxDatabase: import { addRxPlugin } from 'rxdb'; import { RxDBCleanupPlugin } from 'rxdb/plugins/cleanup'; addRxPlugin(RxDBCleanupPlugin); ","version":"Next","tagName":"h2"},{"title":"Correctness over Performance","type":1,"pageTitle":"RxState","url":"/rx-state.html#correctness-over-performance","content":" RxState is optimized for correctness, not for performance. Compared to other state libraries, RxState directly persists data to storage and ensures write conflicts are handled properly. Other state libraries are handles mainly in-memory and lazily persist to disc without caring about conflicts or multiple browser tabs which can cause problems and hard to reproduce bugs. RxState still uses RxDB which has a range of great performing storages so the write speed is more than sufficient. 
Also, to further improve write performance, you can use multiple RxState instances (each with a different namespace) to split writes across multiple storage instances. Reads happen directly in-memory which makes RxState read performance comparable to other state libraries. ","version":"Next","tagName":"h2"},{"title":"RxState Replication","type":1,"pageTitle":"RxState","url":"/rx-state.html#rxstate-replication","content":" Because the state data is stored inside of an internal RxCollection you can easily use the RxDB Replication to sync data between users or devices of the same user. For example with the P2P WebRTC replication you can start the replication on the collection and automatically sync the RxState operations between users directly: import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const database = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), }); const myState = await database.addState(); const replicationPool = await replicateWebRTC( { collection: myState.collection, topic: 'my-state-replication-pool', connectionHandlerCreator: getConnectionHandlerSimplePeer({}), pull: {}, push: {} } ); ","version":"Next","tagName":"h2"},{"title":"RxDB Database on top of Deno Key Value Store (beta)","type":0,"sectionRef":"#","url":"/rx-storage-denokv.html","content":"","keywords":"","version":"Next"},{"title":"What is DenoKV","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#what-is-denokv","content":" DenoKV is a strongly consistent key-value storage, globally replicated for low-latency reads across 35 worldwide regions via Deno Deploy. When you release your Deno application on Deno Deploy, it will start an instance in each of the 35 worldwide regions. This edge deployment guarantees minimal latency when serving requests to end users' devices around the world. DenoKV is a shared storage which shares its state across all instances. But, because DenoKV is "only" a Key-Value storage, it only supports basic CRUD operations on datasets and indexes. Complex features like queries, encryption, compression or client-server replication are missing. Using RxDB on top of DenoKV fills this gap and makes it easy to build realtime offline-first applications on top of a Deno backend. ","version":"Next","tagName":"h2"},{"title":"Use cases","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#use-cases","content":" Using RxDB-DenoKV instead of plain DenoKV can have a wide range of benefits depending on your use case. Reduce vendor lock-in: RxDB has a swappable storage layer which allows you to swap out the underlying storage of your database. If you ever decide to move away from DenoDeploy or Deno at all, you do not have to refactor your whole application and instead just swap the storage plugin. For example, if you decide to migrate to Node.js, you can use the FoundationDB RxStorage and store your data there. DenoKV is also implemented on top of FoundationDB so you can get similar performance. Alternatively, RxDB supports a wide range of storage plugins you can choose from. Add reactiveness: DenoKV is a plain request-response datastore. While it supports observation of single rows by id, it does not allow observing row-ranges or events. This makes it hard or even impossible to build realtime applications with it because polling would be the only way to watch ranges of key-value pairs. 
With RxDB on top of DenoKV, changes to the database are shared between DenoDeploy instances so when you observe a query you can be sure that it is always up to date, no matter which instance has changed the document. Internally RxDB uses the Deno BroadcastChannel API to share events between instances. Reuse Client and Server Code: When you use RxDB on the server and on the client side, many parts of your code can be reused on both sides which decreases development time significantly. Replicate from DenoKV to a local RxDB state: Instead of running all operations against the global DenoKV, you can run a realtime-replication between a DenoKV-RxDatabase and a locally stored dataset or maybe even an in-memory stored one. This improves query performance and can reduce your Deno Deploy cloud costs because less operations run against the DenoKV, they only locally instead. Replicate with other backends: The RxDB replication protocol is pretty simple and allows you to easily build a replication with any backend architecture. For example if you already have your data stored in a self-hosted MySQL server, you can use RxDB to do a realtime replication of that data into a DenoKV RxDatabase instance. RxDB also has many plugins for replication with backend/protocols like GraphQL, Websocket, CouchDB, WebRTC, Firestore and NATS. ","version":"Next","tagName":"h2"},{"title":"Using the DenoKV RxStorage","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#using-the-denokv-rxstorage","content":" To use the DenoKV RxStorage with RxDB, you import the getRxStorageDenoKV function from the plugin and set it as storage when calling createRxDatabase import { createRxDatabase } from 'rxdb'; import { getRxStorageDenoKV } from 'rxdb/plugins/storage-denokv'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDenoKV({ /** * Consistency level, either 'strong' or 'eventual' * (Optional) default='strong' */ consistencyLevel: 'strong', /** * Path which is used in the first argument of Deno.openKv(settings.openKvPath) * (Optional) default='' */ openKvPath: './foobar', /** * Some operations have to run in batches, * you can test different batch sizes to improve performance. * (Optional) default=100 */ batchSize: number }) }); On top of that RxDatabase you can then create your collections and run operations. Follow the quickstart to learn more about how to use RxDB. ","version":"Next","tagName":"h2"},{"title":"Using non-DenoKV storages in Deno","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#using-non-denokv-storages-in-deno","content":" When you use other storages than the DenoKV storage inside of a Deno app, make sure you set multiInstance: false when creating the database. Also you should only run one process per Deno-Deploy instance. This ensures your events are not mixed up by the BroadcastChannel across instances which would lead to wrong behavior. 
// DenoKV based database const db = await createRxDatabase({ name: 'denokvdatabase', storage: getRxStorageDenoKV(), /** * Use multiInstance: true so that the Deno Broadcast Channel * emits event across DenoDeploy instances * (true is also the default, so you can skip this setting) */ multiInstance: true }); // Non-DenoKV based database const db = await createRxDatabase({ name: 'denokvdatabase', storage: getRxStorageFilesystemNode(), /** * Use multiInstance: false so that it does not share events * across instances because the stored data is anyway not shared * between them. */ multiInstance: false }); ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#limitations","content":" The DenoKV RxStorage is in currently in beta mode. There might be breaking changes without a major RxDB version release. ","version":"Next","tagName":"h2"},{"title":"RxStorage Dexie.js","type":0,"sectionRef":"#","url":"/rx-storage-dexie.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#usage","content":" 1. Import the Dexie Storage import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; 2. Create a Database const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie() }); ","version":"Next","tagName":"h2"},{"title":"Overwrite/Polyfill the native IndexedDB","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#overwritepolyfill-the-native-indexeddb","content":" Node.js has no IndexedDB API. To still run the Dexie RxStorage in Node.js, for example to run unit tests, you have to polyfill it. You can do that by using the fake-indexeddb module and pass it to the getRxStorageDexie() function. import { createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; //> npm install fake-indexeddb --save const fakeIndexedDB = require('fake-indexeddb'); const fakeIDBKeyRange = require('fake-indexeddb/lib/FDBKeyRange'); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie({ indexedDB: fakeIndexedDB, IDBKeyRange: fakeIDBKeyRange }) }); ","version":"Next","tagName":"h2"},{"title":"Using addons","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#using-addons","content":" Dexie.js has its own plugin system with many plugins for encryption, replication or other use cases. With the Dexie.js RxStorage you can use the same plugins by passing them to the getRxStorageDexie() function. const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie({ addons: [ /* Your Dexie.js plugins */ ] }) }); ","version":"Next","tagName":"h2"},{"title":"Disabling the non-premium console log","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#disabling-the-non-premium-console-log","content":" We want to be transparent with our community, and you'll notice a console message when using the free Dexie.js based RxStorage implementation. This message serves to inform you about the availability of faster storage solutions within our 👑 Premium Plugins. We understand that this might be a minor inconvenience, and we sincerely apologize for that. However, maintaining and improving RxDB requires substantial resources, and our premium users help us ensure its sustainability. 
If you find value in RxDB and wish to remove this message, we encourage you to explore our premium storage options, which are optimized for professional use and production environments. Thank you for your understanding and support. If you already have premium access and want to use the Dexie.js RxStorage without the log, you can call the setPremiumFlag() function to disable the log. import { setPremiumFlag } from 'rxdb-premium/plugins/shared'; setPremiumFlag(); ","version":"Next","tagName":"h2"},{"title":"Performance comparison with other RxStorage plugins","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#performance-comparison-with-other-rxstorage-plugins","content":" The performance of the Dexie.js RxStorage is good enough for most use cases but other storages can have way better performance metrics: ","version":"Next","tagName":"h2"},{"title":"RxQuery","type":0,"sectionRef":"#","url":"/rx-query.html","content":"","keywords":"","version":"Next"},{"title":"find()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#find","content":" To create a basic RxQuery, call .find() on a collection and pass in selectors. The result-set of normal queries is an array of documents. // find all that are older than 18 const query = myCollection .find({ selector: { age: { $gt: 18 } } }); ","version":"Next","tagName":"h2"},{"title":"findOne()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#findone","content":" A findOne-query has only a single RxDocument or null as result-set. // find alice const query = myCollection .findOne({ selector: { name: 'alice' } }); // find the youngest one const query = myCollection .findOne({ selector: {}, sort: [ {age: 'asc'} ] }); // find one document by the primary key const query = myCollection.findOne('foobar'); ","version":"Next","tagName":"h2"},{"title":"exec()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#exec","content":" Returns a Promise that resolves with the result-set of the query. const query = myCollection.find(); const results = await query.exec(); console.dir(results); // > [RxDocument,RxDocument,RxDocument..] On .findOne() queries, you can call .exec(true) to ensure your document exists and to make TypeScript handling easier: // docOrUndefined can be the type RxDocument or null which then has to be handled to be typesafe. const docOrUndefined = await myCollection.findOne().exec(); // with .exec(true), it will throw if the document cannot be found and always have the type RxDocument const doc = await myCollection.findOne().exec(true); ","version":"Next","tagName":"h2"},{"title":"Observe $","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#observe-","content":" A BehaviorSubject that always has the current result-set as its value. This is extremely helpful when used together with UIs that should always show the same state as what is written in the database. const query = myCollection.find(); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); // > 'got results: 5' // BehaviorSubjects emit on subscription await myCollection.insert({/* ... */}); // insert one // > 'got results: 6' // $.subscribe() was called again with the new results // stop watching this query querySub.unsubscribe() ","version":"Next","tagName":"h2"},{"title":"update()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#update","content":" Runs an update on every RxDocument of the query-result. // to use the update() method, you need to add the update plugin. 
import { RxDBUpdatePlugin } from 'rxdb/plugins/update'; addRxPlugin(RxDBUpdatePlugin); const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.update({ $inc: { age: 1 // increases age of every found document by 1 } }); ","version":"Next","tagName":"h2"},{"title":"patch() / incrementalPatch()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#patch--incrementalpatch","content":" Runs the RxDocument.patch() function on every RxDocument of the query result. const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.patch({ age: 12 // set the age of every found to 12 }); ","version":"Next","tagName":"h2"},{"title":"modify() / incrementalModify()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#modify--incrementalmodify","content":" Runs the RxDocument.modify() function on every RxDocument of the query result. const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.modify((docData) => { docData.age = docData.age + 1; // increases age of every found document by 1 return docData; }); ","version":"Next","tagName":"h2"},{"title":"remove() / incrementalRemove()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#remove--incrementalremove","content":" Deletes all found documents. Returns a promise which resolves to the deleted documents. // All documents where the age is less than 18 const query = myCollection.find({ selector: { age: { $lt: 18 } } }); // Remove the documents from the collection const removedDocs = await query.remove(); ","version":"Next","tagName":"h2"},{"title":"doesDocumentDataMatch()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#doesdocumentdatamatch","content":" Returns true if the given document data matches the query. const documentData = { id: 'foobar', age: 19 }; myCollection.find({ selector: { age: { $gt: 18 } } }).doesDocumentDataMatch(documentData); // > true myCollection.find({ selector: { age: { $gt: 20 } } }).doesDocumentDataMatch(documentData); // > false ","version":"Next","tagName":"h2"},{"title":"Query Builder Plugin","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#query-builder-plugin","content":" To use chained query methods, you can also use the query-builder plugin. // add the query builder plugin import { addRxPlugin } from 'rxdb'; import { RxDBQueryBuilderPlugin } from 'rxdb/plugins/query-builder'; addRxPlugin(RxDBQueryBuilderPlugin); // now you can use chained query methods const query = myCollection.find().where('age').gt(18); const result = await query.exec(); ","version":"Next","tagName":"h2"},{"title":"Query Examples","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#query-examples","content":" Here some examples to fast learn how to write queries without reading the docs. Pouch-find-docs - learn how to use mango-queriesmquery-docs - learn how to use chained-queries // directly pass search-object myCollection.find({ selector: { name: { $eq: 'foo' } } }) .exec().then(documents => console.dir(documents)); /* * find by using sql equivalent '%like%' syntax * This example will fe: match 'foo' but also 'fifoo' or 'foofa' or 'fifoofa' * Notice that in RxDB queries, a regex is represented as a $regex string with the $options parameter for flags. * Using a RegExp instance is not allowed because they are not JSON.stringify()-able and also * RegExp instances are mutable which could cause undefined behavior when the RegExp is mutated * after the query was parsed. 
*/ myCollection.find({ selector: { name: { $regex: '.*foo.*' } } }) .exec().then(documents => console.dir(documents)); // find using a composite statement eg: $or // This example checks where name is either foo or if name is not existent on the document myCollection.find({ selector: { $or: [ { name: { $eq: 'foo' } }, { name: { $exists: false } }] } }) .exec().then(documents => console.dir(documents)); // do a case insensitive search // This example will match 'foo' or 'FOO' or 'FoO' etc... myCollection.find({ selector: { name: { $regex: '^foo$', $options: 'i' } } }) .exec().then(documents => console.dir(documents)); // chained queries myCollection.find().where('name').eq('foo') .exec().then(documents => console.dir(documents)); RxDB will always append the primary key to the sort parameters For several performance optimizations, like the EventReduce algorithm, RxDB expects all queries to return a deterministic sort order that does not depend on the insert order of the documents. To ensure a deterministic ordering, RxDB will always append the primary key as last sort parameter to all queries and to all indexes. This works in contrast to most other databases where a query without sorting would return the documents in the order in which they had been inserted to the database. ","version":"Next","tagName":"h2"},{"title":"Setting a specific index","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#setting-a-specific-index","content":" By default, the query will be send to the RxStorage, where a query planner will determine which one of the available indexes must be used. But the query planner cannot know everything and sometimes will not pick the most optimal index. To improve query performance, you can specify which index must be used, when running the query. const query = myCollection .findOne({ selector: { age: { $gt: 18 }, gender: { $eq: 'm' } }, /** * Because the developer knows that 50% of the documents are 'male', * but only 20% are below age 18, * it makes sense to enforce using the ['gender', 'age'] index to improve performance. * This could not be known by the query planer which might have chosen ['age', 'gender'] instead. */ index: ['gender', 'age'] }); ","version":"Next","tagName":"h2"},{"title":"Count","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#count","content":" When you only need the amount of documents that match a query, but you do not need the document data itself, you can use a count query for better performance. The performance difference compared to a normal query differs depending on which RxStorage implementation is used. const query = myCollection.count({ selector: { age: { $gt: 18 } } // 'limit' and 'skip' MUST NOT be set for count queries. }); // get the count result once const matchingAmount = await query.exec(); // > number // observe the result query.$.subscribe(amount => { console.log('Currently has ' + amount + ' documents'); }); note Count queries have a better performance than normal queries because they do not have to fetch the full document data out of the storage. Therefore it is not possible to run a count() query with a selector that requires to fetch and compare the document data. So if your query selector does not fully match an index of the schema, it is not allowed to run it. These queries would have no performance benefit compared to normal queries but have the tradeoff of not using the fetched document data for caching. 
/** * The following will throw an error because * the count operation cannot run on any specific index range * because the $regex operator is used. */ const query = myCollection.count({ selector: { age: { $regex: 'foobar' } } }); /** * The following will throw an error because * the count operation cannot run on any specific index range * because there is no ['age' ,'otherNumber'] index * defined in the schema. */ const query = myCollection.count({ selector: { age: { $gt: 20 }, otherNumber: { $gt: 10 } } }); If you want to count these kind of queries, you should do a normal query instead and use the length of the result set as counter. This has the same performance as running a non-fully-indexed count which has to fetch all document data from the database and run a query matcher. // get count manually once const resultSet = await myCollection.find({ selector: { age: { $regex: 'foobar' } } }).exec(); const count = resultSet.length; // observe count manually const count$ = myCollection.find({ selector: { age: { $regex: 'foobar' } } }).$.pipe( map(result => result.length) ); /** * To allow non-fully-indexed count queries, * you can also specify that by setting allowSlowCount=true * when creating the database. */ const database = await createRxDatabase({ name: 'mydatabase', allowSlowCount: true, // set this to true [default=false] /* ... */ }); ","version":"Next","tagName":"h2"},{"title":"allowSlowCount","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#allowslowcount","content":" To allow non-fully-indexed count queries, you can also specify that by setting allowSlowCount: true when creating the database. Doing this is mostly not wanted, because it would run the counting on the storage without having the document stored in the RxDB document cache. This is only recommended if the RxStorage is running remotely like in a WebWorker and you not always want to send the document-data between the worker and the main thread. In this case you might only need the count-result instead to save performance. ","version":"Next","tagName":"h3"},{"title":"RxQuery's are immutable","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#rxquerys-are-immutable","content":" Because RxDB is a reactive database, we can do heavy performance-optimisation on query-results which change over time. To be able to do this, RxQuery's have to be immutable. This means, when you have a RxQuery and run a .where() on it, the original RxQuery-Object is not changed. Instead the where-function returns a new RxQuery-Object with the changed where-field. Keep this in mind if you create RxQuery's and change them afterwards. Example: const queryObject = myCollection.find().where('age').gt(18); // Creates a new RxQuery object, does not modify previous one queryObject.sort('name'); const results = await queryObject.exec(); console.dir(results); // result-documents are not sorted by name const queryObjectSort = queryObject.sort('name'); const results = await queryObjectSort.exec(); console.dir(results); // result-documents are now sorted ","version":"Next","tagName":"h2"},{"title":"isRxQuery","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#isrxquery","content":" Returns true if the given object is an instance of RxQuery. Returns false if not. const is = isRxQuery(myObj); ","version":"Next","tagName":"h3"},{"title":"Design Decisions","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#design-decisions","content":" Like most other noSQL-Databases, RxDB uses the mango-query-syntax similar to MongoDB and others. 
We use the JSON based Mango Query Syntax because: Mango Queries work better with TypeScript compared to SQL strings.Mango Queries are composeable and easy to transform by code without joining SQL strings.Queries can be run very fast and efficient with only a minimal query planer to plan the best indexes and operations.NoSQL queries can be optimized with the EventReduce algorithm to improve performance of observed and cached queries. ","version":"Next","tagName":"h2"},{"title":"FAQ","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#faq","content":" Can I specify which document fields are returned by an RxDB query? No, RxDB does not support partial document retrieval. Because RxDB is a client-side database with limited memory, it caches and de-duplicates entire documents across multiple queries. Even if you only need a few fields, most storages must still fetch the entire JSON data, so subselecting fields would not significantly improve performance. Therefore, RxDB always returns full documents. If you only need certain fields, you can filter them out in your application code or consider storing just the necessary data in a separate collection. Why doesn't RxDB support aggregations on queries? RxDB runs entirely on the client side. Any "aggregation" or data processing you might do within RxDB would still happen in the same JavaScript environment as your application code. Therefore, there's no real performance advantage or difference between doing the aggregation in RxDB vs. doing it in your own code after fetching the data. As a result, RxDB doesn't provide built-in aggregation methods. Instead, just query the documents you need and perform any calculations directly in your app's code. Why does RxDB not support cross-collection queries? RxDB is a client-side database and does not provide built-in cross-collection queries or transactions. Instead, you can execute multiple queries in your JavaScript code and combine their results as needed. Because everything runs in the same environment, this approach offers the same performance you would get if cross-collection queries were built in - without the added complexity. Why Doesn't RxDB Support Case-Insensitive Search? RxDB relies on various storage engines as its backend, and these storage engines generally do not support case-insensitive search natively, like IndexedDB or FoundationDB. This limitation arises from the design of these engines, which prioritize efficiency and flexibility for specific types of queries rather than universal features like case-insensitivity. Although RxDB does not offer built-in support for case-insensitive search, there are two common workarounds: Store Data in a Meta-Field for Lowercase Search: To enable case-insensitive search, you can store an additional field in your documents where the relevant text data is preprocessed and saved in lowercase. const document = { name: 'John Doe', nameLowercase: 'john doe' // Meta-field }; await myCollection.insert(document); const query = myCollection.find({ selector: { nameLowercase: { $eq: 'john doe' } } }); Use a Regex Query: Regular expressions can perform case-insensitive searches. For example: const query = myCollection.find({ selector: { name: { $regex: '^john doe$', $options: 'i' } // Case-insensitive regex } }); However, this method has a significant downside: regex queries often cannot leverage indexes efficiently. As a result, they may be slower, especially for large datasets. 
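If you use the meta-field approach, the lowercase copy has to stay in sync with the original field on every write. Below is a minimal sketch of one way to do that with RxDB's middleware hooks; the hook wiring is not part of the answer above, and the field names are taken from the example:
myCollection.preInsert(docData => {
    // keep the lowercase copy in sync on insert
    docData.nameLowercase = docData.name.toLowerCase();
}, false);
myCollection.preSave(docData => {
    // keep the lowercase copy in sync on every update
    docData.nameLowercase = docData.name.toLowerCase();
}, false);
With this in place, queries against nameLowercase stay correct without every caller having to remember to set the field manually.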
","version":"Next","tagName":"h2"},{"title":"Filesystem Node RxStorage (beta)","type":0,"sectionRef":"#","url":"/rx-storage-filesystem-node.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Filesystem Node RxStorage (beta)","url":"/rx-storage-filesystem-node.html#pros","content":" Easier setup compared to SQLiteFast ","version":"Next","tagName":"h3"},{"title":"Cons","type":1,"pageTitle":"Filesystem Node RxStorage (beta)","url":"/rx-storage-filesystem-node.html#cons","content":" It is part of the RxDB Premium 👑 plugin that must be purchased. ","version":"Next","tagName":"h3"},{"title":"Usage","type":1,"pageTitle":"Filesystem Node RxStorage (beta)","url":"/rx-storage-filesystem-node.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageFilesystemNode } from 'rxdb-premium/plugins/storage-filesystem-node'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageFilesystemNode({ basePath: path.join(__dirname, 'my-database-folder'), /** * Set inWorker=true if you use this RxStorage * together with the WebWorker plugin. */ inWorker: false }) }); /* ... */ ","version":"Next","tagName":"h2"},{"title":"RxDB Server","type":0,"sectionRef":"#","url":"/rx-server.html","content":"","keywords":"","version":"Next"},{"title":"Starting a RxServer","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#starting-a-rxserver","content":" To create an RxServer, you have to install the rxdb-server package with npm install rxdb-server --save and then you can import the createRxServer() function and create a server on a given RxDatabase and adapter. After adding the endpoints to the server, do not forget to call myServer.start() to start the actually http-server. import { createRxServer } from 'rxdb-server/plugins/server'; /** * We use the express adapter which is the one that comes with RxDB core */ import { RxServerAdapterExpress } from 'rxdb-server/plugins/adapter-express'; const myServer = await createRxServer({ database: myRxDatabase, adapter: RxServerAdapterExpress, port: 443 }); // add endpoints here (see below) // after adding the endpoints, start the server await myServer.start(); ","version":"Next","tagName":"h2"},{"title":"Using RxServer with Fastify","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#using-rxserver-with-fastify","content":" There is also a RxDB Premium 👑 adapter to use the RxServer with Fastify instead of express. Fastify has shown to have better performance and in general is more modern. import { createRxServer } from 'rxdb-server/plugins/server'; import { RxServerAdapterFastify } from 'rxdb-premium/plugins/server-adapter-fastify'; const myServer = await createRxServer({ database: myRxDatabase, adapter: RxServerAdapterFastify, port: 443 }); await myServer.start(); ","version":"Next","tagName":"h3"},{"title":"Using RxServer with Koa","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#using-rxserver-with-koa","content":" There is also a RxDB Premium 👑 adapter to use the RxServer with Koa instead of express. Koa has shown to have better compared to express. 
import { createRxServer } from 'rxdb-server/plugins/server'; import { RxServerAdapterKoa } from 'rxdb-premium/plugins/server-adapter-koa'; const myServer = await createRxServer({ database: myRxDatabase, adapter: RxServerAdapterKoa, port: 443 }); await myServer.start(); ","version":"Next","tagName":"h3"},{"title":"RxServer Endpoints","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#rxserver-endpoints","content":" On top of the RxServer you can add different types of endpoints. An endpoint is always connected to exactly one RxCollection and it only serves data from that single collection. For now there are only two endpoints implemented, the replication endpoint and the REST endpoint. Others will be added in the future. An endpoint is added to the server by calling the add endpoint method like myRxServer.addReplicationEndpoint(). Each needs a different name string as input which will define the resulting endpoint url. The endpoint urls is a combination of the given name and schema version of the collection, like /my-endpoint/0. const myEndpoint = server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection }); console.log(myEndpoint.urlPath) // > 'my-endpoint/0' Notice that it is not required that the server side schema version is equal to the client side schema version. You might want to change server schemas more often and then only do a migration on the server, not on the clients. ","version":"Next","tagName":"h2"},{"title":"Replication Endpoint","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#replication-endpoint","content":" The replication endpoint allows clients that connect to it to replicate data with the server via the RxDB replication protocol. There is also the Replication Server plugin that is used on the client side to connect to the endpoint. The endpoint is added to the server with the addReplicationEndpoint() method. It requires a specific collection and the endpoint will only provided replication for documents inside of that collection. // > server.ts const endpoint = server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection }); Then you can start the Server Replication on the client: // > client.ts const replicationState = await replicateServer({ collection: usersCollection, replicationIdentifier: 'my-server-replication', url: 'http://localhost:80/my-endpoint/0', push: {}, pull: {} }); ","version":"Next","tagName":"h2"},{"title":"REST endpoint","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#rest-endpoint","content":" The REST endpoint exposes various methods to access the data from the RxServer with non-RxDB tools via plain HTTP operations. You can use it to connect apps that are programmed in different programming languages than JavaScript or to access data from other third party tools. 
Creating a REST endpoint on a RxServer: const endpoint = await server.addRestEndpoint({ name: 'my-endpoint', collection: myServerCollection }); // plain http request with fetch const request = await fetch('http://localhost:80/' + endpoint.urlPath + '/query', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify({ selector: {} }) }); const response = await request.json(); There is also the client-rest plugin that provides typesafe interactions with the REST endpoint: // using the client (optional) import { createRestClient } from 'rxdb-server/plugins/client-rest'; const client = createRestClient('http://localhost:80/' + endpoint.urlPath, {/* headers */}); const response = await client.query({ selector: {} }); The REST endpoint exposes the following paths: query [POST]: Fetch the results of a NoSQL query.query/observe [GET]: Observe a query's results via Server-Sent Events.get [POST]: Fetch multiple documents by their primary key.set [POST]: Write multiple documents at once.delete [POST]: Delete multiple documents by their primary key. ","version":"Next","tagName":"h2"},{"title":"CORS","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#cors","content":" When creating a server or adding endpoints, you can specify a CORS string. Endpoint cors always overwrites server cors. The default is the wildcard * which allows all requests. const myServer = await createRxServer({ database: myRxDatabase, cors: 'http://example.com', port: 443 }); const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection, cors: 'http://example.com' }); ","version":"Next","tagName":"h2"},{"title":"Auth handler","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#auth-handler","content":" To authenticate users and to make user-specific data available on server requests, an authHandler must be provided that parses the headers and returns the actual auth data that is used to authenticate the client and in the queryModifier and changeValidator. An auth handler gets the given headers object as input and returns the auth data in the format { data: {}, validUntil: 1706579817126}. The data field can contain any data that can be used afterwards in the queryModifier and changeValidator. The validUntil field contains the unix timestamp in milliseconds at which the authentication is no longer valid and the client will get disconnected. For example your authHandler could get the Authorization header and parse the JSON web token to identify the user and store the user id in the data field for later use. ","version":"Next","tagName":"h2"},{"title":"Query modifier","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#query-modifier","content":" The query modifier is a JavaScript function that is used to restrict which documents a client can fetch or replicate from the server. It gets the auth data and the actual NoSQL query as input parameters and returns a modified NoSQL query that is then used internally by the server. You can pass a different query modifier to each endpoint so that you can have different endpoints for different use cases on the same server. For example you could use a query modifier that gets the userId from the auth data and then restricts the query to only return documents that have the same userId set.
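To make the query modifier example below concrete, here is a minimal, hedged sketch of an authHandler that produces such auth data. The decodeJwt() helper is a placeholder for whatever token parsing library you use and is not part of the RxServer API, and passing the handler via an authHandler option of createRxServer() is an assumption here, not taken from the text above:
const myAuthHandler = (headers) => {
    // read the JSON web token from the Authorization header (assumed header layout)
    const token = (headers.authorization || '').replace('Bearer ', '');
    const parsed = decodeJwt(token); // hypothetical helper, replace with your JWT library
    return {
        data: { userid: parsed.sub },
        validUntil: Date.now() + 1000 * 60 * 60 // disconnect the client after one hour
    };
};
const myServer = await createRxServer({ database: myRxDatabase, adapter: RxServerAdapterExpress, authHandler: myAuthHandler, port: 443 });
The query modifier shown next then reads authData.data.userid from exactly this structure.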
function myQueryModifier(authData, query) { query.selector.userId = { $eq: authData.data.userid }; return query; } const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection, queryModifier: myQueryModifier }); The RxServer will use the queryModifier at many places internally to determine which queries to run or if a document is allowed to be seen/edited by a client. note For performance reasons the queryModifier and changeValidator MUST NOT be async or return a promise. If you need async data to run them, you should gather that data in the RxServerAuthHandler and store it in the auth data to access it later. ","version":"Next","tagName":"h2"},{"title":"Change validator","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#change-validator","content":" The change validator is a JavaScript function that is used to restrict which document writes are allowed to be done by a client. For example you could restrict clients to only change specific document fields or to not do any document writes at all. It can also be used to validate changed document data before storing it at the server. In this example we restrict clients from doing inserts and only allow updates. For that we check if the change contains an assumedMasterState property and return false to block the write. function myChangeValidator(authData, change) { if(change.assumedMasterState) { return false; } else { return true; } } const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection, changeValidator: myChangeValidator }); ","version":"Next","tagName":"h2"},{"title":"Server-only indexes","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#server-only-indexes","content":" Normal RxDB schema indexes get the _deleted field prepended because all RxQueries automatically only search for documents with _deleted=false. When you use RxDB on a server, this might not be optimal because there can be the need to query for documents where the value of _deleted does not matter. Mostly this is required in the pull.stream$ of a replication when a queryModifier is used to add an additional field to the query. To set indexes without _deleted, you can use the internalIndexes field of the schema like the following: { "version": 0, "primaryKey": "id", "type": "object", "properties": { "id": { "type": "string", "maxLength": 100 }, "name": { "type": "string", "maxLength": 100 } }, "internalIndexes": [ ["name", "id"] ] } note Indexes come with a performance burden. You should only use the indexes you need and make sure you do not accidentally set the internalIndexes in your client side RxCollections. ","version":"Next","tagName":"h2"},{"title":"Server-only fields","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#server-only-fields","content":" All endpoints can be created with the serverOnlyFields setting which defines fields that only exist on the server, not on the clients. Clients will not see those fields and cannot do writes where one of the serverOnlyFields is set. Notice that when you use serverOnlyFields you likely need to have a different schema on the server than the schema that is used on the clients. const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: col, // here the field 'my-secrets' is defined to be server-only serverOnlyFields: ['my-secrets'] }); note For performance reasons, only top-level fields can be used as serverOnlyFields.
Otherwise the server would have to deep-clone all document data which is too expensive. ","version":"Next","tagName":"h2"},{"title":"Readonly fields","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#readonly-fields","content":" When you have fields that should only be modified by the server, but not by the client, you can ensure that by comparing the field's value in the changeValidator. const myChangeValidator = function(authData, change){ if(change.newDocumentState.myReadonlyField !== change.assumedMasterState.myReadonlyField){ throw new Error('myReadonlyField is readonly'); } } ","version":"Next","tagName":"h2"},{"title":"$regex queries not allowed","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#regex-queries-not-allowed","content":" $regex queries are not allowed to run at the server to prevent ReDoS attacks. ","version":"Next","tagName":"h2"},{"title":"Conflict handling","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#conflict-handling","content":" To detect and handle conflicts, the conflict handler from the endpoint's RxCollection is used. ","version":"Next","tagName":"h2"},{"title":"FAQ","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#faq","content":" Why are the server plugins in a different github repo and npm package? The RxServer and its other plugins are in a different github repository because: It has too many dependencies that you do not want to install if you only use RxDB at the client side. It has a different license (SSPL) to prevent large cloud vendors from "stealing" the revenue, similar to MongoDB's license. Why can't endpoints be added dynamically? After RxServer.start() is called, you can no longer add endpoints. This is because many of the supported server libraries do not allow dynamic routing for performance and security reasons. ","version":"Next","tagName":"h2"},{"title":"RxDB Database on top of FoundationDB","type":0,"sectionRef":"#","url":"/rx-storage-foundationdb.html","content":"","keywords":"","version":"Next"},{"title":"Features of RxDB+FoundationDB","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#features-of-rxdbfoundationdb","content":" Using RxDB on top of FoundationDB gives you many benefits compared to using the plain FoundationDB API: Indexes: In RxDB with a FoundationDB storage layer, indexes are used to optimize query performance, allowing for fast and efficient data retrieval even in large datasets. You can define single and compound indexes with the RxDB schema.Schema Based Data Model: Utilizing a jsonschema based data model, the system offers a highly structured and versatile approach to organizing and validating data, ensuring consistency and clarity in database interactions.Complex Queries: The system supports complex NoSQL queries, allowing for advanced data manipulation and retrieval, tailored to specific needs and intricate data relationships. For example you can do $regex or $or queries which is hardly possible with the plain key-value access of FoundationDB.Observable Queries & Documents: RxDB's observable queries and documents feature ensures real-time updates and synchronization, providing dynamic and responsive data interactions in applications.Compression: RxDB employs data compression techniques to reduce storage requirements and enhance transmission efficiency, making it more cost-effective and faster, especially for large volumes of data.
You can compress the NoSQL document data, but also the binary attachments data.Attachments: RxDB supports the storage and management of attachments which allowing for the seamless inclusion of binary data like images or documents alongside structured data within the database. ","version":"Next","tagName":"h2"},{"title":"Installation","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#installation","content":" Install the FoundationDB client cli which is used to communicate with the FoundationDB cluster.Install the FoundationDB node bindings npm module via npm install foundationdb --save. If the latest version does not work for you, you should use the same version as stated in the storage-foundationdb job of the RxDB CI main.yml. ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageFoundationDB } from 'rxdb/plugins/storage-foundationdb'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageFoundationDB({ /** * Version of the API of the FoundationDB cluster.. * FoundationDB is backwards compatible across a wide range of versions, * so you have to specify the api version. * If in doubt, set it to 620. */ apiVersion: 620, /** * Path to the FoundationDB cluster file. * (optional) * If in doubt, leave this empty to use the default location. */ clusterFile: '/path/to/fdb.cluster', /** * Amount of documents to be fetched in batch requests. * You can change this to improve performance depending on * your database access patterns. * (optional) * [default=50] */ batchSize: 50 }) }); ","version":"Next","tagName":"h2"},{"title":"Multi Instance","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#multi-instance","content":" Because FoundationDB does not offer a changestream, it is not possible to use the same cluster from more than one Node.js process at the same time. For example you cannot spin up multiple servers with RxDB databases that all use the same cluster. There might be workarounds to create something like a FoundationDB changestream and you can make a Pull Request if you need that feature. ","version":"Next","tagName":"h2"},{"title":"IndexedDB RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-indexeddb.html","content":"","keywords":"","version":"Next"},{"title":"IndexedDB performance comparison","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#indexeddb-performance-comparison","content":" Here is some performance comparison with other storages. Compared to the non-memory storages like OPFS and Dexie.js, it has the smallest build size and fastest write speed. Only OPFS is faster on queries over big datasets. See performance comparison page for a comparison with all storages. ","version":"Next","tagName":"h2"},{"title":"Using the IndexedDB RxStorage","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#using-the-indexeddb-rxstorage","content":" To use the indexedDB storage you import it from the RxDB Premium 👑 npm module and use getRxStorageIndexedDB() when creating the RxDatabase. import { createRxDatabase } from 'rxdb'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageIndexedDB({ /** * For better performance, queries run with a batched cursor. 
* You can change the batchSize to optimize the query time * for specific queries. * You should only change this value when you are also doing performance measurements. * [default=300] */ batchSize: 300 }) }); ","version":"Next","tagName":"h2"},{"title":"Overwrite/Polyfill the native IndexedDB","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#overwritepolyfill-the-native-indexeddb","content":" Node.js has no IndexedDB API. To still run the IndexedDB RxStorage in Node.js, for example to run unit tests, you have to polyfill it. You can do that by using the fake-indexeddb module and passing it to the getRxStorageIndexedDB() function. import { createRxDatabase } from 'rxdb'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; //> npm install fake-indexeddb --save const fakeIndexedDB = require('fake-indexeddb'); const fakeIDBKeyRange = require('fake-indexeddb/lib/FDBKeyRange'); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageIndexedDB({ indexedDB: fakeIndexedDB, IDBKeyRange: fakeIDBKeyRange }) }); ","version":"Next","tagName":"h2"},{"title":"Storage Buckets","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#storage-buckets","content":" The Storage Buckets API provides a way for sites to organize locally stored data into groupings called "storage buckets". This allows the user agent or sites to manage and delete buckets independently rather than applying the same treatment to all the data from a single origin. Read More To use different storage buckets with the RxDB IndexedDB Storage, you can use a function instead of a plain object when providing the indexedDB attribute: import { createRxDatabase } from 'rxdb'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageIndexedDB({ indexedDB: async(params) => { const myStorageBucket = await navigator.storageBuckets.open('myApp-' + params.databaseName); return myStorageBucket.indexedDB; }, IDBKeyRange }) }); ","version":"Next","tagName":"h2"},{"title":"Limitations of the IndexedDB RxStorage","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#limitations-of-the-indexeddb-rxstorage","content":" It is part of the RxDB Premium 👑 plugin that must be purchased. If you just need a storage that works in the browser and you do not have to care about performance, you can use the Dexie.js storage instead.The IndexedDB storage requires support for IndexedDB v2; it does not work on Internet Explorer. ","version":"Next","tagName":"h2"},{"title":"RxStorage Localstorage Meta Optimizer","type":0,"sectionRef":"#","url":"/rx-storage-localstorage-meta-optimizer.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"RxStorage Localstorage Meta Optimizer","url":"/rx-storage-localstorage-meta-optimizer.html#usage","content":" The meta optimizer gets wrapped around any other RxStorage. It will then automatically detect if an RxDB internal storage instance is created, and replace that with a localstorage based instance. import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; /** * First wrap the original RxStorage with the optimizer.
*/ const optimizedRxStorage = getLocalstorageMetaOptimizerRxStorage({ /** * Here we use the dexie.js RxStorage, * it is also possible to use any other RxStorage instead. */ storage: getRxStorageDexie() }); /** * Create the RxDatabase with the wrapped RxStorage. */ const database = await createRxDatabase({ name: 'mydatabase', storage: optimizedRxStorage }); ","version":"Next","tagName":"h2"},{"title":"Memory RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-memory.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Memory RxStorage","url":"/rx-storage-memory.html#pros","content":" Really fast. Uses binary search on all operations.Small build size ","version":"Next","tagName":"h3"},{"title":"Cons","type":1,"pageTitle":"Memory RxStorage","url":"/rx-storage-memory.html#cons","content":" No persistence import { createRxDatabase } from 'rxdb'; import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMemory() }); ","version":"Next","tagName":"h3"},{"title":"Memory Mapped RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-memory-mapped.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Memory Mapped RxStorage","url":"/rx-storage-memory-mapped.html#pros","content":" Improves read/write performance because these operations run against the in-memory storage.Decreases initial page load because it load all data in a single bulk request. It even detects if the database is used for the first time and then it does not have to await the creation of the persistent storage.Can store encrypted data on disc while still being able to run queries on the non-encrypted in-memory state. ","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"Memory Mapped RxStorage","url":"/rx-storage-memory-mapped.html#cons","content":" It does not support attachments because storing big attachments data in-memory should not be done.When the JavaScript process is killed ungracefully like when the browser crashes or the power of the PC is terminated, it might happen that some memory writes are not persisted to the parent storage. This can be prevented with the awaitWritePersistence flag.The memory-mapped storage can only be used if all data fits into the memory of the JavaScript process. This is normally not a problem because a browser has much memory these days and plain JSON document data is not that big.Because it has to await an initial data loading from the parent storage into the memory, initial page load time can increase when much data is already stored. This is likely not a problem when you store less than 10k documents.The memory-mapped storage is part of RxDB Premium 👑. It is not part of the default RxDB core module. ","version":"Next","tagName":"h2"},{"title":"Using the Memory-Mapped RxStorage","type":1,"pageTitle":"Memory Mapped RxStorage","url":"/rx-storage-memory-mapped.html#using-the-memory-mapped-rxstorage","content":" import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; /** * Here we use the IndexedDB RxStorage as persistence storage. * Any other RxStorage can also be used. */ const parentStorage = getRxStorageIndexedDB(); // wrap the persistent storage with the memory-mapped storage. 
const storage = getMemoryMappedRxStorage({ storage: parentStorage }); // create the RxDatabase like you would do with any other RxStorage const db = await createRxDatabase({ name: 'myDatabase', storage, }); /** ... **/ ","version":"Next","tagName":"h2"},{"title":"Multi-Tab Support","type":1,"pageTitle":"Memory Mapped RxStorage","url":"/rx-storage-memory-mapped.html#multi-tab-support","content":" Because of how the memory-mapped storage works, it is not possible to have the same storage open in multiple JavaScript processes. So when you use this in a browser application, you cannot open multiple databases when the app is used in multiple browser tabs. To solve this, use the SharedWorker Plugin so that the memory-mapped storage runs inside of a SharedWorker exactly once and is then reused for all browser tabs. If you have a single JavaScript process, like in a React Native app, you do not have to care about this and can just use the memory-mapped storage in the main process. ","version":"Next","tagName":"h2"},{"title":"Encryption of the persistent data","type":1,"pageTitle":"Memory Mapped RxStorage","url":"/rx-storage-memory-mapped.html#encryption-of-the-persistend-data","content":" Normally RxDB is not capable of running queries on encrypted fields. But when you use the memory-mapped RxStorage, you can store the document data encrypted on disc, while being able to run queries on the non-encrypted in-memory state. Make sure you use the encryption storage wrapper around the persistent storage, NOT around the memory-mapped storage as a whole. import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; import { wrappedKeyEncryptionWebCryptoStorage } from 'rxdb-premium/plugins/encryption-web-crypto'; const storage = getMemoryMappedRxStorage({ storage: wrappedKeyEncryptionWebCryptoStorage({ storage: getRxStorageIndexedDB() }) }); const db = await createRxDatabase({ name: 'myDatabase', storage, }); /** ... **/ ","version":"Next","tagName":"h2"},{"title":"Await Write Persistence","type":1,"pageTitle":"Memory Mapped RxStorage","url":"/rx-storage-memory-mapped.html#await-write-persistence","content":" Running operations on the memory-mapped storage by default returns directly when the operation has run on the in-memory state and then persists the changes in the background. Sometimes you might want to ensure that write operations are persisted; you can do this by setting awaitWritePersistence: true. const storage = getMemoryMappedRxStorage({ awaitWritePersistence: true, storage: getRxStorageIndexedDB() }); ","version":"Next","tagName":"h2"},{"title":"RxStorage LokiJS","type":0,"sectionRef":"#","url":"/rx-storage-lokijs.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#pros","content":" Queries can run faster because all data is processed in memory.It has a much faster initial load time because it loads all data from IndexedDB in a single request. But this is only true for small datasets. If much data is stored, the initial load time can be higher than on other RxStorage implementations.
","version":"Next","tagName":"h3"},{"title":"Cons","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#cons","content":" It does not support attachments.Data can be lost when the JavaScript process is killed ungracefully like when the browser crashes or the power of the PC is terminated.All data must fit into the memory.Slow initialisation time when used with multiInstance: true because it has to await the leader election process.Slow initialisation time when really much data is stored inside of the database because it has to parse a big JSON string. ","version":"Next","tagName":"h3"},{"title":"Usage","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; // in the browser, we want to persist data in IndexedDB, so we use the indexeddb adapter. const LokiIncrementalIndexedDBAdapter = require('lokijs/src/incremental-indexeddb-adapter'); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageLoki({ adapter: new LokiIncrementalIndexedDBAdapter(), /* * Do not set lokiJS persistence options like autoload and autosave, * RxDB will pick proper defaults based on the given adapter */ }) }); ","version":"Next","tagName":"h2"},{"title":"Adapters","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#adapters","content":" LokiJS is based on adapters that determine where to store persistent data. For LokiJS there are adapters for IndexedDB, AWS S3, the NodeJS filesystem or NativeScript. Find more about the possible adapters at the LokiJS docs. For react native there is also the loki-async-reference-adapter. ","version":"Next","tagName":"h2"},{"title":"Multi-Tab support","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#multi-tab-support","content":" When you use plain LokiJS, you cannot build an app that can be used in multiple browser tabs. The reason is that LokiJS loads data in bulk and then only regularly persists the in-memory state to disc. When opened in multiple tabs, it would happen that the LokiJS instances overwrite each other and data is lost. With the RxDB LokiJS-plugin, this problem is fixed with the LeaderElection module. Between all open tabs, a leading tab is elected and only in this tab a database is created. All other tabs do not run queries against their own database, but instead call the leading tab to send and retrieve data. When the leading tab is closed, a new leader is elected that reopens the database and processes queries. You can disable this by setting multiInstance: false when creating the RxDatabase. ","version":"Next","tagName":"h2"},{"title":"Autosave and autoload","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#autosave-and-autoload","content":" When using plain LokiJS, you could set the autosave option to true to make sure that LokiJS persists the database state after each write into the persistence adapter. Same goes to autoload which loads the persisted state on database creation. But RxDB knows better when to persist the database state and when to load it, so it has its own autosave logic. This will ensure that running the persistence handler does not affect the performance of more important tasks. Instead RxDB will always wait until the database is idle and then runs the persistence handler. 
A load of the persisted state is done on database or collection creation and it is ensured that multiple load calls do not run in parallel and interfere with each other or with saveDatabase() calls. ","version":"Next","tagName":"h2"},{"title":"Known problems","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#known-problems","content":" When you bundle the LokiJS Plugin with webpack, you might get the error Cannot find module "fs". This is because LokiJS uses a require('fs') statement that cannot work in the browser. You can fix that by telling webpack to not resolve the fs module with the following block in your webpack config: // in your webpack.config.js { /* ... */ resolve: { fallback: { fs: false } } /* ... */ } // Or if you do not have a webpack.config.js like you do with angular, // you might fix it by setting the browser field in the package.json { /* ... */ "browser": { "fs": false } /* ... */ } ","version":"Next","tagName":"h2"},{"title":"Using the internal LokiJS database","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#using-the-internal-lokijs-database","content":" For custom operations, you can access the internal LokiJS database. This is dangerous because you might do changes that are not compatible with RxDB. Only use this when there is no way to achieve your goals via the RxDB API. const storageInstance = myRxCollection.storageInstance; const localState = await storageInstance.internals.localState; localState.collection.insert({ key: 'foo', value: 'bar', _deleted: false, _attachments: {}, _rev: '1-62080c42d471e3d2625e49dcca3b8e3e', _meta: { lwt: new Date().getTime() } }); // manually trigger the save queue because we did a write to the internal loki db. await localState.databaseState.saveQueue.addWrite(); ","version":"Next","tagName":"h2"},{"title":"Disabling the non-premium console log","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#disabling-the-non-premium-console-log","content":" We want to be transparent with our community, and you'll notice a console message when using the free Loki.js based RxStorage implementation. This message serves to inform you about the availability of faster storage solutions within our 👑 Premium Plugins. We understand that this might be a minor inconvenience, and we sincerely apologize for that. However, maintaining and improving RxDB requires substantial resources, and our premium users help us ensure its sustainability. If you find value in RxDB and wish to remove this message, we encourage you to explore our premium storage options, which are optimized for professional use and production environments. Thank you for your understanding and support. If you already have premium access and want to use the Dexie.js RxStorage without the log, you can call the setPremiumFlag() function to disable the log. import { setPremiumFlag } from 'rxdb-premium/plugins/shared'; setPremiumFlag(); ","version":"Next","tagName":"h2"},{"title":"Memory Synced RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-memory-synced.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#pros","content":" Improves read/write performance because these operations run against the in-memory storage.Decreases initial page load because it load all data in a single bulk request. It even detects if the database is used for the first time and then it does not have to await the creation of the persistent storage. 
","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#cons","content":" It does not support attachments.When the JavaScript process is killed ungracefully like when the browser crashes or the power of the PC is terminated, it might happen that some memory writes are not persisted to the parent storage. This can be prevented with the awaitWritePersistence flag.This can only be used if all data fits into the memory of the JavaScript process. This is normally not a problem because a browser has much memory these days and plain json document data is not that big.Because it has to await an initial replication from the parent storage into the memory, initial page load time can increase when much data is already stored. This is likely not a problem when you store less than 10k documents.The memory-synced storage itself does not support replication and migration. Instead you have to replicate the underlying parent storage.The memory-synced plugin is part of RxDB Premium 👑. It is not part of the default RxDB module. The memory-synced RxStorage was removed in RxDB version 16 The memory-synced was removed in RxDB version 16. Instead consider using the newer and better memory-mapped RxStorage which has better trade-offs and is easier to configure. ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#usage","content":" import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getMemorySyncedRxStorage } from 'rxdb-premium/plugins/storage-memory-synced'; /** * Here we use the IndexedDB RxStorage as persistence storage. * Any other RxStorage can also be used. */ const parentStorage = getRxStorageIndexedDB(); // wrap the persistent storage with the memory synced one. const storage = getMemorySyncedRxStorage({ storage: parentStorage }); // create the RxDatabase like you would do with any other RxStorage const db = await createRxDatabase({ name: 'myDatabase, storage, }); /** ... **/ ","version":"Next","tagName":"h2"},{"title":"Options","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#options","content":" Some options can be provided to fine tune the performance and behavior. import { requestIdlePromise } from 'rxdb'; const storage = getMemorySyncedRxStorage({ storage: parentStorage, /** * Defines how many document * get replicated in a single batch. * [default=50] * * (optional) */ batchSize: 50, /** * By default, the parent storage will be created without indexes for a faster page load. * Indexes are not needed because the queries will anyway run on the memory storage. * You can disable this behavior by setting keepIndexesOnParent to true. * If you use the same parent storage for multiple RxDatabase instances where one is not * a asynced-memory storage, you will get the error: 'schema not equal to existing storage' * if you do not set keepIndexesOnParent to true. * * (optional) */ keepIndexesOnParent: true, /** * If set to true, all write operations will resolve AFTER the writes * have been persisted from the memory to the parentStorage. * This ensures writes are not lost even if the JavaScript process exits * between memory writes and the persistence interval. * default=false */ awaitWritePersistence: true, /** * After a write, await until the return value of this method resolves * before replicating with the master storage. 
* * By returning requestIdlePromise() we can ensure that the CPU is idle * and no other, more important operation is running. By doing so we can be sure * that the replication does not slow down any rendering of the browser process. * * (optional) */ waitBeforePersist: () => requestIdlePromise() }); ","version":"Next","tagName":"h2"},{"title":"Replication and Migration with the memory-synced storage","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#replication-and-migration-with-the-memory-synced-storage","content":" The memory-synced storage itself does not support replication and migration. Instead you have to replicate the underlying parent storage. For example when you use it on top of an IndexedDB storage, you have to run replication on that storage instead by creating a different RxDatabase. const parentStorage = getRxStorageIndexedDB(); const memorySyncedStorage = getMemorySyncedRxStorage({ storage: parentStorage, keepIndexesOnParent: true }); const databaseName = 'mydata'; /** * Create a parent database with the same name+collections * and use it for replication and migration. * The parent database must be created BEFORE the memory-synced database * to ensure migration has already been run. */ const parentDatabase = await createRxDatabase({ name: databaseName, storage: parentStorage }); await parentDatabase.addCollections(/* ... */); replicateRxCollection({ collection: parentDatabase.myCollection, /* ... */ }); /** * Create an equal memory-synced database with the same name+collections * and use it for writes and queries. */ const memoryDatabase = await createRxDatabase({ name: databaseName, storage: memorySyncedStorage }); await memoryDatabase.addCollections(/* ... */); ","version":"Next","tagName":"h2"},{"title":"MongoDB RxStorage (beta)","type":0,"sectionRef":"#","url":"/rx-storage-mongodb.html","content":"","keywords":"","version":"Next"},{"title":"Limitations of the MongoDB RxStorage","type":1,"pageTitle":"MongoDB RxStorage (beta)","url":"/rx-storage-mongodb.html#limitations-of-the-mongodb-rxstorage","content":" Multiple Node.js servers using the same MongoDB database are currently not supportedRxAttachments are currently not supportedDoing non-RxDB writes on the MongoDB database is not supported. RxDB expects all writes to come from RxDB which updates the required metadata. Doing non-RxDB writes can confuse the RxDatabase and lead to undefined behavior. But you can perform read-queries on the MongoDB storage from the outside at any time. ","version":"Next","tagName":"h2"},{"title":"Using the MongoDB RxStorage","type":1,"pageTitle":"MongoDB RxStorage (beta)","url":"/rx-storage-mongodb.html#using-the-mongodb-rxstorage","content":" To use the storage, you simply import the getRxStorageMongoDB method and use that when creating the RxDatabase. The connection parameter contains the MongoDB connection string.
import { createRxDatabase } from 'rxdb'; import { getRxStorageMongoDB } from 'rxdb/plugins/storage-mongodb'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMongoDB({ /** * MongoDB connection string * @link https://www.mongodb.com/docs/manual/reference/connection-string/ */ connection: 'mongodb://localhost:27017,localhost:27018,localhost:27019' }) }); ","version":"Next","tagName":"h2"},{"title":"📈 Discover RxDB Storage Benchmarks","type":0,"sectionRef":"#","url":"/rx-storage-performance.html","content":"","keywords":"","version":"Next"},{"title":"RxStorage Performance comparison","type":1,"pageTitle":"📈 Discover RxDB Storage Benchmarks","url":"/rx-storage-performance.html#rxstorage-performance-comparison","content":" A big difference between the RxStorage implementations is their performance. In contrast to a server-side database, RxDB is bound to the limits of the JavaScript runtime and, depending on the runtime, there are different possibilities to store and fetch data. For example in the browser it is only possible to store data in a slow IndexedDB or OPFS instead of a filesystem, while on React-Native you can use the SQLite storage. Therefore the performance can be completely different depending on where you use RxDB and what you do with it. Here you can see some performance measurements and descriptions of how the different storages work and how their performance differs. ","version":"Next","tagName":"h2"},{"title":"Persistent vs Semi-Persistent storages","type":1,"pageTitle":"📈 Discover RxDB Storage Benchmarks","url":"/rx-storage-performance.html#persistend-vs-semi-persistend-storages","content":" The "normal" storages are always persistent. This means each RxDB write is directly written to disc and all queries run on the disc state. This gives good startup performance because nothing has to be done on startup. In contrast, semi-persistent storages like memory-mapped store all data in memory on startup and only save to disc occasionally (or on exit). Therefore they have very fast read/write performance, but loading all data into memory on the first page load can take longer for big amounts of documents. Also these storages can only be used when all data fits into the memory at least once. In general it is recommended to stay on the persistent storages and only use semi-persistent ones when you know for sure that the dataset will stay small (less than 2k documents). ","version":"Next","tagName":"h2"},{"title":"Performance comparison","type":1,"pageTitle":"📈 Discover RxDB Storage Benchmarks","url":"/rx-storage-performance.html#performance-comparison","content":" In the following you can find some performance measurements and comparisons. Notice that these are only a small set of possible RxDB operations. If performance is really relevant for your use case, you should do your own measurements with usage patterns that are equal to how you use RxDB in production. ","version":"Next","tagName":"h2"},{"title":"Measurements","type":1,"pageTitle":"📈 Discover RxDB Storage Benchmarks","url":"/rx-storage-performance.html#measurements","content":" Here the following metrics are measured: time-to-first-insert: Many storages run lazy, so it makes no sense to compare the time which is required to create a database with collections. 
Instead we measure the time-to-first-insert which is the whole timespan from database creation until the first single document write is done. insert documents (bulk): Insert 500 documents with a single bulk-insert operation. find documents by id (bulk): Here we fetch 100% of the stored documents with a single findByIds() call. insert documents (serial): Insert 50 documents, one after each other. find documents by id (serial): Here we find 50 documents in serial with one findByIds() call per document. find documents by query: Here we fetch 100% of the stored documents with a single find() call. find documents by query (parallel): Here we fetch all of the stored documents with 4 find() calls that run in parallel, each fetching 25% of the documents. count documents: Counts 100% of the stored documents with a single count() call. Here we measure 4 runs at once to have a higher number that is easier to compare. ","version":"Next","tagName":"h3"},{"title":"Browser based Storages Performance Comparison","type":1,"pageTitle":"📈 Discover RxDB Storage Benchmarks","url":"/rx-storage-performance.html#browser-based-storages-performance-comparison","content":" The performance patterns of the browser based storages are very diverse. The IndexedDB storage is recommended for almost all use cases, so you should start with that one. Later you can do performance testing and switch to another storage like OPFS or memory-mapped. If you do not want to purchase RxDB Premium, you could use the slower Dexie.js based RxStorage instead. ","version":"Next","tagName":"h2"},{"title":"Node/Native based Storages Performance Comparison","type":1,"pageTitle":"📈 Discover RxDB Storage Benchmarks","url":"/rx-storage-performance.html#nodenative-based-storages-performance-comparison","content":" For most client-side native applications (react-native, electron, capacitor), using the SQLite RxStorage is recommended. For non-client side applications like a server, use the MongoDB storage instead. ","version":"Next","tagName":"h2"},{"title":"RxStorage PouchDB","type":0,"sectionRef":"#","url":"/rx-storage-pouchdb.html","content":"","keywords":"","version":"Next"},{"title":"Why is the PouchDB RxStorage deprecated?","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#why-is-the-pouchdb-rxstorage-deprecated","content":" When I started developing RxDB in 2016, I had a specific use case to solve. Because there was no client-side database out there that fitted, I created RxDB as a wrapper around PouchDB. This worked great and all the PouchDB features like the query engine, the adapter system, CouchDB-replication and so on, came for free. But over the years, it became clear that PouchDB is not suitable for many applications, mostly because of its performance: To be compliant with CouchDB, PouchDB has to store all revision trees of documents, which slows down queries. Also purging these document revisions is not possible, so the database storage size will only increase over time. Another problem was that many issues in PouchDB have never been fixed, but only closed by the issue-bot, like this one. The whole PouchDB RxStorage code was full of workarounds and monkey patches to resolve these issues for RxDB users. Many of these patches decreased performance even further. Sometimes it was not possible to fix things from the outside, for example queries with $gt operators return the wrong documents, which is a no-go for a production database and hard to debug. 
In version 10.0.0 RxDB introduced the RxStorage layer which allows users to swap out the underlying storage engine where RxDB stores and queries documents from. This allowed using alternatives to PouchDB, for example the Dexie RxStorage in browsers or even the FoundationDB RxStorage on the server side. There were not many use cases left where it was a good choice to use the PouchDB RxStorage. Only the replication with a CouchDB server was still only possible with PouchDB. But this has also changed. RxDB has a plugin that allows clients to replicate with any CouchDB server by using the RxDB replication protocol. This plugin works with any RxStorage so that it is not necessary to use the PouchDB storage. Removing PouchDB allows RxDB to add many long-awaited features like filtered change streams for easier replication and permission handling. It will also free up development time. If you are currently using the PouchDB RxStorage, you have these options: Migrate to another RxStorage (recommended). Never update RxDB to the next major version (stay on older 14.0.0). Fork the PouchDB RxStorage and maintain the plugin by yourself. Fix all the PouchDB problems so that we can add PouchDB to the RxDB Core again. ","version":"Next","tagName":"h2"},{"title":"Pros","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#pros","content":" Most battle-proven RxStorage. Supports replication with a CouchDB endpoint. Supports storing attachments. Big ecosystem of adapters. ","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#cons","content":" Big bundle size. Slow performance because of revision handling overhead. ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStoragePouch, addPouchPlugin } from 'rxdb/plugins/pouchdb'; addPouchPlugin(require('pouchdb-adapter-idb')); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStoragePouch( 'idb', { /** * other pouchdb specific options * @link https://pouchdb.com/api.html#create_database */ } ) }); ","version":"Next","tagName":"h2"},{"title":"Polyfill the global variable","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#polyfill-the-global-variable","content":" When you use RxDB with Angular or other webpack-based frameworks, you might get the error: Uncaught ReferenceError: global is not defined. This is because PouchDB assumes a Node.js-specific global variable that is not added to browser runtimes by some bundlers. You have to add it on your own, like we do here. (window as any).global = window; (window as any).process = { env: { DEBUG: undefined }, }; ","version":"Next","tagName":"h2"},{"title":"Adapters","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#adapters","content":" PouchDB has many adapters for all JavaScript runtimes. ","version":"Next","tagName":"h2"},{"title":"Using the internal PouchDB Database","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#using-the-internal-pouchdb-database","content":" For custom operations, you can access the internal PouchDB database. This is dangerous because you might do changes that are not compatible with RxDB. Only use this when there is no way to achieve your goals via the RxDB API. 
import { getPouchDBOfRxCollection } from 'rxdb/plugins/pouchdb'; const pouch = getPouchDBOfRxCollection(myRxCollection); ","version":"Next","tagName":"h2"},{"title":"Sharding RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-sharding.html","content":"","keywords":"","version":"Next"},{"title":"Using the sharding plugin","type":1,"pageTitle":"Sharding RxStorage","url":"/rx-storage-sharding.html#using-the-sharding-plugin","content":" import { getRxStorageSharding } from 'rxdb-premium/plugins/storage-sharding'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; /** * First wrap the original RxStorage with the sharding RxStorage. */ const shardedRxStorage = getRxStorageSharding({ /** * Here we use the dexie.js RxStorage, * it is also possible to use any other RxStorage instead. */ storage: getRxStorageDexie() }); /** * Add the sharding options to your schema. * Changing these options will require a data migration. */ const mySchema = { /* ... */ sharding: { /** * Amount of shards per RxStorage instance. * Depending on your data size and query patterns, the optimal shard amount may differ. * Do a performance test to optimize that value. * 10 Shards is a good value to start with. * * IMPORTANT: Changing the value of shards is not possible on a already existing database state, * you will loose access to your data. */ shards: 10, /** * Sharding mode, * you can either shard by collection or by database. * For most cases you should use 'collection' which will shard on the collection level. * For example with the IndexedDB RxStorage, it will then create multiple stores per IndexedDB database * and not multiple IndexedDB databases, which would be slower. */ mode: 'collection' } /* ... */ } /** * Create the RxDatabase with the wrapped RxStorage. */ const database = await createRxDatabase({ name: 'mydatabase', storage: shardedRxStorage }); ","version":"Next","tagName":"h2"},{"title":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-opfs.html","content":"","keywords":"","version":"Next"},{"title":"What is OPFS","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#what-is-opfs","content":" The Origin Private File System (OPFS) is a native browser storage API that allows web applications to manage files in a private, sandboxed, origin-specific virtual filesystem. Unlike IndexedDB and LocalStorage, which are optimized as object/key-value storage, OPFS provides more granular control for file operations, enabling byte-by-byte access, file streaming, and even low-level manipulations. OPFS is ideal for applications requiring high-performance file operations (3x-4x faster compared to IndexedDB) inside of a client-side application, offering advantages like improved speed, more efficient use of resources, and enhanced security and privacy features. ","version":"Next","tagName":"h2"},{"title":"OPFS limitations","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#opfs-limitations","content":" From the beginning of 2023, the Origin Private File System API is supported by all modern browsers like Safari, Chrome, Edge and Firefox. Only Internet Explorer is not supported and likely will never get support. It is important to know that the most performant synchronous methods like read() and write() of the OPFS API are only available inside of a WebWorker. 
They cannot be used in the main thread, an iFrame or even a SharedWorker. The OPFS createSyncAccessHandle() method that gives you access to the synchronous methods is not exposed in the main thread, only in a Worker. While there is no concrete data size limit defined by the API, browsers will refuse to store more data at some point. If no more data can be written, a QuotaExceededError is thrown which should be handled by the application, like showing an error message to the user. ","version":"Next","tagName":"h3"},{"title":"How the OPFS API works","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#how-the-opfs-api-works","content":" The OPFS API is pretty straightforward to use. First you get the root filesystem. Then you can create files and directories on that. Notice that whenever you synchronously write to, or read from a file, an ArrayBuffer must be used that contains the data. It is not possible to synchronously write plain strings or objects into the file. Therefore the TextEncoder and TextDecoder API must be used. Also notice that some of the methods of FileSystemSyncAccessHandlehave been asynchronous in the past, but are synchronous since Chromium 108. To make it less confusing, we just use await in front of them, so it will work in both cases. // Access the root directory of the origin's private file system. const root = await navigator.storage.getDirectory(); // Create a subdirectory. const diaryDirectory = await root.getDirectoryHandle('subfolder', { create: true, }); // Create a new file named 'example.txt'. const fileHandle = await diaryDirectory.getFileHandle('example.txt', { create: true, }); // Create a FileSystemSyncAccessHandle on the file. const accessHandle = await fileHandle.createSyncAccessHandle(); // Write a sentence to the file. let writeBuffer = new TextEncoder().encode('Hello from RxDB'); const writeSize = accessHandle.write(writeBuffer); // Read file and transform data to string. const readBuffer = new Uint8Array(writeSize); const readSize = accessHandle.read(readBuffer, { at: 0 }); const contentAsString = new TextDecoder().decode(readBuffer); // Write an exclamation mark to the end of the file. writeBuffer = new TextEncoder().encode('!'); accessHandle.write(writeBuffer, { at: readSize }); // Truncate file to 10 bytes. await accessHandle.truncate(10); // Get the new size of the file. const fileSize = await accessHandle.getSize(); // Persist changes to disk. await accessHandle.flush(); // Always close FileSystemSyncAccessHandle if done, so others can open the file again. await accessHandle.close(); A more detailed description of the OPFS API can be found on MDN. ","version":"Next","tagName":"h2"},{"title":"OPFS performance","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#opfs-performance","content":" Because the Origin Private File System API provides low-level access to binary files, it is much faster compared to IndexedDB or localStorage. According to the storage performance test, OPFS is up to 2x times faster on plain inserts when a new file is created on each write. Reads are even faster. A good comparison about real world scenarios, are the performance results of the various RxDB storages. 
Here it shows that reads are up to 4x faster compared to IndexedDB, even with complex queries: ","version":"Next","tagName":"h2"},{"title":"Using OPFS as RxStorage in RxDB","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#using-opfs-as-rxstorage-in-rxdb","content":" The OPFS RxStorage itself must run inside a WebWorker. Therefore we use the Worker RxStorage and let it point to the prebuild opfs.worker.js file that comes shipped with RxDB Premium 👑. Notice that the OPFS RxStorage is part of the RxDB Premium 👑 plugin that must be purchased. import { createRxDatabase } from 'rxdb'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageWorker( { /** * This file must be statically served from a webserver. * You might want to first copy it somewhere outside of * your node_modules folder. */ workerInput: 'node_modules/rxdb-premium/dist/workers/opfs.worker.js' } ) }); ","version":"Next","tagName":"h2"},{"title":"Using OPFS in the main thread instead of a worker","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#using-opfs-in-the-main-thread-instead-of-a-worker","content":" The createSyncAccessHandle method from the Filesystem API is only available inside of a Webworker. Therefore you cannot use getRxStorageOPFS() in the main thread. But there is a slightly slower way to access the virtual filesystem from the main thread. RxDB support the getRxStorageOPFSMainThread() for that. Notice that this uses the createWritable function which is not supported in safari. Using OPFS from the main thread can have benefits because not having to cross the worker bridge can reduce latence in reads and writes. import { createRxDatabase } from 'rxdb'; import { getRxStorageOPFSMainThread } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageOPFSMainThread() }); ","version":"Next","tagName":"h2"},{"title":"Building a custom worker.js","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#building-a-custom-workerjs","content":" When you want to run additional plugins like storage wrappers or replication inside of the worker, you have to build your own worker.js file. You can do that similar to other workers by calling exposeWorkerRxStorage like described in the worker storage plugin. // inside of the worker.js file import { getRxStorageOPFS } from 'rxdb-premium/plugins/storage-opfs'; import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; const storage = getRxStorageOPFS(); exposeWorkerRxStorage({ storage }); ","version":"Next","tagName":"h2"},{"title":"Setting usesRxDatabaseInWorker when a RxDatabase is also used inside of the worker","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#setting-usesrxdatabaseinworker-when-a-rxdatabase-is-also-used-inside-of-the-worker","content":" When you use the OPFS inside of a worker, it will internally use strings to represent operation results. This has the benefit that transferring strings from the worker to the main thread, is way faster compared to complex json objects. 
The getRxStorageWorker() will automatically decode these strings on the main thread so that the data can be used by the RxDatabase. But using a RxDatabase inside of your worker can make sense, for example when you want to move the replication with a server into the worker. To enable this, you have to set usesRxDatabaseInWorker to true: // inside of the worker.js file import { getRxStorageOPFS } from 'rxdb-premium/plugins/storage-opfs'; const storage = getRxStorageOPFS({ usesRxDatabaseInWorker: true }); If you forget to set this and still create and use a RxDatabase inside of the worker, you might get an error message like Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'length'). ","version":"Next","tagName":"h2"},{"title":"OPFS in Electron, React-Native or Capacitor.js","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#opfs-in-electron-react-native-or-capacitorjs","content":" Origin Private File System is a browser API that is only accessible in browsers. Other JavaScript runtimes like React-Native or Node.js do not support it. Electron has two JavaScript contexts: the browser (chromium) context and the Node.js context. While you could use the OPFS API in the browser context, it is not recommended. Instead you should use the Filesystem API of Node.js and then only transfer the relevant data with the ipcRenderer. With RxDB that is pretty easy to configure: In the main.js, expose the Node Filesystem storage with the exposeIpcMainRxStorage() that comes with the electron plugin. In the browser context, access the main storage with the getRxStorageIpcRenderer() method. React Native (and Expo) does not have an OPFS API. You could use the ReactNative Filesystem to directly write data. But to get a fully featured database like RxDB it is easier to use the SQLite RxStorage which starts an SQLite database inside of the ReactNative app and uses that to do the database operations. Capacitor.js is able to access the OPFS API. ","version":"Next","tagName":"h2"},{"title":"Difference between File System Access API and Origin Private File System (OPFS)","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#difference-between-file-system-access-api-and-origin-private-file-system-opfs","content":" Often developers are confused by the differences between the File System Access API and the Origin Private File System (OPFS). The File System Access API provides access to the files on the device file system, like the ones shown in the file explorer of the operating system. To use the File System API, the user has to actively select the files from a filepicker. Origin Private File System (OPFS) is a sub-part of the File System Standard and it only describes the things you can do with the filesystem root from navigator.storage.getDirectory(). OPFS writes to a sandboxed filesystem, not visible to the user. Therefore the user does not have to actively select or allow the data access. 
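To make the difference more tangible, here is a minimal sketch (not taken from the original docs) that contrasts the two APIs. It assumes a Chromium-based browser, because window.showOpenFilePicker() is not available in all browsers, and the picker call must happen in response to a user gesture:
// File System Access API: the user must actively pick a real, user-visible file via a picker.
const [pickedHandle] = await window.showOpenFilePicker();
const pickedFile = await pickedHandle.getFile();
console.log('user-selected file:', pickedFile.name);
// OPFS: no picker and no permission prompt, the origin gets its own sandboxed root directory.
const opfsRoot = await navigator.storage.getDirectory();
const hiddenFile = await opfsRoot.getFileHandle('rxdb-example.txt', { create: true });
console.log('sandboxed OPFS file:', hiddenFile.name);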
","version":"Next","tagName":"h2"},{"title":"Learn more about OPFS:","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#learn-more-about-opfs","content":" WebKit: The File System API with Origin Private File SystemBrowser SupportPerformance Test Tool ","version":"Next","tagName":"h2"},{"title":"Remote RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-remote.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Remote RxStorage","url":"/rx-storage-remote.html#usage","content":" The remote storage communicates over a message channel which has to implement the messageChannelCreator function which returns an object that has a messages$ observable and a send() function on both sides and a close() function that closes the RemoteMessageChannel. // on the client import { getRxStorageRemote } from 'rxdb/plugins/storage-remote'; const storage = getRxStorageRemote({ identifier: 'my-id', mode: 'storage', messageChannelCreator: () => Promise.resolve({ messages$: new Subject(), send(msg) { // send to remote storage } }) }); const myDb = await createRxDatabase({ storage }); // on the remote import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { exposeRxStorageRemote } from 'rxdb/plugins/storage-remote'; exposeRxStorageRemote({ storage: getRxStorageDexie(), messages$: new Subject(), send(msg){ // send to other side } }); ","version":"Next","tagName":"h2"},{"title":"Usage with a Websocket server","type":1,"pageTitle":"Remote RxStorage","url":"/rx-storage-remote.html#usage-with-a-websocket-server","content":" The remote storage plugin contains helper functions to create a remote storage over a WebSocket server. This is often used in Node.js to give one microservice access to another services database without having to replicate the full database state. // server.js import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; import { startRxStorageRemoteWebsocketServer } from 'rxdb/plugins/storage-remote-websocket'; // either you can create the server based on a RxDatabase const serverBasedOnDatabase = await startRxStorageRemoteWebsocketServer({ port: 8080, database: myRxDatabase }); // or you can create the server based on a pure RxStorage const serverBasedOn = await startRxStorageRemoteWebsocketServer({ port: 8080, storage: getRxStorageMemory() }); // client.js import { getRxStorageRemoteWebsocket } from 'rxdb/plugins/storage-remote-websocket'; const myDb = await createRxDatabase({ storage: getRxStorageRemoteWebsocket({ url: 'ws://example.com:8080' }) }); ","version":"Next","tagName":"h2"},{"title":"Sending custom messages","type":1,"pageTitle":"Remote RxStorage","url":"/rx-storage-remote.html#sending-custom-messages","content":" The remote storage can also be used to send custom messages to and from the remote instance. 
On the remote side you have to define a customRequestHandler like: const serverBasedOnDatabase = await startRxStorageRemoteWebsocketServer({ port: 8080, database: myRxDatabase, async customRequestHandler(msg){ // here you can return any JSON object as an 'answer' return { foo: 'bar' }; } }); On the client instance you can then call the customRequest() method: const storage = getRxStorageRemoteWebsocket({ url: 'ws://example.com:8080' }); const answer = await storage.customRequest({ bar: 'foo' }); console.dir(answer); // > { foo: 'bar' } ","version":"Next","tagName":"h2"},{"title":"SharedWorker RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-shared-worker.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#usage","content":" ","version":"Next","tagName":"h2"},{"title":"On the SharedWorker process","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#on-the-sharedworker-process","content":" In the worker process JavaScript file, you have to expose the original RxStorage (here getRxStorageIndexedDB()) with exposeWorkerRxStorage(). // shared-worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; exposeWorkerRxStorage({ /** * You can wrap any implementation of the RxStorage interface * into a worker. * Here we use the IndexedDB RxStorage. */ storage: getRxStorageIndexedDB() }); ","version":"Next","tagName":"h3"},{"title":"On the main process","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#on-the-main-process","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageSharedWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageSharedWorker( { /** * Contains any value that can be used as parameter * to the SharedWorker constructor of thread.js * Most likely you want to put the path to the shared-worker.js file in here. * * @link https://developer.mozilla.org/en-US/docs/Web/API/SharedWorker?retiredLocale=de */ workerInput: 'path/to/shared-worker.js', /** * (Optional) options * for the worker. */ workerOptions: { type: 'module', credentials: 'omit' } } ) }); ","version":"Next","tagName":"h3"},{"title":"Pre-build workers","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#pre-build-workers","content":" The shared-worker.js must be a self-contained JavaScript file that contains all dependencies in a bundle. To make it easier for you, RxDB ships with pre-bundled worker files that are ready to use. You can find them in the folder node_modules/rxdb-premium/dist/workers after you have installed the RxDB Premium 👑 Plugin. From there you can copy them to a location where they can be served by the webserver and then use their path to create the RxDatabase. Any valid worker.js JavaScript file can be used for both normal Workers and SharedWorkers. import { createRxDatabase } from 'rxdb'; import { getRxStorageSharedWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageSharedWorker( { /** * Path to where the copied file from node_modules/rxdb-premium/dist/workers * is reachable from the webserver. 
*/ workerInput: '/indexeddb.shared-worker.js' } ) }); ","version":"Next","tagName":"h2"},{"title":"Building a custom worker","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#building-a-custom-worker","content":" To build a custom worker.js file, check out the webpack config at the worker documentation. Any worker file from the worker storage can also be used in a shared worker because exposeWorkerRxStorage detects where it runs and exposes the correct messaging endpoints. ","version":"Next","tagName":"h2"},{"title":"Passing in a SharedWorker instance","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#passing-in-a-sharedworker-instance","content":" Instead of setting a URL as workerInput, you can also specify a function that returns a new SharedWorker instance when called. This is mostly used when you have a custom worker file and dynamically import it. This works the same as the workerInput of the Worker Storage. ","version":"Next","tagName":"h2"},{"title":"Set multiInstance: false","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#set-multiinstance-false","content":" When you know that you only ever create your RxDatabase inside of the shared worker, you might want to set multiInstance: false to prevent sending change events across JavaScript realms and to improve performance. Do not set this when you also create the same storage on another realm, like when you have the same RxDatabase once inside the shared worker and once on the main thread. ","version":"Next","tagName":"h2"},{"title":"Replication with SharedWorker","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#replication-with-sharedworker","content":" When a SharedWorker RxStorage is used, it is recommended to run the replication inside of the worker. This is the best option for performance. You can do that by opening another RxDatabase inside of it and starting the replication there. If you are not concerned about performance, you can still start replication on the main thread instead. But you should never run replication on both the main thread and the worker. // shared-worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { createRxDatabase, addRxPlugin } from 'rxdb'; import { RxDBReplicationGraphQLPlugin } from 'rxdb/plugins/replication-graphql'; addRxPlugin(RxDBReplicationGraphQLPlugin); const baseStorage = getRxStorageIndexedDB(); // first expose the RxStorage to the outside exposeWorkerRxStorage({ storage: baseStorage }); /** * Then create a normal RxDatabase and RxCollections * and start the replication. */ const database = await createRxDatabase({ name: 'mydatabase', storage: baseStorage }); await database.addCollections({ humans: {/* ... */} }); const replicationState = database.humans.syncGraphQL({/* ... */}); ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#limitations","content":" The SharedWorker API is not available in some mobile browsers ","version":"Next","tagName":"h3"},{"title":"FAQ","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#faq","content":" Can I use this plugin with a Service Worker? No. A Service Worker is not the same as a Shared Worker. 
While you can use RxDB inside of a ServiceWorker, you cannot use the ServiceWorker as a RxStorage that gets accessed by an outside RxDatabase instance. ","version":"Next","tagName":"h3"},{"title":"RxDB Tradeoffs","type":0,"sectionRef":"#","url":"/rxdb-tradeoffs.html","content":"","keywords":"","version":"Next"},{"title":"Why not SQL syntax","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-not-sql-syntax","content":" When you ask people which database they would want for browsers, the most answer I hear is something SQL based like SQLite. This makes sense, SQL is a query language that most developers had learned in school/university and it is reusable across various database solutions. But for RxDB (and other client side databases), using SQL is not a good option and instead it operates on document writes and the JSON based Mango-query syntax for querying. // A Mango Query const query = { selector: { age: { $gt: 10 }, lastName: 'foo' }, sort: [{ age: 'asc' }] }; ","version":"Next","tagName":"h2"},{"title":"SQL is made for database servers","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#sql-is-made-for-database-servers","content":" SQL is made to be used to run operations against a database server. You send a SQL string like SELECT SUM(column_name)... to the database server and the server then runs all operations required to calculate the result and only send back that result. This saves performance on the application side and ensures that the application itself is not blocked. But RxDB is a client-side database that runs inside of the application. There is no performance difference if the SUM() query is run inside of the database or at the application level where a Array.reduce() call calculates the result. ","version":"Next","tagName":"h3"},{"title":"Typescript support","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#typescript-support","content":" SQL is string based and therefore you need additional IDE tooling to ensure that your written database code is valid. Using the Mango Query syntax instead, TypeScript can be used validate the queries and to autocomplete code and knows which fields do exist and which do not. By doing so, the correctness of queries can be ensured at compile-time instead of run-time. ","version":"Next","tagName":"h3"},{"title":"Composeable queries","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#composeable-queries","content":" By using JSON based Mango Queries, it is easy to compose queries in plain JavaScript. For example if you have any given query and want to add the condition user MUST BE 'foobar', you can just add the condition to the selector without having to parse and understand a complex SQL string. query.selector.user = 'foobar'; Even merging the selectors of multiple queries is not a problem: queryA.selector = { $and: [ queryA.selector, queryB.selector ] }; ","version":"Next","tagName":"h3"},{"title":"Why Document based (NoSQL)","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-document-based-nosql","content":" Like other NoSQL databases, RxDB operates data on document level. It has no concept of tables, rows and columns. Instead we have collections, documents and fields. 
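As a rough illustration of the collection/document/field model, here is a minimal sketch. The 'heroes' collection and its fields are made up for this example and are not part of the original text; it uses the memory storage and the regular RxDB collection API:
import { createRxDatabase } from 'rxdb';
import { getRxStorageMemory } from 'rxdb/plugins/storage-memory';
const db = await createRxDatabase({ name: 'tradeoffsdemo', storage: getRxStorageMemory() });
await db.addCollections({
  heroes: {
    schema: {
      version: 0,
      primaryKey: 'id',
      type: 'object',
      properties: {
        id: { type: 'string', maxLength: 100 },
        name: { type: 'string' },
        age: { type: 'number' }
      },
      required: ['id', 'name']
    }
  }
});
// Writes happen on the document level, not on rows and columns.
await db.heroes.insert({ id: 'hero-1', name: 'foo', age: 42 });
// Queries use the Mango syntax shown above and return document objects.
const adults = await db.heroes.find({ selector: { age: { $gt: 18 } } }).exec();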
","version":"Next","tagName":"h2"},{"title":"Javascript is made to work with objects","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#javascript-is-made-to-work-with-objects","content":" ","version":"Next","tagName":"h3"},{"title":"Caching","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#caching","content":" ","version":"Next","tagName":"h3"},{"title":"EventReduce","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#eventreduce","content":" ","version":"Next","tagName":"h3"},{"title":"Easier to use with typescript","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#easier-to-use-with-typescript","content":" Because of the document based approach, TypeScript can know the exact type of the query response while a SQL query could return anything from a number over a set of rows or a complex construct. ","version":"Next","tagName":"h3"},{"title":"Why no transactions","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-no-transactions","content":" Does not work with offline-firstDoes not work with multi-tabEasier conflict handling on document level -- Instead of transactions, rxdb works with revisions ","version":"Next","tagName":"h2"},{"title":"Why no relations","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-no-relations","content":" Does not work with easy replication ","version":"Next","tagName":"h2"},{"title":"Why is a schema required","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-is-a-schema-required","content":" migration of data on clients is hardWhy jsonschema ","version":"Next","tagName":"h2"},{"title":"","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html##","content":"","version":"Next","tagName":"h2"},{"title":"RxStorage","type":0,"sectionRef":"#","url":"/rx-storage.html","content":"","keywords":"","version":"Next"},{"title":"Quick Recommendations","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#quick-recommendations","content":" In the Browser: Use the IndexedDB RxStorage if you have 👑 premium access, otherwise use the Dexie.js storage.In Electron and ReactNative: Use the SQLite RxStorage if you have 👑 premium access or the memory RxStorage for tryouts.In Capacitor: Use the SQLite RxStorage if you have 👑 premium access, otherwise use the Dexie.js storage. ","version":"Next","tagName":"h2"},{"title":"Configuration Examples","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#configuration-examples","content":" The RxStorage layer of RxDB is very flexible. Here are some examples on how to configure more complex settings: ","version":"Next","tagName":"h2"},{"title":"Storing much data in a browser securely","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#storing-much-data-in-a-browser-securely","content":" Lets say you build a browser app that needs to store a big amount of data as secure as possible. Here we can use a combination of the storages (encryption, IndexedDB, compression, schema-checks) that increase security and reduce the stored data size. We use the schema-validation on the top level to ensure schema-errors are clearly readable and do not contain encrypted/compressed data. The encryption is used inside of the compression because encryption of compressed data is more efficient. 
import { wrappedValidateAjvStorage } from 'rxdb/plugins/validate-ajv'; import { wrappedKeyCompressionStorage } from 'rxdb/plugins/key-compression'; import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; const myDatabase = await createRxDatabase({ storage: wrappedValidateAjvStorage({ storage: wrappedKeyCompressionStorage({ storage: wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageIndexedDB() }) }) }) }); ","version":"Next","tagName":"h3"},{"title":"High query Load","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#high-query-load","content":" Also we can utilize a combination of storages to create a database that is optimized to run complex queries on the data really fast. Here we use the shardingstorage together with the worker storage. This allows to run queries in parallel multithreading instead of a single JavaScript process. Because the worker initialization can slow down the initial page load, we also use the localstorage-meta-optimizer to improve initialization time. import { getRxStorageSharding } from 'rxdb-premium/plugins/storage-sharding'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; const myDatabase = await createRxDatabase({ storage: getLocalstorageMetaOptimizerRxStorage({ storage: getRxStorageSharding({ storage: getRxStorageWorker({ workerInput: 'path/to/worker.js', storage: getRxStorageIndexedDB() }) }) }) }); ","version":"Next","tagName":"h3"},{"title":"Low Latency on Writes and Simple Reads","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#low-latency-on-writes-and-simple-reads","content":" Here we create a storage configuration that is optimized to have a low latency on simple reads and writes. It uses the memory-mapped storage to fetch and store data in memory. For persistence the OPFS storage is used in the main thread which has lower latency for fetching big chunks of data when at initialization the data is loaded from disc into memory. We do not use workers because sending data from the main thread to workers and backwards would increase the latency. import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; import { getRxStorageOPFSMainThread } from 'rxdb-premium/plugins/storage-worker'; const myDatabase = await createRxDatabase({ storage: getLocalstorageMetaOptimizerRxStorage({ storage: getMemoryMappedRxStorage({ storage: getRxStorageOPFSMainThread() }) }) }); ","version":"Next","tagName":"h3"},{"title":"All RxStorage Implementations List","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#all-rxstorage-implementations-list","content":" ","version":"Next","tagName":"h2"},{"title":"Dexie.js","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#dexiejs","content":" The Dexie.js based storage is based on the Dexie.js IndexedDB wrapper. It stores the data inside of a browsers IndexedDB database and has a very small bundle size. If you are new to RxDB, you should start with the Dexie.js RxStorage. 
Read more ","version":"Next","tagName":"h3"},{"title":"Memory","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#memory","content":" A storage that stores the data in as plain data in the memory of the JavaScript process. Really fast and can be used in all environments. Read more ","version":"Next","tagName":"h3"},{"title":"👑 IndexedDB","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-indexeddb","content":" The IndexedDB RxStorage is based on plain IndexedDB. This has a better performance than the Dexie.js storage, but it is slower compared to the OPFS storage. Read more ","version":"Next","tagName":"h3"},{"title":"👑 OPFS","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-opfs","content":" The OPFS RxStorage is based on the File System Access API. This has the best performance of all other non-in-memory storage, when RxDB is used inside of a browser. Read more ","version":"Next","tagName":"h3"},{"title":"👑 SQLite","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-sqlite","content":" The SQLite storage has great performance when RxDB is used on Node.js, Electron, React Native, Cordova or Capacitor. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Filesystem Node","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-filesystem-node","content":" The Filesystem Node storage is best suited when you use RxDB in a Node.js process or with electron.js. Read more ","version":"Next","tagName":"h3"},{"title":"MongoDB","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#mongodb","content":" To use RxDB on the server side, the MongoDB RxStorage provides a way of having a secure, scalable and performant storage based on the popular MongoDB NoSQL database Read more ","version":"Next","tagName":"h3"},{"title":"DenoKV","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#denokv","content":" To use RxDB in Deno. The DenoKV RxStorage provides a way of having a secure, scalable and performant storage based on the Deno Key Value Store. Read more ","version":"Next","tagName":"h3"},{"title":"FoundationDB","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#foundationdb","content":" To use RxDB on the server side, the FoundationDB RxStorage provides a way of having a secure, fault-tolerant and performant storage. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Worker","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-worker","content":" The worker RxStorage is a wrapper around any other RxStorage which allows to run the storage in a WebWorker (in browsers) or a Worker Thread (in Node.js). By doing so, you can take CPU load from the main process and move it into the worker's process which can improve the perceived performance of your application. Read more ","version":"Next","tagName":"h3"},{"title":"👑 SharedWorker","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-sharedworker","content":" The worker RxStorage is a wrapper around any other RxStorage which allows to run the storage in a SharedWorker (only in browsers). By doing so, you can take CPU load from the main process and move it into the worker's process which can improve the perceived performance of your application. Read more ","version":"Next","tagName":"h3"},{"title":"Remote","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#remote","content":" The Remote RxStorage is made to use a remote storage and communicate with it over an asynchronous message channel. The remote part could be on another JavaScript process or even on a different host machine. 
Mostly used internally in other storages like Worker or Electron-ipc. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Sharding","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-sharding","content":" On some RxStorage implementations (like IndexedDB), a huge performance improvement can be done by sharding the documents into multiple database instances. With the sharding plugin you can wrap any other RxStorage into a sharded storage. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Memory Mapped","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-memory-mapped","content":" The memory-mapped RxStorage is a wrapper around any other RxStorage. The wrapper creates an in-memory storage that is used for query and write operations. This memory instance stores its data in an underlying storage for persistence. The main reason to use this is to improve query/write performance while still having the data stored on disc. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Localstorage Meta Optimizer","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-localstorage-meta-optimizer","content":" The RxStorage Localstorage Meta Optimizer is a wrapper around any other RxStorage. The wrapper uses the original RxStorage for normal collection documents. But to optimize the initial page load time, it uses localstorage to store the plain key-value metadata that RxDB needs to create databases and collections. This plugin can only be used in browsers. Read more ","version":"Next","tagName":"h3"},{"title":"Electron IpcRenderer & IpcMain","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#electron-ipcrenderer--ipcmain","content":" To use RxDB in electron, it is recommended to run the RxStorage in the main process and the RxDatabase in the renderer processes. With the rxdb electron plugin you can create a remote RxStorage and consume it from the renderer process. Read more ","version":"Next","tagName":"h3"},{"title":"SQLite RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-sqlite.html","content":"","keywords":"","version":"Next"},{"title":"Performance comparison with other storages","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#performance-comparison-with-other-storages","content":" The SQLite storage is a bit slower compared to other Node.js based storages like the Filesystem Storage because wrapping SQLite has a bit of overhead and sending data from the JavaScript process to SQLite and backwards increases the latency. However for most hybrid apps the SQLite storage is the best option because it can leverage the SQLite version that comes already installed on the smartphones OS (iOS and android). Also for desktop electron apps it can be a viable solution because it is easy to ship SQLite together inside of the electron bundle. ","version":"Next","tagName":"h2"},{"title":"Using the SQLite RxStorage","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#using-the-sqlite-rxstorage","content":" To use the SQLite storage you have to import getRxStorageSQLite from the RxDB Premium 👑 package and then add the correct sqliteBasics adapter depending on which sqlite module you want to use. This can then be used as storage when creating the RxDatabase. In the following you can see some examples for some of the most common SQLite packages. 
","version":"Next","tagName":"h2"},{"title":"Usage with the sqlite3 npm package","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-the-sqlite3-npm-package","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsNode } from 'rxdb-premium/plugins/storage-sqlite'; /** * In Node.js, we use the SQLite database * from the 'sqlite' npm module. * @link https://www.npmjs.com/package/sqlite3 */ import sqlite3 from 'sqlite3'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ /** * Different runtimes have different interfaces to SQLite. * For example in node.js we have a callback API, * while in capacitor sqlite we have Promises. * So we need a helper object that is capable of doing the basic * sqlite operations. */ sqliteBasics: getSQLiteBasicsNode(sqlite3) }) }); ","version":"Next","tagName":"h2"},{"title":"Usage with the node:sqlite package","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-the-node-package","content":" With Node.js version 22 and newer, you can use the "native" sqlite module that comes shipped with Node.js. import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsNodeNative } from 'rxdb-premium/plugins/storage-sqlite'; import sqlite from 'node:sqlite'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsNodeNative(sqlite.DatabaseSync) }) }); ","version":"Next","tagName":"h2"},{"title":"Usage with Webassembly in the Browser","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-webassembly-in-the-browser","content":" In the browser you can use the wa-sqlite package to run sQLite in Webassembly. The wa-sqlite module also allows to use persistence with IndexedDB or OPFS. Notice that in general SQLite via Webassembly is slower compared to other storages like IndexedDB or OPFS because sending data from the main thread to wasm and backwards is slow in the browser. Have a look the performance comparison. import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsWasm } from 'rxdb-premium/plugins/storage-sqlite'; /** * In the Browser, we use the SQLite database * from the 'wa-sqlite' npm module. 
This contains the SQLite library * compiled to Webassembly * @link https://www.npmjs.com/package/wa-sqlite */ import SQLiteESMFactory from 'wa-sqlite/dist/wa-sqlite-async.mjs'; import SQLite from 'wa-sqlite'; const sqliteModule = await SQLiteESMFactory(); const sqlite3 = SQLite.Factory(sqliteModule); const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsWasm(sqlite3) }) }); ","version":"Next","tagName":"h2"},{"title":"Usage with React Native","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-react-native","content":" Install the react-native-quick-sqlite npm module. Import getSQLiteBasicsQuickSQLite from the SQLite plugin and use it to create a RxDatabase: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsQuickSQLite } from 'rxdb-premium/plugins/storage-sqlite'; import { open } from 'react-native-quick-sqlite'; // create database const myRxDatabase = await createRxDatabase({ name: 'exampledb', multiInstance: false, // <- Set multiInstance to false when using RxDB in React Native storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsQuickSQLite(open) }) }); If react-native-quick-sqlite does not work for you, as an alternative you can use the react-native-sqlite-2 library instead: import { getRxStorageSQLite, getSQLiteBasicsWebSQL } from 'rxdb-premium/plugins/storage-sqlite'; import SQLite from 'react-native-sqlite-2'; const storage = getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsWebSQL(SQLite.openDatabase) }); ","version":"Next","tagName":"h2"},{"title":"Usage with Expo SQLite","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-expo-sqlite","content":" Notice that expo-sqlite cannot be used on Android (but it works on iOS) if you use an Expo SDK version older than 50. Please update to version 50 or newer to use it. In the latest Expo SDK version, use the getSQLiteBasicsExpoSQLiteAsync() method: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsExpoSQLiteAsync } from 'rxdb-premium/plugins/storage-sqlite'; import * as SQLite from 'expo-sqlite'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', multiInstance: false, storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsExpoSQLiteAsync(SQLite.openDatabaseAsync) }) }); In older Expo SDK versions, you might have to use the non-async API: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsExpoSQLite } from 'rxdb-premium/plugins/storage-sqlite'; import { openDatabase } from 'expo-sqlite'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', multiInstance: false, storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsExpoSQLite(openDatabase) }) }); ","version":"Next","tagName":"h2"},{"title":"Usage with SQLite Capacitor","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-sqlite-capacitor","content":" Install the sqlite capacitor npm module. Add the iOS database location to your capacitor config: { "plugins": { "CapacitorSQLite": { "iosDatabaseLocation": "Library/CapacitorDatabase" } } } Use the function getSQLiteBasicsCapacitor to get the Capacitor SQLite wrapper. import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsCapacitor } from 'rxdb-premium/plugins/storage-sqlite'; /** * Import SQLite from the capacitor plugin. 
*/ import { CapacitorSQLite, SQLiteConnection } from '@capacitor-community/sqlite'; import { Capacitor } from '@capacitor/core'; const sqlite = new SQLiteConnection(CapacitorSQLite); const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ /** * Different runtimes have different interfaces to SQLite. * For example in node.js we have a callback API, * while in capacitor sqlite we have Promises. * So we need a helper object that is capable of doing the basic * sqlite operations. */ sqliteBasics: getSQLiteBasicsCapacitor(sqlite, Capacitor) }) }); ","version":"Next","tagName":"h2"},{"title":"Database Connection","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#database-connection","content":" If you need to access the database connection for any reason, you can use getDatabaseConnection to do so: import { getDatabaseConnection } from 'rxdb-premium/plugins/storage-sqlite' It has the following signature: getDatabaseConnection( sqliteBasics: SQLiteBasics<any>, databaseName: string ): Promise<SQLiteDatabaseClass>; ","version":"Next","tagName":"h2"},{"title":"Known Problems of SQLite in JavaScript apps","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#known-problems-of-sqlite-in-javascript-apps","content":" Some JavaScript runtimes do not contain a Buffer API which is used by SQLite to store binary attachments data as BLOB. You can set storeAttachmentsAsBase64String: true if you want to store the attachments data as a base64 string instead. This increases the database size but makes it work even without having a Buffer. The SQLite RxStorage works with SQLite libraries that use SQLite in version 3.38.0 (2022-02-22) or newer, because it uses the SQLite JSON methods like JSON_EXTRACT. If you get an error like [Error: no such function: JSON_EXTRACT (code 1 SQLITE_ERROR[1])], you might have an outdated version of SQLite. To debug all SQL operations, you can pass a log function to getRxStorageSQLite() like this: const storage = getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsCapacitor(sqlite, Capacitor), // pass log function log: console.log.bind(console) }); ","version":"Next","tagName":"h2"},{"title":"Related","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#related","content":" React Native Databases ","version":"Next","tagName":"h2"},{"title":"Worker RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-worker.html","content":"","keywords":"","version":"Next"},{"title":"On the worker process","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#on-the-worker-process","content":" // worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; exposeWorkerRxStorage({ /** * You can wrap any implementation of the RxStorage interface * into a worker. * Here we use the IndexedDB RxStorage. */ storage: getRxStorageIndexedDB() }); ","version":"Next","tagName":"h2"},{"title":"On the main process","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#on-the-main-process","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageWorker( { /** * Contains any value that can be used as parameter * to the Worker constructor of thread.js * Most likely you want to put the path to the worker.js file in here. 
* * @link https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker */ workerInput: 'path/to/worker.js', /** * (Optional) options * for the worker. */ workerOptions: { type: 'module', credentials: 'omit' } } ) }); ","version":"Next","tagName":"h2"},{"title":"Pre-build workers","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#pre-build-workers","content":" The worker.js must be a self containing JavaScript file that contains all dependencies in a bundle. To make it easier for you, RxDB ships with pre-bundles worker files that are ready to use. You can find them in the folder node_modules/rxdb-premium/dist/workers after you have installed the RxDB Premium 👑 Plugin. From there you can copy them to a location where it can be served from the webserver and then use their path to create the RxDatabase. Any valid worker.js JavaScript file can be used both, for normal Workers and SharedWorkers. import { createRxDatabase } from 'rxdb'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageWorker( { /** * Path to where the copied file from node_modules/rxdb/dist/workers * is reachable from the webserver. */ workerInput: '/indexeddb.worker.js' } ) }); ","version":"Next","tagName":"h2"},{"title":"Building a custom worker","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#building-a-custom-worker","content":" The easiest way to bundle a custom worker.js file is by using webpack. Here is the webpack-config that is also used for the prebuild workers: // webpack.config.js const path = require('path'); const TerserPlugin = require('terser-webpack-plugin'); const projectRootPath = path.resolve( __dirname, '../../' // path from webpack-config to the root folder of the repo ); const babelConfig = require(path.join(projectRootPath, 'babel.config')); const baseDir = './dist/workers/'; // output path module.exports = { target: 'webworker', entry: { 'my-custom-worker': baseDir + 'my-custom-worker.js', }, output: { filename: '[name].js', clean: true, path: path.resolve( projectRootPath, 'dist/workers' ), }, mode: 'production', module: { rules: [ { test: /\\.tsx?$/, exclude: /(node_modules)/, use: { loader: 'babel-loader', options: babelConfig } } ], }, resolve: { extensions: ['.tsx', '.ts', '.js', '.mjs', '.mts'] }, optimization: { moduleIds: 'deterministic', minimize: true, minimizer: [new TerserPlugin({ terserOptions: { format: { comments: false, }, }, extractComments: false, })], } }; ","version":"Next","tagName":"h2"},{"title":"One worker per database","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#one-worker-per-database","content":" Each call to getRxStorageWorker() will create a different worker instance so that when you have more than one RxDatabase, each database will have its own JavaScript worker process. To reuse the worker instance in more than one RxDatabase, you can store the output of getRxStorageWorker() into a variable an use that one. Reusing the worker can decrease the initial page load, but you might get slower database operations. // Call getRxStorageWorker() exactly once const workerStorage = getRxStorageWorker({ workerInput: 'path/to/worker.js' }); // use the same storage for both databases. 
const databaseOne = await createRxDatabase({ name: 'database-one', storage: workerStorage }); const databaseTwo = await createRxDatabase({ name: 'database-two', storage: workerStorage }); ","version":"Next","tagName":"h2"},{"title":"Passing in a Worker instance","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#passing-in-a-worker-instance","content":" Instead of setting an url as workerInput, you can also specify a function that returns a new Worker instance when called. getRxStorageWorker({ workerInput: () => new Worker('path/to/worker.js') }) This can be helpful for environments where the worker is build dynamically by the bundler. For example in angular you would create a my-custom.worker.ts file that contains a custom build worker and then import it. const storage = getRxStorageWorker({ workerInput: () => new Worker(new URL('./my-custom.worker', import.meta.url)), }); //> my-custom.worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; exposeWorkerRxStorage({ storage: getRxStorageIndexedDB() }); ","version":"Next","tagName":"h2"},{"title":"Schema validation","type":0,"sectionRef":"#","url":"/schema-validation.html","content":"","keywords":"","version":"Next"},{"title":"validate-ajv","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#validate-ajv","content":" A validation-module that does the schema-validation. This one is using ajv as validator which is a bit faster. Better compliant to the jsonschema-standard but also has a bigger build-size. import { wrappedValidateAjvStorage } from 'rxdb/plugins/validate-ajv'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the validation around the main RxStorage const storage = wrappedValidateAjvStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: randomCouchString(10), storage }); ","version":"Next","tagName":"h3"},{"title":"validate-z-schema","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#validate-z-schema","content":" Both is-my-json-valid and validate-ajv use eval() to perform validation which might not be wanted when 'unsafe-eval' is not allowed in Content Security Policies. This one is using z-schema as validator which doesn't use eval. import { wrappedValidateZSchemaStorage } from 'rxdb/plugins/validate-z-schema'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the validation around the main RxStorage const storage = wrappedValidateZSchemaStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: randomCouchString(10), storage }); ","version":"Next","tagName":"h3"},{"title":"validate-is-my-json-valid","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#validate-is-my-json-valid","content":" WARNING: The is-my-json-valid validation is no longer supported until this bug is fixed. The validate-is-my-json-valid plugin uses is-my-json-valid for schema validation. 
import { wrappedValidateIsMyJsonValidStorage } from 'rxdb/plugins/validate-is-my-json-valid'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the validation around the main RxStorage const storage = wrappedValidateIsMyJsonValidStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: randomCouchString(10), storage }); ","version":"Next","tagName":"h3"},{"title":"Custom Formats","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#custom-formats","content":" The schema validators provide methods to add custom formats like a email format. You have to add these formats before you create your database. ","version":"Next","tagName":"h2"},{"title":"Ajv Custom Format","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#ajv-custom-format","content":" import { getAjv } from 'rxdb/plugins/validate-ajv'; const ajv = getAjv(); ajv.addFormat('email', { type: 'string', validate: v => v.includes('@') // ensure email fields contain the @ symbol }); ","version":"Next","tagName":"h3"},{"title":"Z-Schema Custom Format","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#z-schema-custom-format","content":" import { ZSchemaClass } from 'rxdb/plugins/validate-z-schema'; ZSchemaClass.registerFormat('email', function (v: string) { return v.includes('@'); // ensure email fields contain the @ symbol }); ","version":"Next","tagName":"h3"},{"title":"Performance comparison of the validators","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#performance-comparison-of-the-validators","content":" The RxDB team ran performance benchmarks using two storage options on an Ubuntu 24.04 machine with Chrome version 131.0.6778.85. The testing machine has 32 core 13th Gen Intel(R) Core(TM) i9-13900HX CPU. Dexie Storage (based on IndexedDB in the browser): Dexie Storage\tTime to First insert\tInsert 3000 documentsno validator\t68 ms\t213 ms ajv\t67 ms\t216 ms z-schema\t71 ms\t230 ms Memory Storage: stores everything in memory for extremely fast reads and writes, with no persistence by default. Often used with the RxDB memory-mapped plugin that processes data in memory an later persists to disc in background: Memory Storage\tTime to First insert\tInsert 3000 documentsno validator\t1.15 ms\t0.8 ms ajv\t3.05 ms\t2.7 ms z-schema\t0.9 ms\t18 ms Including a validator library also increases your JavaScript bundle size. 
Here's how it breaks down (minified + gzip): Build Size (minified+gzip)\tBuild Size (dexie)\tBuild Size (memory)no validator\t73103 B\t39976 B ajv\t106135 B\t72773 B z-schema\t125186 B\t91882 B ","version":"Next","tagName":"h2"},{"title":"Third Party Plugins","type":0,"sectionRef":"#","url":"/third-party-plugins.html","content":"Third Party Plugins rxdb-hooks A set of hooks to integrate RxDB into react applications.rxdb-flexsearch The full text search for RxDB using FlexSearch.rxdb-orion Enables replication with Laravel Orion.rxdb-supabase Enables replication with Supabase.rxdb-utils Additional features for RxDB like models, timestamps, default values, view and more.loki-async-reference-adapter Simple async adapter for LokiJS, suitable to use RxDB's Lokijs RxStorage with React Native.","keywords":"","version":"Next"},{"title":"Transactions, Conflicts and Revisions","type":0,"sectionRef":"#","url":"/transactions-conflicts-revisions.html","content":"","keywords":"","version":"Next"},{"title":"Why RxDB does not have transactions","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#why-rxdb-does-not-have-transactions","content":" When talking about transactions, we mean ACID transactions that guarantee the properties of atomicity, consistency, isolation and durability. With an ACID transaction you can mutate data dependent on the current state of the database. It is ensured that no other database operations happen in between your transaction and after the transaction has finished, it is guaranteed that the new data is actually written to the disc. To implement ACID transactions on a single server, the database has to keep track on who is running transactions and then schedule these transactions so that they can run in isolation. As soon as you have to split your database on multiple servers, transaction handling becomes way more difficult. The servers have to communicate with each other to find a consensus about which transaction can run and which has to wait. Network connections might break, or one server might complete its part of the transaction and then be required to roll back its changes because of an error on another server. But with RxDB you have multiple clients that can go randomly online or offline. The users can have different devices and the clock of these devices can go off by any time. To support ACID transactions here, RxDB would have to make the whole world stand still for all clients, while one client is doing a write operation. And even that can only work when all clients are online. Implementing that might be possible, but at the cost of an unpredictable amount of performance loss and not being able to support offline-first. A single write operation to a document is the only atomic thing you can do in RxDB. The benefits of not having to support transactions: Clients can read and write data without blocking each other.Clients can write data while being offline and then replicate with a server when they are online again, called offline-first.Creating a compatible backend for the replication is easy so that RxDB can replicate with any existing infrastructure.Optimizations like Sharding can be used. ","version":"Next","tagName":"h2"},{"title":"Revisions","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#revisions","content":" Working without transactions leads to having undefined state when doing multiple database operations at the same time. 
Most client side databases rely on a last-write-wins strategy on write operations. This might be a viable solution for some cases, but often this leads to strange problems that are hard to debug. Instead, to ensure that the behavior of RxDB is always predictable, RxDB relies on revisions for version control. Revisions work similar to Lamport Clocks. Each document is stored together with its revision string, that looks like 1-9dcca3b8e1a and consists of: The revision height, a number that starts with 1 and is increased with each write to that document.The database instance token. An operation to the RxDB data layer does not only contain the new document data, but also the previous document data with its revision string. If the previous revision matches the revision that is currently stored in the database, the write operation can succeed. If the previous revision is different than the revision that is currently stored in the database, the operation will throw a 409 CONFLICT error. ","version":"Next","tagName":"h2"},{"title":"Conflicts","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#conflicts","content":" There are two types of conflicts in RxDB, the local conflict and the replication conflict. ","version":"Next","tagName":"h2"},{"title":"Local conflicts","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#local-conflicts","content":" A local conflict can happen when a write operation assumes a different previous document state, then what is currently stored in the database. This can happen when multiple parts of your application do simultaneous writes to the same document. This can happen on a single browser tab, or when multiple tabs write at once or when a write appears while the document gets replicated from a remote server replication. When a local conflict appears, RxDB will throw a 409 CONFLICT error. The calling code must then handle the error properly, depending on the application logic. Instead of handling local conflicts, in most cases it is easier to ensure that they cannot happen, by using incremental database operations like incrementalModify(), incrementalPatch() or incrementalUpsert(). These write operations have a build in way to handle conflicts by re-applying the mutation functions to the conflicting document state. ","version":"Next","tagName":"h3"},{"title":"Replication conflicts","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#replication-conflicts","content":" A replication conflict appears when multiple clients write to the same documents at once and these documents are then replicated to the backend server. When you replicate with the Graphql replication and the replication primitives, RxDB assumes that conflicts are detected and resolved at the client side. When a document is send to the backend and the backend detected a conflict (by comparing revisions or other properties), the backend will respond with the actual document state so that the client can compare this with the local document state and create a new, resolved document state that is then pushed to the server again. You can read more about the replication protocol here. 
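As a loose illustration of the incremental write operations mentioned above for local conflicts, here is a minimal sketch; the collection name and document id are assumptions, not part of the original example:
const doc = await myDatabase.heroes.findOne('alice').exec();
// A plain doc.patch() could throw a 409 CONFLICT if another write happened in between.
// incrementalModify() re-applies the mutation function to the current document state
// when a conflict is detected, so the write eventually succeeds without manual conflict handling.
await doc.incrementalModify(docData => {
    docData.age = docData.age + 1;
    return docData;
});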
","version":"Next","tagName":"h2"},{"title":"Custom conflict handler","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#custom-conflict-handler","content":" A conflict handler is an object with two JavaScript functions: Detect if two document states are equalSolve existing conflicts Because the conflict handler also is used for conflict detection, it will run many times on pull-, push- and write operations of RxDB. Most of the time it will detect that there is no conflict and then return. Lets have a look at the default conflict handler of RxDB to learn how to create a custom one: import { deepEqual } from 'rxdb/plugins/utils'; export const defaultConflictHandler: RxConflictHandler<any> = { isEqual(a, b) { /** * isEqual() is used to detect conflicts or to detect if a * document has to be pushed to the remote. * If the documents are deep equal, * we have no conflict. * Because deepEqual is CPU expensive, on your custom conflict handler you might only * check some properties, like the updatedAt time or revisions * for better performance. */ return deepEqual(a, b); }, resolve(i) { /** * The default conflict handler will always * drop the fork state and use the master state instead. * * In your custom conflict handler you likely want to merge properties * of the realMasterState and the newDocumentState instead. */ return i.realMasterState; } }; To overwrite the default conflict handler, you have to specify a custom conflictHandler property when creating a collection with addCollections(). const myCollections = await myDatabase.addCollections({ // key = collectionName humans: { schema: mySchema, conflictHandler: myCustomConflictHandler } }); ","version":"Next","tagName":"h2"},{"title":"Why IndexedDB is slow and what to use instead","type":0,"sectionRef":"#","url":"/slow-indexeddb.html","content":"","keywords":"","version":"Next"},{"title":"Batched Cursor","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#batched-cursor","content":" With IndexedDB 2.0, new methods were introduced which can be utilized to improve performance. With the getAll() method, a faster alternative to the old openCursor() can be created which improves performance when reading data from the IndexedDB store. Lets say we want to query all user documents that have an age greater than 25 out of the store. To implement a fast batched cursor that only needs calls to getAll() and not to getAllKeys(), we first need to create an age index that contains the primary id as last field. myIndexedDBObjectStore.createIndex( 'age-index', [ 'age', 'id' ] ); This is required because the age field is not unique, and we need a way to checkpoint the last returned batch so we can continue from there in the next call to getAll(). const maxAge = 25; let result = []; const tx: IDBTransaction = db.transaction([storeName], 'readonly', TRANSACTION_SETTINGS); const store = tx.objectStore(storeName); const index = store.index('age-index'); let lastDoc; let done = false; /** * Run the batched cursor until all results are retrieved * or the end of the index is reached. */ while (done === false) { await new Promise((res, rej) => { const range = IDBKeyRange.bound( /** * If we have a previous document as checkpoint, * we have to continue from it's age and id values. */ [ lastDoc ? lastDoc.age : -Infinity, lastDoc ? 
lastDoc.id : -Infinity, ], [ maxAge + 0.00000001, String.fromCharCode(65535) ], true, false ); const openCursorRequest = index.getAll(range, batchSize); openCursorRequest.onerror = err => rej(err); openCursorRequest.onsuccess = e => { const subResult: TestDocument[] = e.target.result; lastDoc = lastOfArray(subResult); if (subResult.length === 0) { done = true; } else { result = result.concat(subResult); } res(); }; }); } console.dir(result); As the performance test results show, using a batched cursor can give a huge improvement. Interestingly choosing a high batch size is important. When you known that all results of a given IDBKeyRange are needed, you should not set a batch size at all and just directly query all documents via getAll(). RxDB uses batched cursors in the IndexedDB RxStorage. ","version":"Next","tagName":"h2"},{"title":"IndexedDB Sharding","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#indexeddb-sharding","content":" Sharding is a technique, normally used in server side databases, where the database is partitioned horizontally. Instead of storing all documents at one table/collection, the documents are split into so called shards and each shard is stored on one table/collection. This is done in server side architectures to spread the load between multiple physical servers which increases scalability. When you use IndexedDB in a browser, there is of course no way to split the load between the client and other servers. But you can still benefit from sharding. Partitioning the documents horizontally into multiple IndexedDB stores, has shown to have a big performance improvement in write- and read operations while only increasing initial pageload slightly. As shown in the performance test results, sharding should always be done by IDBObjectStore and not by database. Running a batched cursor over the whole dataset with 10 store shards in parallel is about 28% faster then running it over a single store. Initialization time increases minimal from 9 to 17 milliseconds. Getting a quarter of the dataset by batched iterating over an index, is even 43% faster with sharding then when a single store is queried. As downside, getting 10k documents by their id is slower when it has to run over the shards. Also it can be much effort to recombined the results from the different shards into the required query result. When a query without a limit is done, the sharding method might cause a data load huge overhead. Sharding can be used with RxDB with the Sharding Plugin. ","version":"Next","tagName":"h2"},{"title":"Custom Indexes","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#custom-indexes","content":" Indexes improve the query performance of IndexedDB significant. Instead of fetching all data from the storage when you search for a subset of it, you can iterate over the index and stop iterating when all relevant data has been found. For example to query for all user documents that have an age greater than 25, you would create an age+id index. To be able to run a batched cursor over the index, we always need our primary key (id) as the last index field. Instead of doing this, you can use a custom index which can improve the performance. The custom index runs over a helper field ageIdCustomIndex which is added to each document on write. Our index now only contains a single string field instead of two (age-number and id-string). // On document insert add the ageIdCustomIndex field. 
const idMaxLength = 20; // must be known to craft a custom index docData.ageIdCustomIndex = docData.age + docData.id.padStart(idMaxLength, ' '); store.put(docData); // ... // normal index myIndexedDBObjectStore.createIndex( 'age-index', [ 'age', 'id' ] ); // custom index myIndexedDBObjectStore.createIndex( 'age-index-custom', [ 'ageIdCustomIndex' ] ); To iterate over the index, you also use a custom crafted keyrange, depending on the last batched cursor checkpoint. Therefore the maxLength of id must be known. // keyrange for normal index const range = IDBKeyRange.bound( [25, ''], [Infinity, Infinity], true, false ); // keyrange for custom index const range = IDBKeyRange.bound( // combine both values to a single string 25 + ''.padStart(idMaxLength, ' '), Infinity, true, false ); As shown, using a custom index can further improve the performance of running a batched cursor by about 10%. Another big benefit of using custom indexes, is that you can also encode boolean values in them, which cannot be done with normal IndexedDB indexes. RxDB uses custom indexes in the IndexedDB RxStorage. ","version":"Next","tagName":"h2"},{"title":"Relaxed durability","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#relaxed-durability","content":" Chromium based browsers allow to set durability to relaxed when creating an IndexedDB transaction. Which runs the transaction in a less secure durability mode, which can improve the performance. The user agent may consider that the transaction has successfully committed as soon as all outstanding changes have been written to the operating system, without subsequent verification. As shown here, using the relaxed durability mode can improve performance slightly. The best performance improvement could be measured when many small transactions have to be run. Less, bigger transaction do not benefit that much. ","version":"Next","tagName":"h2"},{"title":"Explicit transaction commits","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#explicit-transaction-commits","content":" By explicitly committing a transaction, another slight performance improvement can be achieved. Instead of waiting for the browser to commit an open transaction, we call the commit() method to explicitly close it. // .commit() is not available on all browsers, so first check if it exists. if (transaction.commit) { transaction.commit() } The improvement of this technique is minimal, but observable as these tests show. ","version":"Next","tagName":"h2"},{"title":"In-Memory on top of IndexedDB","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#in-memory-on-top-of-indexeddb","content":" To prevent transaction handling and to fix the performance problems, we need to stop using IndexedDB as a database. Instead all data is loaded into the memory on the initial page load. Here all reads and writes happen in memory which is about 100x faster. Only some time after a write occurred, the memory state is persisted into IndexedDB with a single write transaction. In this scenario IndexedDB is used as a filesystem, not as a database. 
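A loose sketch of that in-memory-with-periodic-persistence pattern could look like this; it uses plain IndexedDB and all names are illustrative, not a specific library API:
// All reads and writes go to an in-memory Map, and the whole state is
// periodically flushed to IndexedDB in a single write transaction.
const state = new Map();
let persistScheduled = false;
function openBackupDb() {
    return new Promise((resolve, reject) => {
        const req = indexedDB.open('memory-backup', 1);
        req.onupgradeneeded = () => req.result.createObjectStore('dump');
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
    });
}
async function persist() {
    persistScheduled = false;
    const db = await openBackupDb();
    const tx = db.transaction('dump', 'readwrite');
    // store the whole in-memory state under a single key
    tx.objectStore('dump').put(Array.from(state.entries()), 'state');
}
function writeDocument(id, doc) {
    state.set(id, doc); // the in-memory write returns instantly
    if (!persistScheduled) {
        persistScheduled = true;
        setTimeout(persist, 500); // debounce the persistence to IndexedDB
    }
}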
There are some libraries that already do that: LokiJS with the IndexedDB AdapterAbsurd-SQLSQL.js with the empscripten Filesystem APIDuckDB Wasm ","version":"Next","tagName":"h2"},{"title":"In-Memory: Persistence","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#in-memory-persistence","content":" One downside of not directly using IndexedDB, is that your data is not persistent all the time. And when the JavaScript process exists without having persisted to IndexedDB, data can be lost. To prevent this from happening, we have to ensure that the in-memory state is written down to the disc. One point is make persisting as fast as possible. LokiJS for example has the incremental-indexeddb-adapter which only saves new writes to the disc instead of persisting the whole state. Another point is to run the persisting at the correct point in time. For example the RxDB LokiJS storage persists in the following situations: When the database is idle and no write or query is running. In that time we can persist the state if any new writes appeared before.When the window fires the beforeunload event we can assume that the JavaScript process is exited any moment and we have to persist the state. After beforeunload there are several seconds time which are sufficient to store all new changes. This has shown to work quite reliable. The only missing event that can happen is when the browser exists unexpectedly like when it crashes or when the power of the computer is shut of. ","version":"Next","tagName":"h3"},{"title":"In-Memory: Multi Tab Support","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#in-memory-multi-tab-support","content":" One big difference between a web application and a 'normal' app, is that your users can use the app in multiple browser tabs at the same time. But when you have all database state in memory and only periodically write it to disc, multiple browser tabs could overwrite each other and you would loose data. This might not be a problem when you rely on a client-server replication, because the lost data might already be replicated with the backend and therefore with the other tabs. But this would not work when the client is offline. The ideal way to solve that problem, is to use a SharedWorker. A SharedWorker is like a WebWorker that runs its own JavaScript process only that the SharedWorker is shared between multiple contexts. You could create the database in the SharedWorker and then all browser tabs could request the Worker for data instead of having their own database. But unfortunately the SharedWorker API does not work in all browsers. Safari dropped its support and InternetExplorer or Android Chrome, never adopted it. Also it cannot be polyfilled. UPDATE:Apple added SharedWorkers back in Safari 142 Instead, we could use the BroadcastChannel API to communicate between tabs and then apply a leader election between them. The leader election ensures that, no matter how many tabs are open, always one tab is the Leader. The disadvantage is that the leader election process takes some time on the initial page load (about 150 milliseconds). Also the leader election can break when a JavaScript process is fully blocked for a longer time. When this happens, a good way is to just reload the browser tab to restart the election process. 
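For example, with the broadcast-channel npm package, a rough sketch of this setup could look like the following; the channel name and the leader callback are assumptions:
import { BroadcastChannel, createLeaderElection } from 'broadcast-channel';
const channel = new BroadcastChannel('my-app');
const elector = createLeaderElection(channel);
// resolves as soon as this tab has become the leader
elector.awaitLeadership().then(() => {
    // Only the leader tab creates the in-memory database and persists it to IndexedDB;
    // the other tabs request their data from the leader via the BroadcastChannel.
    console.log('this tab is now the leader');
});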
","version":"Next","tagName":"h3"},{"title":"Further read","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#further-read","content":" Offline First Database ComparisonSpeeding up IndexedDB reads and writesSQLITE ON THE WEB: ABSURD-SQLSQLite in a PWA with FileSystemAccessAPIResponse to this article by Oren Eini ","version":"Next","tagName":"h2"},{"title":"Why UI applications need NoSQL","type":0,"sectionRef":"#","url":"/why-nosql.html","content":"","keywords":"","version":"Next"},{"title":"Transactions do not work with humans involved","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#transactions-do-not-work-with-humans-involved","content":" On the server side, transactions are used to run steps of logic inside of a self contained unit of work. The database system ensures that multiple transactions do not run in parallel or interfere with each other. This works well because on the server side you can predict how longer everything takes. It can be ensured that one transactions does not block everything else for too long which would make the system not responding anymore to other requests. When you build a UI based application that is used by a real human, you can no longer predict how long anything takes. The user clicks the edit button and expects to not have anyone else change the document while the user is in edit mode. Using a transaction to ensure nothing is changed in between, is not an option because the transaction could be open for a long time and other background tasks, like replication, would no longer work. So whenever a human is involved, this kind of logic has to be implemented using other strategies. Most NoSQL databases like RxDB or CouchDB use a system based on revision and conflicts to handle these. ","version":"Next","tagName":"h2"},{"title":"Transactions do not work with offline-first","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#transactions-do-not-work-with-offline-first","content":" When you want to build an offline-first application, it is assumed that the user can also read and write data, even when the device has lost the connection to the backend. You could use database transactions on writes to the client's database state, but enforcing a transaction boundary across other instances like clients or servers, is not possible when there is no connection. On the client you could run an update query where all color: red rows are changed to color: blue, but this would not guarantee that there will still be other red documents when the client goes online again and restarts the replication with the server. UPDATE docs SET docs.color = 'red' WHERE docs.color = 'blue'; ","version":"Next","tagName":"h2"},{"title":"Relational queries in NoSQL","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#relational-queries-in-nosql","content":" What most people want from a relational database, is to run queries over multiple tables. Some people think that they cannot do that with NoSQL, so let me explain. Let's say you have two tables with customers and cities where each city has an id and each customer has a city_id. You want to get every customer that resides in Tokyo. 
With SQL, you would use a query like this: SELECT * FROM city LEFT JOIN customer ON customer.city_id = city.id WHERE city.name = 'Tokyo'; With NoSQL you can just do the same, but you have to write it manually: const cityDocument = await db.cities.findOne().where('name').equals('Tokyo').exec(); const customerDocuments = await db.customers.find().where('city_id').equals(cityDocument.id).exec(); So what are the differences? The SQL version would run faster on a remote database server because it would aggregate all data there and return only the customers as the result set. But when you have a local database, there is not really a difference. Querying the two tables by hand would have about the same performance as a JavaScript implementation of SQL that is running locally. The main benefit of using SQL is that the SQL query runs inside of a single transaction. When a change to one of our two tables happens while our query runs, the SQL database will ensure that the write does not affect the result of the query. This could happen with NoSQL: while you retrieve the city document, the customer table gets changed and your result is no longer correct for the dataset that was there when you started querying. As a workaround, you could observe the database for changes and, if a change happened in between, re-run everything. ","version":"Next","tagName":"h2"},{"title":"Reliable replication","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#reliable-replication","content":" In an offline-first app, your data is replicated from your backend servers to your users and you want it to be reliable. The replication is reliable when, no matter what happens, every online client is able to run a replication and end up with the exact same database state as any other client. Implementing a reliable replication protocol is hard because of the circumstances of your app: Your users have unknown devices. They have an unknown internet speed. They can go offline or online at any time. Clients can be offline for several days with un-synced changes. You can have many users at the same time. The users can do many database writes at the same time to the same entities. Now let's say you have a SQL database and one of your users, called Alice, runs a query that mutates some rows, based on a condition. # mark all items out of stock as inStock=FALSE UPDATE Table_A SET Table_A.inStock = FALSE FROM Table_A WHERE Table_A.amountInStock = 0 At first, the query runs on the local database of Alice and everything is fine. But at the same time Bob, the other client, updates a row and sets amountInStock from 0 to 1. Now Bob's client replicates the changes from Alice and runs them. Bob will end up with a different database state than Alice because on one of the rows, the WHERE condition was not met. This is not what we want, so our replication protocol should be able to fix it. For that it has to reduce all mutations into a deterministic state. Let me loosely describe how "many" SQL replications work: Instead of just running all replicated queries, we remember a list of all past queries. When a new query comes in that happened before our last query, we roll back the previous queries, run the new query, and then re-execute our own queries on top of that. For that to work, all queries need a timestamp so we can order them correctly. But you cannot rely on the clock that is running at the client. Client-side clocks drift, they can run at a different speed, or a malicious client might even modify the clock on purpose. 
So instead of a normal timestamp, we have to use a Hybrid Logical Clock that takes a client generated id and the number of the clients query into account. Our timestamp will then look like 2021-10-04T15:29.40.273Z-0000-eede1195b7d94dd5. These timestamps can be brought into a deterministic order and each client can run the replicated queries in the same order. Watch this video to learn how to implement that. While this sounds easy and realizable, we have some problems: This kind of replication works great when you replicate between multiple SQL servers. It does not work great when you replicate between a single server and many clients. As mentioned above, clients can be offline for a long time which could require us to do many and heavy rollbacks on each client when someone comes back after a long time and replicates the change.We have many clients where many changes can appear and our database would have to roll back many times.During the rollback, the database cannot be used for read queries.It is required that each client downloads and keeps the whole query history. With NoSQL, replication works different. A new client downloads all current documents and each time a document changes, that document is downloaded again. Instead of replicating the query that leads to a data change, we just replicate the changed data itself. Of course, we could do the same with SQL and just replicate the affected rows of a query, like WatermelonDB does it. This was a clever way to go for WatermelonDB, because it was initially made for React Native and did want to use the fast SQLite instead of the slow AsyncStorage. But in a more general view, it defeats the whole purpose of having a replicating relational database because you have transactions locally, but these transactions become meaningless as soon as the data goes through the replication layer. ","version":"Next","tagName":"h2"},{"title":"Server side validation","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#server-side-validation","content":" Whenever there is client-side input, it must be validated on the server. On a NoSQL database, validating a changed document is trivial. The client sends the changed document to the server, and the server can then check if the user was allowed to modify that one document and if the applied changes are ok. Safely validating a SQL query is up to impossible. You first need a way to parse the query with all this complex SQL syntax and keywords.You have to ensure that the query does not DOS your system.Then you check which rows would be affected when running the query and if the user was allowed to change themThen you check if the mutation to that rows are valid. For simple queries like an insert/update/delete to a single row, this might be doable. But a query with 4 LEFT JOIN will be hard. ","version":"Next","tagName":"h2"},{"title":"Event optimization","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#event-optimization","content":" With NoSQL databases, each write event always affects exactly one document. This makes it easy to optimize the processing of events at the client. For example instead of handling multiple updates to the same document, when the user comes online again, you could skip everything but the last event. Similar to that you can optimize observable query results. When you query the customers table you get a query result of 10 customers. Now a new customer is added to the table and you want to know how the new query results look like. 
You could analyze the event and now you know that you only have to add the new customer to the previous results set, instead of running the whole query again. These types of optimizations can be run with all NoSQL queries and even work with limit and skip operators. In RxDB this all happens in the background with the EventReduce algorithm that calculates new query results on incoming changes. These optimizations do not really work with relational data. A change to one table could affect a query to any other tables. and you could not just calculate the new results based on the event. You would always have to re-run the full query to get the updated results. ","version":"Next","tagName":"h2"},{"title":"Migration without relations","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#migration-without-relations","content":" Sooner or later you change the layout of your data. You update the schema and you also have to migrate the stored rows/documents. In NoSQL this is often not a big deal because all of your documents are modeled as self containing piece of data. There is an old version of the document and you have a function that transforms it into the new version. With relational data, nothing is self-contained. The relevant data for the migration of a single row could be inside any other table. So when changing the schema, it will be important which table to migrate first and how to orchestrate the migration or relations. On client side applications, this is even harder because the client can close the application at any time and the migration must be able to continue. ","version":"Next","tagName":"h2"},{"title":"Everything can be downgraded to NoSQL","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#everything-can-be-downgraded-to-nosql","content":" To use an offline first database in the frontend, you have to make it compatible with your backend APIs. Making software things compatible often means you have to find the lowest common denominator. When you have SQLite in the frontend and want to replicate it with the backend, the backend also has to use SQLite. You cannot even use PostgreSQL because it has a different SQL dialect and some queries might fail. But you do not want to let the frontend dictate which technologies to use in the backend just to make replication work. With NoSQL, you just have documents and writes to these documents. You can build a document based layer on top of everything by removing functionality. It can be built on top of SQL, but also on top of a graph database or even on top of a key-value store like levelDB or FoundationDB. With that document layer you can build a replication API that serves documents sorted by the last update time and there you have a realtime replication. ","version":"Next","tagName":"h2"},{"title":"Caching query results","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#caching-query-results","content":" Memory is limited and this is especially true for client side applications where you never know how much free RAM the device really has. You want to have a fast realtime UI, so your database must be able to cache query results. When you run a SQL query like SELECT .. the result of it can be anything. An array, a number, a string, a single row, it depends on how the query goes on. So the caching strategy can only be to keep the result in memory, once for each query. This scales very bad because the more queries you run, the more results you have to store in memory. 
When you make a query to a NoSQL collection, you always know what the result will look like. It is a list of documents, based on the collection's schema (if you have one). The result set is stored in memory, but because you get similar documents for different queries to the same collection, we can de-duplicate the documents. So when multiple queries return the same document, we only have it in the cache once and each query's cache points to the same memory object. So no matter how many queries you make, your cache maximum is the collection size. ","version":"Next","tagName":"h2"},{"title":"TypeScript support","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#typescript-support","content":" Modern web apps are built with TypeScript and you want the transpiler to know the types of your query result so it can give you build-time errors when something does not match. This is quite easy on document-based systems. The typings for each document of a collection can be generated from the schema, and all queries to that collection will always return the given document type. With SQL you have to write the typings for each query by hand because it can contain all these aggregate functions that affect the type of the query's result. ","version":"Next","tagName":"h2"},{"title":"What you lose with NoSQL","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#what-you-lose-with-nosql","content":" You can not run relational queries across tables inside a single transaction. You can not mutate documents based on a WHERE clause, in a single transaction. You need to resolve replication conflicts on a per-document basis. ","version":"Next","tagName":"h2"},{"title":"But there is database XY","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#but-there-is-database-xy","content":" Yes, there are SQL databases out there that run on the client side or have replication, but not both. WebSQL / sql.js: In the past there was WebSQL in the browser. It was a direct mapping to SQLite because all browsers used the SQLite implementation. You could store relational data in it, but there was no concept of replication at any point in time. sql.js is SQLite compiled to JavaScript. It has no replication and it has (for now) no persistent storage, everything is stored in memory. WatermelonDB is a SQL database that runs in the client. WatermelonDB uses a document-based replication that is not able to replicate relational queries. Cockroach / Spanner / PostgreSQL etc. are SQL databases with replication. But they run on servers, not on clients, so they can make different trade-offs. Further read Cockroach Labs: Living Without Atomic Clocks Transactions, Conflicts and Revisions in RxDB Why MongoDB, Cassandra, HBase, DynamoDB, and Riak will only let you perform transactions on a single data item Make a PR to this file if you have more interesting links to that topic ","version":"Next","tagName":"h2"},{"title":"Using RxDB with TypeScript","type":0,"sectionRef":"#","url":"/tutorials/typescript.html","content":"","keywords":"","version":"Next"},{"title":"Using the types","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#using-the-types","content":" Now that we have declared all our types, we can use them. 
/** * create database and collections */ const myDatabase: MyDatabase = await createRxDatabase<MyDatabaseCollections>({ name: 'mydb', storage: getRxStorageDexie() }); const heroSchema: RxJsonSchema<HeroDocType> = { title: 'human schema', description: 'describes a human being', version: 0, keyCompression: true, primaryKey: 'passportId', type: 'object', properties: { passportId: { type: 'string' }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer' } }, required: ['passportId', 'firstName', 'lastName'] }; const heroDocMethods: HeroDocMethods = { scream: function(this: HeroDocument, what: string) { return this.firstName + ' screams: ' + what.toUpperCase(); } }; const heroCollectionMethods: HeroCollectionMethods = { countAllDocuments: async function(this: HeroCollection) { const allDocs = await this.find().exec(); return allDocs.length; } }; await myDatabase.addCollections({ heroes: { schema: heroSchema, methods: heroDocMethods, statics: heroCollectionMethods } }); // add a postInsert-hook myDatabase.heroes.postInsert( function myPostInsertHook( this: HeroCollection, // own collection is bound to the scope docData: HeroDocType, // documents data doc: HeroDocument // RxDocument ) { console.log('insert to ' + this.name + '-collection: ' + doc.firstName); }, false // not async ); /** * use the database */ // insert a document const hero: HeroDocument = await myDatabase.heroes.insert({ passportId: 'myId', firstName: 'piotr', lastName: 'potter', age: 5 }); // access a property console.log(hero.firstName); // use a orm method hero.scream('AAH!'); // use a static orm method from the collection const amount: number = await myDatabase.heroes.countAllDocuments(); console.log(amount); /** * clean up */ myDatabase.close(); ","version":"Next","tagName":"h2"},{"title":"Known Problems","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#known-problems","content":" RxDB uses the WeakRef API. If your typescript bundler throws the error TS2304: Cannot find name 'WeakRef', you have to add ES2021.WeakRef to compilerOptions.lib in your tsconfig.json. { "compilerOptions": { "lib": ["ES2020", "ES2021.WeakRef"] } } ","version":"Next","tagName":"h2"}],"options":{"excludeRoutes":["blog","releases"],"id":"default"}}