search-doc-1716231513282.json
{"searchDocs":[{"title":"Alternatives for realtime offline-first JavaScript applications","type":0,"sectionRef":"#","url":"/alternatives.html","content":"","keywords":"","version":"Next"},{"title":"RxDB: The benefits of Browser Databases","type":0,"sectionRef":"#","url":"/articles/browser-database.html","content":"","keywords":"","version":"Next"},{"title":"Why you might want to store data in the browser","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#why-you-might-want-to-store-data-in-the-browser","content":" There are compelling reasons to consider storing data in the browser: ","version":"Next","tagName":"h2"},{"title":"Use the database for caching","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#use-the-database-for-caching","content":" By leveraging a browser database, you can harness the power of caching. Storing frequently accessed data locally enables you to reduce server requests and greatly improve application performance. Caching provides a faster and smoother user experience, enhancing overall user satisfaction. ","version":"Next","tagName":"h3"},{"title":"Data is offline accessible","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#data-is-offline-accessible","content":" Storing data in the browser allows for offline accessibility. Even without an active internet connection, users can access and interact with the application, ensuring uninterrupted productivity and user engagement. ","version":"Next","tagName":"h3"},{"title":"Easier implementation of replicating database state","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#easier-implementation-of-replicating-database-state","content":" Browser databases simplify the replication of database state across multiple devices or instances of the application. 
Compared to complex REST routes, replicating data becomes easier and more streamlined. This capability enables the development of real-time and collaborative applications, where changes are seamlessly synchronized among users. ","version":"Next","tagName":"h3"},{"title":"Building real-time applications is easier with local data","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#building-real-time-applications-is-easier-with-local-data","content":" With a local browser database, building real-time applications becomes more straightforward. The availability of local data allows for reactive data flows and dynamic user interfaces that instantly reflect changes in the underlying data. Real-time features can be seamlessly implemented, providing a rich and interactive user experience. ","version":"Next","tagName":"h3"},{"title":"Browser databases can scale better","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#browser-databases-can-scale-better","content":" Browser databases distribute the query workload to users' devices, allowing queries to run locally instead of relying solely on server resources. This decentralized approach improves scalability by reducing the burden on the server, resulting in a more efficient and responsive application. ","version":"Next","tagName":"h3"},{"title":"Running queries locally has low latency","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#running-queries-locally-has-low-latency","content":" Browser databases offer the advantage of running queries locally, resulting in low latency. Eliminating the need for server round-trips significantly improves query performance, ensuring faster data retrieval and a more responsive application. 
","version":"Next","tagName":"h3"},{"title":"Faster initial application start time","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#faster-initial-application-start-time","content":" Storing data in the browser reduces the initial application start time. Instead of waiting for data to be fetched from the server, the application can leverage the local database, resulting in faster initialization and improved user satisfaction right from the start. ","version":"Next","tagName":"h3"},{"title":"Easier integration with JavaScript frameworks","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#easier-integration-with-javascript-frameworks","content":" Browser databases, including RxDB, seamlessly integrate with popular JavaScript frameworks such as Angular, React.js, Vue.js, and Svelte. This integration allows developers to leverage the power of a database while working within the familiar environment of their preferred framework, enhancing productivity and ease of development. ","version":"Next","tagName":"h3"},{"title":"Store local data with encryption","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#store-local-data-with-encryption","content":" Security is a crucial aspect of data storage, especially when handling sensitive information. Browser databases, like RxDB, offer the capability to store local data with encryption, ensuring the confidentiality and protection of sensitive user data. ","version":"Next","tagName":"h3"},{"title":"Using a local database for state management","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#using-a-local-database-for-state-management","content":" Utilizing a local browser database for state management eliminates the need for traditional state management libraries like Redux or NgRx. 
This approach simplifies the application's architecture by leveraging the database's capabilities to handle state-related operations efficiently. ","version":"Next","tagName":"h3"},{"title":"Data is portable and always accessible by the user","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#data-is-portable-and-always-accessible-by-the-user","content":" When data is stored in the browser, it becomes portable and always accessible by the user. This ensures that users have control and ownership of their data, enhancing data privacy and accessibility. ","version":"Next","tagName":"h3"},{"title":"Why SQL databases like SQLite are not a good fit for the browser","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#why-sql-databases-like-sqlite-are-not-a-good-fit-for-the-browser","content":" While SQL databases, such as SQLite, excel in server-side scenarios, they are not always the optimal choice for browser-based applications. Here are some reasons why SQL databases may not be the best fit for the browser: ","version":"Next","tagName":"h2"},{"title":"Push/Pull based vs. reactive","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#pushpull-based-vs-reactive","content":" SQL databases typically rely on a push/pull mechanism, where the server pushes updates to the client or the client pulls data from the server. This approach is not inherently reactive and requires additional effort to implement real-time data updates. In contrast, browser databases like RxDB provide built-in reactive mechanisms, allowing the application to react to data changes seamlessly. 
","version":"Next","tagName":"h3"},{"title":"Build size of server-side databases","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#build-size-of-server-side-databases","content":" Server-side databases, designed to handle large-scale applications, often have significant build sizes that are unsuitable for browser applications. In contrast, browser databases are specifically optimized for browser environments and leverage browser APIs like IndexedDB, OPFS, and Webworker, resulting in smaller build sizes. ","version":"Next","tagName":"h3"},{"title":"Initialization time and performance","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#initialization-time-and-performance","content":" The initialization time and performance of server-side databases can be suboptimal in browser applications. Browser databases, on the other hand, are designed to provide fast initialization and efficient performance within the browser environment, ensuring a smooth user experience. ","version":"Next","tagName":"h3"},{"title":"Why RxDB is a good fit for the browser","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#why-rxdb-is-a-good-fit-for-the-browser","content":" RxDB stands out as an excellent choice for implementing a browser database solution. Here's why RxDB is a perfect fit for browser applications: ","version":"Next","tagName":"h2"},{"title":"Observable Queries (rxjs) to automatically update the UI on changes","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#observable-queries-rxjs-to-automatically-update-the-ui-on-changes","content":" RxDB provides Observable Queries, powered by RxJS, enabling automatic UI updates when data changes occur. This reactive approach eliminates the need for manual data synchronization and ensures a real-time and responsive user interface. 
const query = myCollection.find({ selector: { age: { $gt: 21 } } }); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); ","version":"Next","tagName":"h3"},{"title":"NoSQL JSON documents are a better fit for UIs","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#nosql-json-documents-are-a-better-fit-for-uis","content":" RxDB utilizes NoSQL JSON documents, which align naturally with UI development in JavaScript. JavaScript's native handling of JSON objects makes working with NoSQL documents more intuitive, simplifying UI-related operations. ","version":"Next","tagName":"h3"},{"title":"NoSQL has better TypeScript support compared to SQL","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#nosql-has-better-typescript-support-compared-to-sql","content":" TypeScript is widely used in modern JavaScript development. NoSQL databases, including RxDB, offer excellent TypeScript support, making it easier to build type-safe applications and leverage the benefits of static typing. ","version":"Next","tagName":"h3"},{"title":"Observable document fields","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#observable-document-fields","content":" RxDB allows observing individual document fields, providing granular reactivity. This feature enables efficient tracking of specific data changes and fine-grained UI updates, optimizing performance and responsiveness. ","version":"Next","tagName":"h3"},{"title":"Made in JavaScript, optimized for JavaScript applications","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#made-in-javascript-optimized-for-javascript-applications","content":" RxDB is built entirely in JavaScript, optimized for JavaScript applications. 
This ensures seamless integration with JavaScript codebases and maximizes performance within the browser environment. ","version":"Next","tagName":"h3"},{"title":"Optimized observed queries with the EventReduce Algorithm","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#optimized-observed-queries-with-the-eventreduce-algorithm","content":" RxDB employs the EventReduce Algorithm to optimize observed queries. This algorithm intelligently reduces unnecessary data transmissions, resulting in efficient query execution and improved performance. ","version":"Next","tagName":"h3"},{"title":"Built-in multi-tab support","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#built-in-multi-tab-support","content":" RxDB natively supports multi-tab applications, allowing data synchronization and replication across different tabs or instances of the same application. This feature ensures consistent data across the application and enhances collaboration and real-time experiences. ","version":"Next","tagName":"h3"},{"title":"Handling of schema changes","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#handling-of-schema-changes","content":" RxDB excels in handling schema changes, even when data is stored on multiple client devices. It provides mechanisms to handle schema migrations seamlessly, ensuring data integrity and compatibility as the application evolves. ","version":"Next","tagName":"h3"},{"title":"Storing documents compressed","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#storing-documents-compressed","content":" To optimize storage space, RxDB allows the compression of documents. Storing compressed documents reduces storage requirements and improves overall performance, especially in scenarios with large data volumes. 
","version":"Next","tagName":"h3"},{"title":"Flexible storage layer for various platforms","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#flexible-storage-layer-for-various-platforms","content":" RxDB offers a flexible storage layer, enabling code reuse across different platforms, including Electron.js, React Native, hybrid apps (e.g., Capacitor.js), and web browsers. This flexibility streamlines development efforts and ensures consistent data management across multiple platforms. ","version":"Next","tagName":"h3"},{"title":"Replication Algorithm for compatibility with any backend","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#replication-algorithm-for-compatibility-with-any-backend","content":" RxDB incorporates a Replication Algorithm that is open-source and can be made compatible with various backend systems. This compatibility allows seamless data synchronization with different backend architectures, such as your own servers, Firebase, CouchDB, NATS or WebSocket. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"RxDB: The benefits of Browser Databases","url":"/articles/browser-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support. RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which gives step-by-step instructions for setting up and using RxDB in your projects. RxDB empowers developers to unlock the power of browser databases, enabling efficient data management, real-time applications, and enhanced user experiences. 
By leveraging RxDB's features and benefits, you can take your browser-based applications to the next level of performance, scalability, and responsiveness. ","version":"Next","tagName":"h2"},{"title":"What to compare with","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#what-to-compare-with","content":" RxDB is an observable, replicating, local-first JavaScript database. So it only makes sense to list similar projects as alternatives, not just any database or JavaScript store library. However, I will list some projects that RxDB is often compared with, even if it only makes sense for some use cases. This list should be seen as an entry point for your personal evaluation of which tool could work for your project. ","version":"Next","tagName":"h3"},{"title":"Firebase","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#firebase","content":" Firebase is a platform developed by Google for creating mobile and web applications. Firebase has many features and products, two of which are client-side databases: the Realtime Database and Cloud Firestore. Firebase - Realtime Database The Firebase Realtime Database was the first database in Firebase. It has to be mentioned that in this context, "realtime" means "realtime replication", not "realtime computing". The Firebase Realtime Database stores data as a big unstructured JSON tree that is replicated between clients and the backend. Firebase - Cloud Firestore Firestore is the successor to the Realtime Database. The big difference is that it behaves more like a 'normal' database that stores data as documents inside of collections. The conflict resolution strategy of Firestore is always last-write-wins, which may or may not be suitable for your use case. 
The biggest difference to RxDB is that Firebase products can only be used on top of the Firebase cloud-hosted backend, which creates vendor lock-in. RxDB can replicate with any self-hosted CouchDB server or custom GraphQL endpoints. You can even replicate Firestore to RxDB with the Firestore Replication Plugin. ","version":"Next","tagName":"h3"},{"title":"Meteor","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#meteor","content":" Meteor (since 2012) is one of the oldest technologies for JavaScript realtime applications. Meteor is not a library but a whole framework with its own package manager, database management and replication protocol. Because of how it works, it has proven hard to integrate with other modern JavaScript frameworks like Angular, Vue.js or Svelte. Meteor uses MongoDB in the backend and can replicate with a Minimongo database in the frontend. While testing, it has proven to be impossible to make a Meteor app offline-first capable. There are some projects that might do this, but all are unmaintained. ","version":"Next","tagName":"h3"},{"title":"Minimongo","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#minimongo","content":" Forked in Jan 2014 from MeteorJS's minimongo package, Minimongo is a client-side, in-memory, JavaScript version of MongoDB with backend replication over HTTP. Similar to MongoDB, it stores data in documents inside of collections and also has the same query syntax. Minimongo has different storage adapters for IndexedDB, WebSQL, LocalStorage and SQLite. Compared to RxDB, Minimongo has no concept of revisions or conflict handling, which might lead to undefined behavior when used with replication or in multiple browser tabs. Minimongo has no observable queries or changestream. 
","version":"Next","tagName":"h3"},{"title":"WatermelonDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#watermelondb","content":" WatermelonDB is a reactive & asynchronous JavaScript database. While originally made for React and React Native, it can also be used with other JavaScript frameworks. The main goal of WatermelonDB is performance within an application with lots of data. In React Native, WatermelonDB uses the provided SQLite database. There is also an Expo plugin for WatermelonDB. In a browser, WatermelonDB uses the LokiJS in-memory database to store and query data. WatermelonDB is one of the rare projects that support both Flow and TypeScript at the same time. ","version":"Next","tagName":"h3"},{"title":"AWS Amplify","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#aws-amplify","content":" AWS Amplify is a collection of tools and libraries to develop web and mobile frontend applications. Similar to Firebase, it provides everything needed like authentication, analytics, a REST API, storage and so on. Everything is hosted in the AWS Cloud, even though they state that "AWS Amplify is designed to be open and pluggable for any custom backend or service". For realtime replication, AWS Amplify can connect to an AWS AppSync GraphQL endpoint. ","version":"Next","tagName":"h3"},{"title":"AWS Datastore","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#aws-datastore","content":" Since December 2019 the Amplify library includes the AWS Datastore, which is a document-based, client-side database that is able to replicate data via AWS AppSync in the background. The main difference to other projects is the complex project configuration via the Amplify CLI and the somewhat confusing query syntax that works over functions. 
Complex queries with multiple OR/AND statements are not possible, which might change in the future. Local development is hard because the AWS AppSync mock does not support realtime replication. It also is not really offline-first because a user login is always required. // An AWS Datastore OR query const posts = await DataStore.query(Post, c => c.or( c => c.rating("gt", 4).status("eq", PostStatus.PUBLISHED) )); // An AWS Datastore SORT query const posts = await DataStore.query(Post, Predicates.ALL, { sort: s => s.rating(SortDirection.ASCENDING).title(SortDirection.DESCENDING) }); The biggest difference to RxDB is that you have to use the AWS cloud backends. This might not be a problem if your data is at AWS anyway. ","version":"Next","tagName":"h3"},{"title":"RethinkDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#rethinkdb","content":" RethinkDB is a backend database that pushes dynamic JSON data to the client in realtime. It was founded in 2009 and the company shut down in 2016. RethinkDB is not a client-side database; it streams data from the backend to the client, which of course does not work while offline. ","version":"Next","tagName":"h3"},{"title":"Horizon","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#horizon","content":" Horizon is the client-side library for RethinkDB which provides useful functions like authentication, permission management and subscription to a RethinkDB backend. Offline support never made it to Horizon. ","version":"Next","tagName":"h3"},{"title":"Supabase","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#supabase","content":" Supabase labels itself as "an open source Firebase alternative". It is a collection of open source tools that together mimic many Firebase features, most of them by providing a wrapper around a PostgreSQL database. 
While it has realtime queries that run over the wire, like with RethinkDB, Supabase has no client-side storage or replication feature and therefore is not offline-first. ","version":"Next","tagName":"h3"},{"title":"CouchDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#couchdb","content":" Apache CouchDB is a server-side, document-oriented database that is mostly known for its multi-master replication feature. Instead of having a master-slave replication, with CouchDB you can run replication in any constellation without having a master server as a bottleneck, and servers can even go off- and online at any time. This comes with the drawback of slow replication with significant network overhead. CouchDB has a changestream and a query syntax similar to MongoDB. ","version":"Next","tagName":"h3"},{"title":"PouchDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#pouchdb","content":" PouchDB is a JavaScript database that is compatible with most of the CouchDB API. It has an adapter system that allows you to switch out the underlying storage layer. There are many adapters, for example for IndexedDB, SQLite, the Filesystem and so on. The main benefit is being able to replicate data with any CouchDB-compatible endpoint. Because of the CouchDB compatibility, PouchDB has to do a lot of overhead in handling the revision tree of documents, which is why it can show bad performance for bigger datasets. RxDB was originally built around PouchDB until the storage layer was abstracted out in version 10.0.0, so it now allows different RxStorage implementations to be used. 
","version":"Next","tagName":"h3"},{"title":"Couchbase","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#couchbase","content":" Couchbase (originally known as Membase) is another NoSQL document database made for realtime applications. It uses the N1QL query language, which is more SQL-like compared to other NoSQL query languages. In theory you can achieve replication of a Couchbase with a PouchDB database, but this has proven not to be that easy. ","version":"Next","tagName":"h3"},{"title":"Cloudant","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#cloudant","content":" Cloudant is a cloud-based service that is based on CouchDB and has mostly the same features. It was originally designed for cloud computing where data can automatically be distributed between servers. But it can also be used to replicate with frontend PouchDB instances to create scalable web applications. It was bought by IBM in 2014, and since 2018 the Cloudant Shared Plan has been retired and migrated to IBM Cloud. ","version":"Next","tagName":"h3"},{"title":"Hoodie","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#hoodie","content":" Hoodie is a backend solution that enables offline-first JavaScript frontend development without having to write backend code. Its main goal is to abstract away configuration into simple calls to the Hoodie API. It uses CouchDB in the backend and PouchDB in the frontend to enable offline-first capabilities. The last commit to Hoodie was one year ago and the website (hood.ie) is offline, which indicates it is no longer an active project. ","version":"Next","tagName":"h3"},{"title":"LokiJS","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#lokijs","content":" LokiJS is an embeddable, in-memory JavaScript database. 
Because everything is handled in memory, LokiJS has excellent performance when mutating or querying data. You can still persist to a permanent storage (IndexedDB, Filesystem etc.) with one of the provided storage adapters. The persistence happens after a timeout is reached after a write, or before the JavaScript process exits. This also means you could lose data when the JavaScript process exits ungracefully, for example when the device loses power or the browser crashes. While the project is not that active anymore, it is more finished than unmaintained. RxDB supports using LokiJS as RxStorage. ","version":"Next","tagName":"h3"},{"title":"Gundb","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#gundb","content":" GUN is a JavaScript graph database. While having many features, the decentralized replication is the main unique selling point. You can replicate data peer-to-peer without any centralized backend server. GUN has several other features that are useful on top of that, like encryption and authentication. While testing, it was really hard to get basic things running. GUN is open source, but because of how the source code is written, it is very difficult to understand what is going wrong. ","version":"Next","tagName":"h3"},{"title":"sql.js","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#sqljs","content":" sql.js is a JavaScript library to run SQLite on the web. It uses a virtual database file stored in memory and does not have any persistence. All data is lost once the JavaScript process exits. sql.js is created by compiling SQLite to WebAssembly, so it has about the same features as SQLite. For older browsers there is a JavaScript fallback. 
","version":"Next","tagName":"h3"},{"title":"absurd-sql","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#absurd-sql","content":" absurd-sql is a project that implements IndexedDB-based persistence for sql.js. Instead of directly writing data into IndexedDB, it treats IndexedDB like a disk and stores data in blocks there, which has shown much better performance, mostly because of how expensive IndexedDB transactions are. ","version":"Next","tagName":"h3"},{"title":"NeDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#nedb","content":" NeDB was an embedded persistent or in-memory database for Node.js, nw.js, Electron and browsers. It was document-oriented and had the same query syntax as MongoDB. Like LokiJS, it had persistence adapters for IndexedDB etc. to persist the database state on disk. The last commit to NeDB was in 2016. ","version":"Next","tagName":"h3"},{"title":"Dexie.js","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#dexiejs","content":" Dexie.js is a minimalistic wrapper for IndexedDB. While providing a better API than plain IndexedDB, Dexie also improves performance by batching transactions and other optimizations. It also adds additional non-IndexedDB features like observable queries, multi-tab support and React hooks. Compared to RxDB, Dexie.js does not support complex (MongoDB-like) queries and requires a lot of fiddling when a document range of a specific index must be fetched. Dexie.js is used by WhatsApp Web, Microsoft To Do and GitHub Desktop. RxDB supports using Dexie.js as RxStorage, which enhances IndexedDB with RxDB features like MongoDB-like queries etc. 
","version":"Next","tagName":"h3"},{"title":"LowDB","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#lowdb","content":" LowDB is a small, local JSON database powered by the Lodash library. It is designed to be simple, easy to use, and straightforward. LowDB allows you to perform native JavaScript queries and persist data in a flat JSON file. Written in TypeScript, it's particularly well-suited for small projects, prototyping, or when you need a lightweight, file-based database. As an alternative to LowDB, RxDB offers real-time reactivity, allowing developers to subscribe to database changes, a feature not natively available in LowDB. Additionally, RxDB provides robust query capabilities, including the ability to subscribe to query results for automatic UI updates. These features make RxDB a strong alternative to LowDB for more complex and dynamic applications. ","version":"Next","tagName":"h3"},{"title":"MongoDB Realm","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#mongodb-realm","content":" Originally, Realm was a mobile database for Android and iOS. Later they added support for other languages and runtimes, including JavaScript. It was meant as a replacement for SQLite but is more like an object store than a full SQL database. In 2019 MongoDB bought Realm and changed the project's focus. Now Realm is made for replication with MongoDB Realm Sync based on the MongoDB Atlas Cloud platform. This tight coupling to the MongoDB cloud service is a big downside for most use cases. ","version":"Next","tagName":"h3"},{"title":"Apollo","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#apollo","content":" The Apollo GraphQL platform is made to transfer data from a server to UI applications over GraphQL endpoints. 
It contains several tools like GraphQL clients in different languages or libraries to create GraphQL endpoints. While it has different caching features for offline usage, compared to RxDB it is not fully offline first because caching alone does not mean your application is fully usable when the user is offline. ","version":"Next","tagName":"h3"},{"title":"Replicache","type":1,"pageTitle":"Alternatives for realtime offline-first JavaScript applications","url":"/alternatives.html#replicache","content":" Replicache is a client-side sync framework for building realtime, collaborative, local-first web apps. It claims to work with most backend stacks. In contrast to other local-first tools, Replicache does not work like a local database. Instead, it runs on so-called mutators that unify behavior on the client and server side. So instead of implementing and calling REST routes on both sides of your stack, you implement mutators that define a specific delta behavior based on the input data. To observe data in Replicache, there are subscriptions that notify your frontend application about changes to the state. Replicache can be used in most frontend technologies like browsers, React/Remix, Vercel and React Native. While Replicache can be installed and used from npm, the Replicache source code is not open source and the Replicache GitHub repo does not allow you to inspect or debug it. Still, you can use Replicache in non-commercial projects, or for companies with < $200k revenue (ARR) and < $500k in funding. 
Read further: Offline First Database Comparison, https://jaredforsyth.com/tags/local-first/ ","version":"Next","tagName":"h3"},{"title":"RxDB as a Database in an Angular Application","type":0,"sectionRef":"#","url":"/articles/angular-database.html","content":"","keywords":"","version":"Next"},{"title":"Angular Web Applications","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#angular-web-applications","content":" Angular is a powerful JavaScript framework developed and maintained by Google. It enables developers to build single-page applications (SPAs) with a modular and component-based approach. Angular provides a comprehensive set of tools and features for creating dynamic and responsive web applications. ","version":"Next","tagName":"h2"},{"title":"Importance of Databases in Angular Applications","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#importance-of-databases-in-angular-applications","content":" Databases play a vital role in Angular applications by providing a structured and efficient way to store, retrieve, and manage data. Whether it's handling user authentication, caching data, or persisting application state, a robust database solution is essential for ensuring optimal performance and user experience. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Database Solution","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#introducing-rxdb-as-a-database-solution","content":" RxDB stands for Reactive Database and is built on the principles of reactive programming. It combines the best features of NoSQL databases with the power of reactive programming to provide a scalable and efficient database solution. RxDB offers seamless integration with Angular applications and brings several unique features that make it an attractive choice for developers. 
","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#getting-started-with-rxdb","content":" To begin our journey with RxDB, let's understand its key concepts and features. ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#what-is-rxdb","content":" RxDB is a client-side database that follows the principles of reactive programming. It is built on top of IndexedDB, the native browser database, and leverages the RxJS library for reactive data handling. RxDB provides a simple and intuitive API for managing data and offers features like data replication, multi-tab support, and efficient query handling. ","version":"Next","tagName":"h3"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#reactive-data-handling","content":" At the core of RxDB is the concept of reactive data handling. RxDB leverages observables and reactive streams to enable real-time updates and data synchronization. With RxDB, you can easily subscribe to data changes and react to them in a reactive and efficient manner. ","version":"Next","tagName":"h3"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#offline-first-approach","content":" One of the standout features of RxDB is its offline-first approach. It allows you to build applications that can work seamlessly in offline scenarios. RxDB stores data locally and automatically synchronizes changes with the server when the network becomes available. This capability is particularly useful for applications that need to function in low-connectivity or unreliable network environments. 
","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#data-replication","content":" RxDB provides built-in support for data replication between clients and servers. This means you can synchronize data across multiple devices or instances of your application effortlessly. RxDB handles conflict resolution and ensures that data remains consistent across all connected clients. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#observable-queries","content":" RxDB offers a powerful querying mechanism with support for observable queries. This allows you to create dynamic queries that automatically update when the underlying data changes. By leveraging RxDB's observable queries, you can build reactive UI components that respond to data changes in real-time. ","version":"Next","tagName":"h3"},{"title":"Multi-Tab Support","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#multi-tab-support","content":" RxDB provides out-of-the-box support for multi-tab scenarios. This means that if your Angular application is running in multiple browser tabs, RxDB automatically keeps the data in sync across all tabs. It ensures that changes made in one tab are immediately reflected in others, providing a seamless user experience. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. Other Angular Database Options","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#rxdb-vs-other-angular-database-options","content":" While there are other database options available for Angular applications, RxDB stands out with its reactive programming model, offline-first approach, and built-in synchronization capabilities. 
Unlike traditional SQL databases, RxDB's NoSQL-like structure and observables-based API make it well-suited for real-time applications and complex data scenarios. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in an Angular Application","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#using-rxdb-in-an-angular-application","content":" Now that we have a good understanding of RxDB and its features, let's explore how to integrate it into an Angular application. ","version":"Next","tagName":"h2"},{"title":"Installing RxDB in an Angular App","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#installing-rxdb-in-an-angular-app","content":" To use RxDB in an Angular application, we first need to install the necessary dependencies. You can install RxDB using npm or yarn by running the following command: npm install rxdb --save Once installed, you can import RxDB into your Angular application and start using its API to create and manage databases. ","version":"Next","tagName":"h3"},{"title":"Patch Change Detection with zone.js","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#patch-change-detection-with-zonejs","content":" Angular uses change detection to detect and update UI elements when data changes. However, RxDB's data handling is based on observables, which can sometimes bypass Angular's change detection mechanism. To ensure that changes made in RxDB are detected by Angular, we need to patch the change detection mechanism using zone.js. Zone.js is a library that intercepts and tracks asynchronous operations, including observables. By patching zone.js, we can make sure that Angular is aware of changes happening in RxDB. 
warning RxDB creates RxJS observables outside of Angular's zone, so you have to import the RxJS patch to ensure that Angular's change detection works correctly. // app.component.ts import 'zone.js/plugins/zone-patch-rxjs'; ","version":"Next","tagName":"h3"},{"title":"Use the Angular async pipe to observe an RxDB Query","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-the-angular-async-pipe-to-observe-an-rxdb-query","content":" Angular provides the async pipe, which is a convenient way to subscribe to observables and handle the subscription lifecycle automatically. When working with RxDB, you can use the async pipe to observe an RxDB query and bind the results directly to your Angular template. This ensures that the UI stays in sync with the data changes emitted by the RxDB query. constructor( private dbService: DatabaseService, private dialog: MatDialog ) { this.heroes$ = this.dbService .db.hero // collection .find({ // query selector: {}, sort: [{ name: 'asc' }] }) .$; } <ul *ngFor="let hero of heroes$ | async as heroes;"> <li>{{hero.name}}</li> </ul> ","version":"Next","tagName":"h3"},{"title":"Different RxStorage layers for RxDB","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB supports multiple storage layers for persisting data. Some of the available storage options include: Dexie.js RxStorage: Dexie.js is a minimalistic IndexedDB wrapper that provides a simple API for working with IndexedDB. RxDB leverages Dexie.js as its default storage layer.LokiJS RxStorage: LokiJS is an in-memory document-oriented database that can also persist data to IndexedDB. RxDB integrates with LokiJS to provide an alternative storage option.IndexedDB RxStorage: RxDB directly supports IndexedDB as a storage layer. 
IndexedDB is a low-level browser database that offers good performance and reliability.OPFS RxStorage: The OPFS RxStorage for RxDB is built on top of the File System Access API, which is available in all modern browsers. It provides an API to access a sandboxed private file system to persistently store and retrieve data. Compared to other persistent storage options in the browser (like IndexedDB), the OPFS API has much better performance.Memory RxStorage: In addition to persistent storage options, RxDB also provides a memory-based storage layer. This is useful for testing or scenarios where you don't need long-term data persistence. You can choose the storage layer that best suits your application's requirements and configure RxDB accordingly. ","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" Data replication between an Angular application and a server is a common requirement. RxDB simplifies this process and provides built-in support for data synchronization. Let's explore how to replicate data between an Angular application and a server using RxDB. ","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#offline-first-approach-1","content":" One of the key strengths of RxDB is its offline-first approach. It allows Angular applications to function seamlessly even in offline scenarios. RxDB stores data locally and automatically synchronizes changes with the server when the network becomes available. This capability is particularly useful for applications that need to operate in low-connectivity or unreliable network environments. 
","version":"Next","tagName":"h3"},{"title":"Conflict Resolution","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#conflict-resolution","content":" In a distributed system, conflicts can arise when multiple clients modify the same data simultaneously. RxDB offers conflict resolution mechanisms to handle such scenarios. You can define conflict resolution strategies based on your application's requirements. RxDB provides hooks and events to detect conflicts and resolve them in a consistent manner. ","version":"Next","tagName":"h3"},{"title":"Bidirectional Synchronization","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#bidirectional-synchronization","content":" RxDB supports bidirectional data synchronization, allowing updates from both the client and server to be replicated seamlessly. This ensures that data remains consistent across all connected clients and the server. RxDB handles conflicts and resolves them based on the defined conflict resolution strategies. ","version":"Next","tagName":"h3"},{"title":"Real-Time Updates","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#real-time-updates","content":" RxDB provides real-time updates by leveraging reactive programming principles. Changes made to the data are automatically propagated to all connected clients in real-time. Angular applications can subscribe to these updates and update the user interface accordingly. This real-time capability enables collaborative features and enhances the overall user experience. 
","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#advanced-rxdb-features-and-techniques","content":" RxDB offers several advanced features and techniques that can further enhance your Angular application. ","version":"Next","tagName":"h2"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#indexing-and-performance-optimization","content":" To improve query performance, RxDB allows you to define indexes on specific fields of your documents. Indexing enables faster data retrieval and query execution, especially when working with large datasets. By strategically creating indexes, you can optimize the performance of your Angular application. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#encryption-of-local-data","content":" RxDB provides built-in support for encrypting local data using the Web Crypto API. With encryption, you can protect sensitive data stored in the client-side database. RxDB transparently encrypts the data, ensuring that it remains secure even if the underlying storage is compromised. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#change-streams-and-event-handling","content":" RxDB exposes change streams, which allow you to listen for data changes at a database or collection level. By subscribing to change streams, you can react to data modifications and perform specific actions, such as updating the UI or triggering notifications. Change streams enable real-time event handling in your Angular application. 
","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#json-key-compression","content":" To reduce the storage footprint and improve performance, RxDB supports JSON key compression. With key compression, RxDB replaces long keys with shorter aliases, reducing the overall storage size. This optimization is particularly useful when working with large datasets or frequently updating data. ","version":"Next","tagName":"h3"},{"title":"Best Practices for Using RxDB in Angular Applications","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#best-practices-for-using-rxdb-in-angular-applications","content":" To make the most of RxDB in your Angular application, consider the following best practices: ","version":"Next","tagName":"h2"},{"title":"Use Async Pipe for Subscriptions so you do not have to unsubscribe","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-async-pipe-for-subscriptions-so-you-do-not-have-to-unsubscribe","content":" Angular's async pipe is a powerful tool for handling observables in templates. By using the async pipe, you can avoid the need to manually subscribe and unsubscribe from RxDB observables. Angular takes care of the subscription lifecycle, ensuring that resources are released when they are no longer needed. Instead of manually subscribing to Observables, you should always prefer the async pipe. 
// WRONG: let amount; this.dbService .db.hero .find({ selector: {}, sort: [{ name: 'asc' }] }) .$.subscribe(docs => { amount = 0; docs.forEach(d => amount += d.points); }); // RIGHT: this.amount$ = this.dbService .db.hero .find({ selector: {}, sort: [{ name: 'asc' }] }) .$.pipe( map(docs => { let amount = 0; docs.forEach(d => amount += d.points); return amount; }) ); ","version":"Next","tagName":"h3"},{"title":"Use custom reactivity to have signals instead of rxjs observables","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-custom-reactivity-to-have-signals-instead-of-rxjs-observables","content":" RxDB supports adding custom reactivity factories that allow you to get Angular signals out of the database instead of RxJS observables. read more. ","version":"Next","tagName":"h3"},{"title":"Use Angular Services for Database creation","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#use-angular-services-for-database-creation","content":" To ensure proper separation of concerns and maintain a clean codebase, it is recommended to create an Angular service responsible for managing the RxDB database instance. This service can handle database creation, initialization, and provide methods for interacting with the database throughout your application. ","version":"Next","tagName":"h3"},{"title":"Efficient Data Handling","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#efficient-data-handling","content":" RxDB provides various mechanisms for efficient data handling, such as batching updates, debouncing, and throttling. Leveraging these techniques can help optimize performance and reduce unnecessary UI updates. Consider the specific data handling requirements of your application and choose the appropriate strategies provided by RxDB. 
","version":"Next","tagName":"h3"},{"title":"Data Synchronization Strategies","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#data-synchronization-strategies","content":" When working with data synchronization between clients and servers, it's important to consider strategies for conflict resolution and handling network failures. RxDB provides plugins and hooks that allow you to customize the replication behavior and implement specific synchronization strategies tailored to your application's needs. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#conclusion","content":" RxDB is a powerful database solution for Angular applications, offering reactive data handling, offline-first capabilities, and seamless data synchronization. By integrating RxDB into your Angular application, you can build responsive and scalable web applications that provide a rich user experience. Whether you're building real-time collaborative apps, progressive web applications, or offline-capable applications, RxDB's features and techniques make it a valuable addition to your Angular development toolkit. 
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database in an Angular Application","url":"/articles/angular-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects.RxDB Angular Example at GitHub ","version":"Next","tagName":"h2"},{"title":"Browser Storage - RxDB as a Database for Browsers","type":0,"sectionRef":"#","url":"/articles/browser-storage.html","content":"","keywords":"","version":"Next"},{"title":"Localstorage","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#localstorage","content":" Localstorage is a straightforward way to store small amounts of data in the user's web browser. It operates on a simple key-value basis and is relatively easy to use. While it has limitations, it is suitable for basic data storage requirements. ","version":"Next","tagName":"h3"},{"title":"IndexedDB","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#indexeddb","content":" IndexedDB, on the other hand, offers a more robust and structured approach to browser-based data storage. It can handle larger datasets and complex queries, making it a valuable choice for more advanced web applications. 
","version":"Next","tagName":"h3"},{"title":"Why Store Data in the Browser","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#why-store-data-in-the-browser","content":" Now that we've explored the methods of storing data in the browser, let's delve into why this is a beneficial strategy for web developers: Caching: Storing data in the browser allows you to cache frequently used information. This means that your web application can access essential data more quickly because it doesn't need to repeatedly fetch it from a server. This results in a smoother and more responsive user experience. Offline Access: One significant advantage of browser storage is that data becomes portable and remains accessible even when the user is offline. This feature ensures that users can continue to use your application, view their saved information, and make changes, irrespective of their internet connection status. Faster Real-time Applications: For real-time applications, having data stored locally in the browser significantly enhances performance. Local data allows your application to respond faster to user interactions, creating a more seamless and responsive user interface. Low Latency Queries: When you run queries locally within the browser, you minimize the latency associated with network requests. This results in near-instant access to data, which is particularly crucial for applications that require rapid data retrieval. Faster Initial Application Start Time: By preloading essential data into browser storage, you can reduce the initial load time of your web application. Users can start using your application more swiftly, which is essential for making a positive first impression. Store Local Data with Encryption: For applications that deal with sensitive data, browser storage allows you to implement encryption to secure the stored information. 
This ensures that even if data is stored on the user's device, it remains confidential and protected. In summary, storing data in the browser offers several advantages, including improved performance, offline access, and enhanced user experiences. Localstorage and IndexedDB are two valuable tools that developers can utilize to leverage these benefits and create web applications that are more responsive and user-friendly. ","version":"Next","tagName":"h2"},{"title":"Browser Storage Limitations","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#browser-storage-limitations","content":" While browser storage, such as Localstorage and IndexedDB, offers many advantages, it's important to be aware of its limitations: Slower Performance Compared to Native Databases: Browser-based storage solutions can't match the performance of native server-side databases. They may experience slower data retrieval and processing, especially for large datasets or complex operations. Storage Space Limitations: Browsers impose restrictions on the amount of data that can be stored locally. This limitation can be problematic for applications with extensive data storage requirements, potentially necessitating creative solutions to manage data effectively. ","version":"Next","tagName":"h2"},{"title":"Why SQL Databases Like SQLite Aren't a Good Fit for the Browser","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#why-sql-databases-like-sqlite-arent-a-good-fit-for-the-browser","content":" SQL databases like SQLite, while powerful in server environments, may not be the best choice for browser-based applications due to various reasons: ","version":"Next","tagName":"h2"},{"title":"Push/Pull Based vs. 
Reactive","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#pushpull-based-vs-reactive","content":" SQL databases often use a push/pull model for data synchronization. This approach is less reactive and may not align well with the real-time nature of web applications, where immediate updates to the user interface are crucial. ","version":"Next","tagName":"h3"},{"title":"Build Size of Server-Side Databases","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#build-size-of-server-side-databases","content":" Server-side databases like SQLite have a significant build size, which can increase the initial load time of web applications. This can result in a suboptimal user experience, particularly for users with slower internet connections. ","version":"Next","tagName":"h3"},{"title":"Initialization Time and Performance","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#initialization-time-and-performance","content":" SQL databases are optimized for server environments, and their initialization processes and performance characteristics may not align with the needs of web applications. They might not offer the swift performance required for seamless user interactions. 
","version":"Next","tagName":"h3"},{"title":"Why RxDB Is a Good Fit as Browser Storage","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#why-rxdb-is-a-good-fit-as-browser-storage","content":" RxDB is an excellent choice for browser-based storage due to its numerous features and advantages: ","version":"Next","tagName":"h2"},{"title":"Flexible Storage Layer for Various Platforms","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#flexible-storage-layer-for-various-platforms","content":" RxDB offers a flexible storage layer that can seamlessly integrate with different platforms, making it versatile and adaptable to various application needs. ","version":"Next","tagName":"h3"},{"title":"NoSQL JSON Documents Are a Better Fit for UIs","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#nosql-json-documents-are-a-better-fit-for-uis","content":" NoSQL JSON documents, used by RxDB, are well-suited for user interfaces. They provide a natural and efficient way to structure and display data in web applications. ","version":"Next","tagName":"h3"},{"title":"NoSQL Has Better TypeScript Support Compared to SQL","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#nosql-has-better-typescript-support-compared-to-sql","content":" RxDB boasts robust TypeScript support, which is beneficial for developers who prefer type safety and code predictability in their projects. ","version":"Next","tagName":"h3"},{"title":"Observable Document Fields","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#observable-document-fields","content":" RxDB enables developers to observe individual document fields, offering fine-grained control over data tracking and updates. 
","version":"Next","tagName":"h3"},{"title":"Made in JavaScript, Optimized for JavaScript Applications","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#made-in-javascript-optimized-for-javascript-applications","content":" Being built in JavaScript and optimized for JavaScript applications, RxDB seamlessly integrates into web development stacks, minimizing compatibility issues. ","version":"Next","tagName":"h3"},{"title":"Observable Queries (rxjs) to Automatically Update the UI on Changes","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#observable-queries-rxjs-to-automatically-update-the-ui-on-changes","content":" RxDB's support for Observable Queries allows the user interface to update automatically in real-time when data changes. This reactivity enhances the user experience and simplifies UI development. const query = myCollection.find({ selector: { age: { $gt: 21 } } }); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); ","version":"Next","tagName":"h3"},{"title":"Optimized Observed Queries with the EventReduce Algorithm","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#optimized-observed-queries-with-the-eventreduce-algorithm","content":" RxDB's EventReduce Algorithm ensures efficient data handling and rendering, improving overall performance and responsiveness. ","version":"Next","tagName":"h3"},{"title":"Handling of Schema Changes","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#handling-of-schema-changes","content":" RxDB provides built-in support for handling schema changes, simplifying database management when updates are required. 
","version":"Next","tagName":"h3"},{"title":"Built-In Multi-Tab Support","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#built-in-multi-tab-support","content":" For applications requiring multi-tab support, RxDB natively handles data consistency across different browser tabs, streamlining data synchronization. ","version":"Next","tagName":"h3"},{"title":"Storing Documents Compressed","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#storing-documents-compressed","content":" Efficient data storage is achieved through document compression, reducing storage space requirements and enhancing overall performance. ","version":"Next","tagName":"h3"},{"title":"Replication Algorithm for Compatibility with Any Backend","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#replication-algorithm-for-compatibility-with-any-backend","content":" RxDB's Replication Algorithm facilitates compatibility with various backend systems, ensuring seamless data synchronization between the browser and server. ","version":"Next","tagName":"h3"},{"title":"Summary","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#summary","content":" In conclusion, RxDB is a powerful and feature-rich solution for browser-based storage. Its adaptability, real-time capabilities, TypeScript support, and optimization for JavaScript applications make it an ideal choice for modern web development projects, addressing the limitations of traditional SQL databases in the browser. Developers can harness RxDB to create efficient, responsive, and user-friendly web applications that leverage the full potential of browser storage. 
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"Browser Storage - RxDB as a Database for Browsers","url":"/articles/browser-storage.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser storage, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support. RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects. ","version":"Next","tagName":"h2"},{"title":"PouchDB Adapters","type":0,"sectionRef":"#","url":"/adapters.html","content":"","keywords":"","version":"Next"},{"title":"Memory","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#memory","content":" In any environment, you can use the memory-adapter. It stores the data in the javascript runtime memory. This means it is not persistent and the data is lost when the process terminates. Use this adapter when: You want to have really good performance. You do not want persistent state, for example in your test suite. import { createRxDatabase } from 'rxdb'; import { addPouchPlugin, getRxStoragePouch } from 'rxdb/plugins/pouchdb'; // npm install pouchdb-adapter-memory --save addPouchPlugin(require('pouchdb-adapter-memory')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('memory') }); ","version":"Next","tagName":"h2"},{"title":"Memdown","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#memdown","content":" With RxDB you can also use adapters that implement abstract-leveldown like the memdown-adapter. 
// npm install memdown --save // npm install pouchdb-adapter-leveldb --save addPouchPlugin(require('pouchdb-adapter-leveldb')); // leveldown adapters need the leveldb plugin to work const memdown = require('memdown'); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch(memdown) // the full leveldown-module }); Browser ","version":"Next","tagName":"h2"},{"title":"IndexedDB","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#indexeddb","content":" The IndexedDB adapter stores the data inside of IndexedDB. Use this in browser environments as the default. // npm install pouchdb-adapter-idb --save addPouchPlugin(require('pouchdb-adapter-idb')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('idb') }); ","version":"Next","tagName":"h2"},{"title":"IndexedDB","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#indexeddb-1","content":" A reimplementation of the indexeddb adapter which uses native secondary indexes. It should have much better performance but can behave differently in some edge cases. note Multiple users have reported problems with this adapter. It is not recommended to use this adapter. // npm install pouchdb-adapter-indexeddb --save addPouchPlugin(require('pouchdb-adapter-indexeddb')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('indexeddb') }); ","version":"Next","tagName":"h2"},{"title":"Websql","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#websql","content":" This adapter stores the data inside of websql. It has a different performance behavior. Websql is deprecated. You should not use the websql adapter unless you have a really good reason. 
// npm install pouchdb-adapter-websql --save addPouchPlugin(require('pouchdb-adapter-websql')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('websql') }); NodeJS ","version":"Next","tagName":"h2"},{"title":"leveldown","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#leveldown","content":" This adapter uses a LevelDB C++ binding to store the data on the filesystem. It has the best performance compared to other filesystem adapters. This adapter cannot be used when multiple nodejs-processes access the same filesystem folders for storage. // npm install leveldown --save // npm install pouchdb-adapter-leveldb --save addPouchPlugin(require('pouchdb-adapter-leveldb')); // leveldown adapters need the leveldb plugin to work const leveldown = require('leveldown'); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch(leveldown) // the full leveldown-module }); // or use a specific folder to store the data const database = await createRxDatabase({ name: '/root/user/project/mydatabase', storage: getRxStoragePouch(leveldown) // the full leveldown-module }); ","version":"Next","tagName":"h2"},{"title":"Node-Websql","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#node-websql","content":" This adapter uses the node-websql-shim to store data on the filesystem. Its advantages are that it does not need a leveldb build and it can be used when multiple nodejs-processes use the same database-files. 
// npm install pouchdb-adapter-node-websql --save addPouchPlugin(require('pouchdb-adapter-node-websql')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('websql') // the name of your adapter }); // or use a specific folder to store the data const database = await createRxDatabase({ name: '/root/user/project/mydatabase', storage: getRxStoragePouch('websql') // the name of your adapter }); React-Native ","version":"Next","tagName":"h2"},{"title":"react-native-sqlite","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#react-native-sqlite","content":" Uses ReactNative SQLite as storage. It claims to be much faster than the asyncstorage adapter. To use it, you have to do some steps from this tutorial. First install pouchdb-adapter-react-native-sqlite and react-native-sqlite-2. npm install pouchdb-adapter-react-native-sqlite react-native-sqlite-2 Then you have to link the library. react-native link react-native-sqlite-2 You also have to add some polyfills which are needed but not included in react-native. npm install base-64 events import { decode, encode } from 'base-64' if (!global.btoa) { global.btoa = encode; } if (!global.atob) { global.atob = decode; } // Avoid using node dependent modules process.browser = true; Then you can use it inside of your code. 
import { createRxDatabase } from 'rxdb'; import { addPouchPlugin, getRxStoragePouch } from 'rxdb/plugins/pouchdb'; import SQLite from 'react-native-sqlite-2' import SQLiteAdapterFactory from 'pouchdb-adapter-react-native-sqlite' const SQLiteAdapter = SQLiteAdapterFactory(SQLite) addPouchPlugin(SQLiteAdapter); addPouchPlugin(require('pouchdb-adapter-http')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('react-native-sqlite') // the name of your adapter }); ","version":"Next","tagName":"h2"},{"title":"asyncstorage","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#asyncstorage","content":" Uses react-native's asyncstorage. note There are known problems with this adapter and it is not recommended to use it. // npm install pouchdb-adapter-asyncstorage --save addPouchPlugin(require('pouchdb-adapter-asyncstorage')); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch('node-asyncstorage') // the name of your adapter }); ","version":"Next","tagName":"h2"},{"title":"asyncstorage-down","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#asyncstorage-down","content":" A leveldown adapter that stores on asyncstorage. // npm install pouchdb-adapter-asyncstorage-down --save addPouchPlugin(require('pouchdb-adapter-leveldb')); // leveldown adapters need the leveldb plugin to work const asyncstorageDown = require('asyncstorage-down'); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch(asyncstorageDown) // the full leveldown-module }); Cordova / Phonegap / Capacitor ","version":"Next","tagName":"h2"},{"title":"cordova-sqlite","type":1,"pageTitle":"PouchDB Adapters","url":"/adapters.html#cordova-sqlite","content":" Uses cordova's global cordova.sqlitePlugin. It can be used with cordova and capacitor. 
// npm install pouchdb-adapter-cordova-sqlite --save addPouchPlugin(require('pouchdb-adapter-cordova-sqlite')); /** * In capacitor/cordova you have to wait until all plugins are loaded and 'window.sqlitePlugin' * can be accessed. * This function waits until document deviceready is called which ensures that everything is loaded. * @link https://cordova.apache.org/docs/de/latest/cordova/events/events.deviceready.html */ export function awaitCapacitorDeviceReady(): Promise<void> { return new Promise(res => { document.addEventListener('deviceready', () => { res(); }); }); } async function getDatabase(){ // first wait until the deviceready event is fired await awaitCapacitorDeviceReady(); const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStoragePouch( 'cordova-sqlite', // pouch settings are passed as second parameter { // for ios devices, the cordova-sqlite adapter needs to know where to save the data. iosDatabaseLocation: 'Library' } ) }); } ","version":"Next","tagName":"h2"},{"title":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","type":0,"sectionRef":"#","url":"/articles/data-base.html","content":"","keywords":"","version":"Next"},{"title":"Overview of Web Applications that can benefit from RxDB","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#overview-of-web-applications-that-can-benefit-from-rxdb","content":" Before diving into the specifics of RxDB, let's take a moment to understand the scope of web applications that can leverage its capabilities. Any web application that requires real-time data updates, offline functionality, and synchronization between clients and servers can greatly benefit from RxDB. Whether it's a collaborative document editing tool, a task management app, or a chat application, RxDB offers a robust foundation for building these types of applications. 
","version":"Next","tagName":"h2"},{"title":"Importance of data bases in Mobile Applications","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#importance-of-data-bases-in-mobile-applications","content":" Mobile applications have become an integral part of our lives, providing us with instant access to information and services. Behind the scenes, data bases play a pivotal role in storing and managing the data that powers these applications. data bases enable efficient data retrieval, updates, and synchronization, ensuring a smooth user experience even in challenging network conditions. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a data base Solution","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#introducing-rxdb-as-a-data-base-solution","content":" RxDB, short for Reactive data base, is a client-side data base solution designed specifically for web and mobile applications. Built on the principles of reactive programming, RxDB brings the power of observables and event-driven architecture to data management. With RxDB, developers can create applications that are responsive, offline-ready, and capable of seamless data synchronization between clients and servers. ","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#getting-started-with-rxdb","content":" ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#what-is-rxdb","content":" RxDB is an open-source JavaScript data base that leverages reactive programming and provides a seamless API for handling data. 
It is built on top of existing popular data base technologies, such as IndexedDB, and adds a layer of reactive features to enable real-time data updates and synchronization. ","version":"Next","tagName":"h3"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#reactive-data-handling","content":" One of the standout features of RxDB is its reactive data handling. It utilizes observables to provide a stream of data that automatically updates whenever a change occurs. This reactive approach allows developers to build applications that respond instantly to data changes, ensuring a highly interactive and real-time user experience. ","version":"Next","tagName":"h3"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#offline-first-approach","content":" RxDB embraces an offline-first approach, enabling applications to work seamlessly even when there is no internet connectivity. It achieves this by caching data locally on the client-side and synchronizing it with the server when the connection is available. This ensures that users can continue working with the application and have their data automatically synchronized when they come back online. ","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#data-replication","content":" RxDB simplifies the process of data replication between clients and servers. It provides replication plugins that handle the synchronization of data in real-time. These plugins allow applications to keep data consistent across multiple clients, enabling collaborative features and ensuring that each client has the most up-to-date information. 
","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#observable-queries","content":" RxDB introduces the concept of observable queries, which are powerful tools for efficiently querying data. With observable queries, developers can subscribe to specific data queries and receive automatic updates whenever the underlying data changes. This eliminates the need for manual polling and ensures that applications always have access to the latest data. ","version":"Next","tagName":"h3"},{"title":"Multi-Tab support","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#multi-tab-support","content":" RxDB offers multi-tab support, allowing applications to function seamlessly across multiple browser tabs. This feature ensures that data changes in one tab are immediately reflected in all other open tabs, enabling a consistent user experience across different browser windows. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. Other data base Options","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#rxdb-vs-other-data-base-options","content":" When considering data base options for web applications, developers often encounter choices like Dexie.js, LokiJS, IndexedDB, OPFS, and Memory-based solutions. RxDB, while built on top of IndexedDB, stands out due to its reactive data handling capabilities and advanced synchronization features. Compared to other options, RxDB offers a more streamlined and powerful approach to managing data in web applications. 
","version":"Next","tagName":"h3"},{"title":"Different RxStorage layers for RxDB","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#different-rxstorage-layers-for-rxdb","content":" RxDB provides various storage layers, known as RxStorage, that serve as interfaces to different underlying storage technologies. These layers include: Dexie.js RxStorage: Built on top of Dexie.js, this storage layer leverages IndexedDB as its backend. LokiJS RxStorage: Utilizing the in-memory data base LokiJS, this layer provides a lightweight alternative to persist data. IndexedDB RxStorage: This layer directly utilizes IndexedDB as its backend, providing a robust and widely supported storage option. OPFS RxStorage: OPFS (Origin Private File System) is a browser file system API that provides fast, persistent, origin-scoped file storage. Memory RxStorage: Primarily used for testing and development, this storage layer keeps data in memory without persisting it to disk. Each RxStorage layer has its strengths and is suited for different scenarios, enabling developers to choose the most appropriate option for their specific use case. ","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" ","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#offline-first-approach-1","content":" As mentioned earlier, RxDB adopts an offline-first approach, allowing applications to function seamlessly in disconnected environments. 
By caching data locally, applications can continue to operate and make updates even without an internet connection. Once the connection is restored, RxDB's replication plugins take care of synchronizing the data with the server, ensuring consistency across all clients. ","version":"Next","tagName":"h3"},{"title":"RxDB Replication Plugins","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#rxdb-replication-plugins","content":" RxDB provides a range of replication plugins that simplify the process of synchronizing data between clients and servers. These plugins enable real-time replication using various protocols, such as WebSocket or HTTP, and handle conflict resolution strategies to ensure data integrity. By leveraging these replication plugins, developers can easily implement robust and scalable synchronization capabilities in their applications. ","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#advanced-rxdb-features-and-techniques","content":" Indexing and Performance Optimization To achieve optimal performance, RxDB offers indexing capabilities. Indexing allows for efficient data retrieval and faster query execution. By strategically defining indexes on frequently accessed fields, developers can significantly enhance the overall performance of their RxDB-powered applications. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#encryption-of-local-data","content":" In scenarios where data security is paramount, RxDB provides options for encrypting local data. 
By encrypting the data base contents, developers can ensure that sensitive information remains secure even if the underlying storage is compromised. RxDB integrates seamlessly with encryption libraries, making it easy to implement end-to-end encryption in applications. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#change-streams-and-event-handling","content":" RxDB offers change streams and event handling mechanisms, enabling developers to react to data changes in real-time. With change streams, applications can listen to specific collections or documents and trigger custom logic whenever a change occurs. This capability opens up possibilities for building real-time collaboration features, notifications, or other reactive behaviors. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#json-key-compression","content":" In scenarios where storage size is a concern, RxDB provides JSON key compression. By applying compression techniques to JSON keys, developers can significantly reduce the storage footprint of their data bases. This feature is particularly beneficial for applications dealing with large datasets or limited storage capacities. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a data base: Empowering Web Applications with Reactive Data Handling","url":"/articles/data-base.html#conclusion","content":" RxDB provides an exceptional data base solution for web and mobile applications, empowering developers to create reactive, offline-ready, and synchronized applications. 
With its reactive data handling, offline-first approach, and replication plugins, RxDB simplifies the challenges of building real-time applications with data synchronization requirements. By embracing advanced features like indexing, encryption, change streams, and JSON key compression, developers can optimize performance, enhance security, and reduce storage requirements. As web and mobile applications continue to evolve, RxDB proves to be a reliable and powerful ","version":"Next","tagName":"h2"},{"title":"Using RxDB as an Embedded Database","type":0,"sectionRef":"#","url":"/articles/embedded-database.html","content":"","keywords":"","version":"Next"},{"title":"What is an Embedded Database?","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#what-is-an-embedded-database","content":" An embedded database refers to a client-side database system that is integrated directly within an application. It is designed to operate within the client environment, such as a web browser or a mobile app. This approach eliminates the need for a separate database server and allows the database to run locally on the client device. ","version":"Next","tagName":"h2"},{"title":"Embedded Database in UI Applications","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#embedded-database-in-ui-applications","content":" In the context of UI applications, an embedded database serves as a local data storage solution. It enables applications to efficiently manage data, facilitate real-time updates, and enhance performance. Let's explore some of the benefits of using an embedded database compared to a traditional server database: Replicating database state becomes easier: Implementing real-time data synchronization and replication is simpler with an embedded database compared to complex REST routes. 
The embedded nature allows for efficient replication of the database state across multiple instances of the application. Use the database for caching: An embedded database can be utilized for caching frequently accessed data. This caching mechanism enhances performance and reduces the need for repeated network requests, resulting in faster data retrieval. Building real-time applications is easier with local data: By leveraging local data storage, real-time applications can easily update the user interface in response to data changes. This approach simplifies the development of real-time features and enhances the responsiveness of the application. Store local data with encryption: Embedded databases, like RxDB, offer the ability to store local data with encryption. This ensures that sensitive information remains protected even when stored locally on the client device. Data is offline accessible: With an embedded database, data remains accessible even when the application is offline. Users can continue to interact with the application and access their data seamlessly, irrespective of their internet connectivity. Faster initial application start time: Since the data is already stored locally, there is no need for initial data fetching from a remote server. This significantly reduces the application's startup time and allows users to engage with the application more quickly. Improved scalability with local queries: Embedded databases, such as RxDB, perform queries locally on the client device instead of relying on server round-trips. This reduces latency and enhances scalability, particularly when dealing with large datasets or high query volumes. Seamless integration with JavaScript frameworks: Embedded databases, including RxDB, integrate seamlessly with popular JavaScript frameworks like Angular, React.js, Vue.js, and Svelte. 
This compatibility allows developers to leverage the capabilities of these frameworks while benefiting from embedded database functionality. Running queries locally has low latency: With an embedded database, queries are executed locally on the client device, resulting in minimal latency. This improves the overall performance and responsiveness of the application. Data is portable and always accessible by the user: Embedded databases enable data portability, allowing users to seamlessly transition between devices while maintaining their data and application state. This ensures that data is always accessible and available to the user. Using a local database for state management: Instead of relying on additional state management libraries like Redux or NgRx, an embedded database can be used for local state management. This simplifies state management and ensures data consistency within the application. ","version":"Next","tagName":"h2"},{"title":"Why RxDB as an Embedded Database for Real-time Applications","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#why-rxdb-as-an-embedded-database-for-real-time-applications","content":" RxDB is a JavaScript-based embedded database that offers numerous advantages for building real-time applications. Let's explore why RxDB is a compelling choice: Observable Queries (RxJS): RxDB leverages the power of Observables through RxJS, enabling developers to create queries that automatically update the user interface on data changes. This reactive approach simplifies UI updates and ensures real-time synchronization of data. NoSQL JSON Documents for UIs: RxDB utilizes NoSQL (JSON) documents as its data model, aligning seamlessly with the requirements of modern UI development. JavaScript's native support for JSON objects makes NoSQL documents a natural fit for UI-driven applications. Better TypeScript Support Compared to SQL: RxDB's NoSQL approach provides excellent TypeScript support. 
The flexibility of working with JSON objects enables robust typing and enhanced development experiences, ensuring type safety and reducing runtime errors. Observable Document Fields: RxDB allows developers to observe individual fields within documents. This granularity enables efficient tracking of specific data changes and facilitates targeted UI updates, enhancing performance and responsiveness. Made in JavaScript, Optimized for JavaScript Applications: Being built entirely in JavaScript, RxDB is optimized for JavaScript applications. It leverages JavaScript's capabilities and integrates seamlessly with JavaScript frameworks and libraries, making it a natural choice for JavaScript developers. Optimized Observed Queries with the EventReduce Algorithm: RxDB incorporates the EventReduce algorithm to optimize observed queries. This algorithm reduces the number of emitted events during query execution, resulting in enhanced query performance and reduced overhead. Built-in Multi-tab Support: RxDB provides built-in multi-tab support, allowing multiple instances of an application to share and synchronize data seamlessly. This feature enables collaborative and real-time scenarios across multiple browser tabs or windows. Handling of Schema Changes across Multiple Client Devices: With RxDB, handling schema changes across multiple client devices becomes straightforward. RxDB's schema migration capabilities ensure that applications can seamlessly adapt to evolving data structures, providing a consistent experience across different devices. Storing Documents Compressed: RxDB offers the ability to store documents in a compressed format. This reduces the storage footprint and improves performance, especially when dealing with large datasets. Flexible Storage Layer and Cross-platform Compatibility: RxDB provides a flexible storage layer that can be reused across various platforms, including Electron.js, React Native, hybrid apps (via Capacitor.js), and browsers. 
This cross-platform compatibility simplifies development and enables code reuse across different environments. Replication Algorithm for Backend Compatibility: RxDB's replication algorithm is open-source and can be made compatible with various backend solutions, such as self-hosted servers, Firebase, CouchDB, NATS, WebSockets, and more. This flexibility allows developers to choose their preferred backend infrastructure while benefiting from RxDB's embedded database capabilities. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"Using RxDB as an Embedded Database","url":"/articles/embedded-database.html#follow-up","content":" To further explore RxDB and leverage its capabilities as an embedded database, the following resources can be helpful: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support. RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which offers step-by-step instructions for setting up and using RxDB in your projects. By utilizing RxDB as an embedded database in UI applications, developers can harness the power of efficient data management, real-time updates, and enhanced user experiences. RxDB's features and benefits make it a compelling choice for building modern, responsive, and scalable applications. ","version":"Next","tagName":"h2"},{"title":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","type":0,"sectionRef":"#","url":"/articles/in-memory-nosql-database.html","content":"","keywords":"","version":"Next"},{"title":"Speed and Performance Benefits","type":1,"pageTitle":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","url":"/articles/in-memory-nosql-database.html#speed-and-performance-benefits","content":" One of the key advantages of using RxDB as an in-memory NoSQL database is its ability to leverage in-memory storage for faster database operations. 
By storing data directly in memory, database operations can be performed significantly faster compared to traditional disk-based databases. This is especially important for real-time applications where every millisecond counts. With RxDB, developers can achieve near-instantaneous data access and manipulation, enabling highly responsive user experiences. Additionally, RxDB eliminates disk I/O bottlenecks that are typically associated with traditional databases. In traditional databases, disk reads and writes can become a bottleneck as the amount of data grows. In contrast, an in-memory database like RxDB keeps the entire dataset in RAM, eliminating disk access overhead. This makes it an excellent choice for applications dealing with real-time analytics, high-throughput data processing, and caching. ","version":"Next","tagName":"h2"},{"title":"Persistence Options","type":1,"pageTitle":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","url":"/articles/in-memory-nosql-database.html#persistence-options","content":" While RxDB offers an in-memory storage adapter, it also offers persistent storage options. Adapters such as IndexedDB, SQLite, and OPFS enable developers to persist data locally in the browser, making applications accessible even when offline. This hybrid approach combines the benefits of in-memory performance with data durability, providing the best of both worlds. Developers can choose the adapter that best suits their needs, balancing the speed of in-memory storage with the long-term data persistence required for certain applications. import { createRxDatabase } from 'rxdb'; import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMemory() }); There is also the memory-synced RxStorage, a wrapper around any other RxStorage. The wrapper creates an in-memory storage that is used for query and write operations. 
This memory instance is replicated with the underlying storage for persistence. The main reason to use this is to improve initial page load and query/write times. This is mostly useful in browser based applications. ","version":"Next","tagName":"h2"},{"title":"Use Cases for RxDB","type":1,"pageTitle":"RxDB as In-memory NoSQL Database: Empowering Real-Time Applications","url":"/articles/in-memory-nosql-database.html#use-cases-for-rxdb","content":" RxDB's capabilities make it well-suited for various real-time applications. Some notable use cases include: Chat Applications and Real-Time Messaging: RxDB's in-memory performance and real-time synchronization capabilities make it an excellent choice for building chat applications and real-time messaging systems. Developers can ensure that messages are delivered and synchronized across multiple clients in real-time, providing a seamless and responsive chat experience. Collaborative Document Editors: RxDB's ability to handle data streams and propagate changes in real-time makes it ideal for collaborative document editing. Multiple users can simultaneously edit a document, and their changes are instantly synchronized, allowing for real-time collaboration and ensuring that everyone has the most up-to-date version of the document. Real-Time Analytics Dashboards: RxDB's speed and scalability make it a valuable tool for real-time analytics dashboards. It can handle high volumes of data and perform complex analytics operations in real-time, providing instant insights and visualizations to users. In conclusion, RxDB serves as a powerful in-memory NoSQL database that empowers developers to build real-time applications with exceptional speed, flexibility, and scalability. Its ability to leverage in-memory storage, eliminate disk I/O bottlenecks, and provide persistence options make it an attractive choice for a wide range of real-time use cases. 
Whether it's chat applications, collaborative document editors, or real-time analytics dashboards, RxDB provides the foundation for building responsive and interactive software that meets the demands of today's users. ","version":"Next","tagName":"h2"},{"title":"Ionic Storage - RxDB as database for hybrid apps","type":0,"sectionRef":"#","url":"/articles/ionic-database.html","content":"","keywords":"","version":"Next"},{"title":"What are Ionic Hybrid Apps?","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#what-are-ionic-hybrid-apps","content":" Ionic (aka Ionic 2 ) hybrid apps combine the strengths of web technologies (HTML, CSS, JavaScript) with native app development to deliver cross-platform applications. They are built using web technologies and then wrapped in a native container to be deployed on various platforms like iOS, Android, and the web. These apps provide a consistent user experience across devices while benefiting from the efficiency and familiarity of web development. ","version":"Next","tagName":"h2"},{"title":"Storing and Querying Data in an Ionic App","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#storing-and-querying-data-in-an-ionic-app","content":" Storing and querying data is a fundamental aspect of any application, including hybrid apps. These apps often need to operate offline, store user-generated content, and provide responsive user interfaces. Therefore, having a reliable and efficient way to manage data on the client's device is crucial. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Client-Side Database for Ionic Apps","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#introducing-rxdb-as-a-client-side-database-for-ionic-apps","content":" RxDB steps in as a powerful solution to address the data management needs of ionic hybrid apps. 
It's a NoSQL client-side database that offers exceptional performance and features tailored to the unique requirements of client-side applications. Let's delve into the key features of RxDB that make it a great fit for these apps. ","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#getting-started-with-rxdb","content":" ","version":"Next","tagName":"h3"},{"title":"What is RxDB?","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#what-is-rxdb","content":" At its core, RxDB is a NoSQL database that operates with a local-first approach. This means that your app's data is stored and processed primarily on the client's device, reducing the dependency on constant network connectivity. By doing so, RxDB ensures your app remains responsive and functional, even when offline. ","version":"Next","tagName":"h3"},{"title":"Local-First Approach","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#local-first-approach","content":" The local-first approach adopted by RxDB is a game-changer for hybrid applications. Storing data locally allows your app to function seamlessly without an internet connection, providing users with uninterrupted access to their data. When connectivity is restored, RxDB handles the synchronization of data, ensuring that any changes made offline are appropriately propagated. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#observable-queries","content":" One of RxDB's standout features is its implementation of observable queries. This concept allows your app's user interface to be dynamically updated in real time as data changes within the database. 
RxDB's observables create a bridge between your database and user interface, keeping them in sync effortlessly. ","version":"Next","tagName":"h3"},{"title":"NoSQL Query Engine","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#nosql-query-engine","content":" RxDB's NoSQL query engine empowers you to perform powerful queries on your app's data, without the constraints imposed by traditional relational databases. This flexibility is particularly valuable when dealing with unstructured or semi-structured data. With the NoSQL query engine, you can retrieve, filter, and manipulate data according to your app's unique requirements. const foundDocuments = await myDatabase.todos.find({ selector: { done: { $eq: false } } }).exec(); ","version":"Next","tagName":"h3"},{"title":"Great Observe Performance with EventReduce","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#great-observe-performance-with-eventreduce","content":" RxDB introduces a concept called EventReduce, which optimizes the observation process. Instead of overwhelming your app's UI with every data change, EventReduce filters and batches these changes to provide a smooth and efficient experience. This leads to enhanced app performance, lower resource usage, and ultimately, happier users. ","version":"Next","tagName":"h3"},{"title":"Why NoSQL is a Better Fit for Client-Side Applications Compared to relational databases like SQLite","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#why-nosql-is-a-better-fit-for-client-side-applications-compared-to-relational-databases-like-sqlite","content":" When it comes to choosing the right database solution for your client-side applications, NoSQL RxDB presents compelling advantages over traditional options like SQLite. 
Let's delve into the key reasons why NoSQL RxDB is a superior fit for your ionic hybrid app development. ","version":"Next","tagName":"h2"},{"title":"Easier Document-Based Replication","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#easier-document-based-replication","content":" NoSQL databases, like RxDB, inherently embrace a document-based approach to data storage. This design choice simplifies data replication between clients and servers. With documents representing discrete units of data, you can easily synchronize individual pieces of information without the complexity that can arise when dealing with rows and tables in a relational database like SQLite. This document-centric replication model streamlines the synchronization process and ensures that your app's data remains consistent across devices. ","version":"Next","tagName":"h3"},{"title":"Offline Capable","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#offline-capable","content":" One of the defining features of client-side applications is the ability to function even when offline. NoSQL RxDB excels in this area by supporting a local-first approach. Data is cached on the client's device, enabling the app to remain fully functional even without an internet connection. As connectivity is restored, RxDB handles data synchronization with the server seamlessly. This offline capability ensures a smooth user experience, critical for ionic hybrid apps catering to users in various network conditions. ","version":"Next","tagName":"h3"},{"title":"NoSQL Has Better TypeScript Support","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#nosql-has-better-typescript-support","content":" TypeScript, a popular superset of JavaScript, is renowned for its static typing and enhanced developer experience. 
NoSQL databases like RxDB are inherently flexible, making them well-suited for TypeScript integration. With well-defined data structures and clear typings, NoSQL RxDB offers improved type safety and easier development when compared to traditional SQL databases like SQLite. This results in reduced debugging time and increased code reliability. ","version":"Next","tagName":"h3"},{"title":"Easier Schema Migration with NoSQL Documents","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#easier-schema-migration-with-nosql-documents","content":" Schema changes are a common occurrence in application development, and dealing with them can be challenging. NoSQL databases, including RxDB, are more forgiving in this aspect. Since documents in NoSQL databases don't enforce a rigid structure like tables in relational databases, schema changes are often simpler to manage. This flexibility makes it easier to evolve your app's data structure over time without the need for complex migration scripts, a notable advantage when compared to SQLite. ","version":"Next","tagName":"h3"},{"title":"Great Performance","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#great-performance","content":" RxDB's excellent performance stems from its advanced indexing capabilities, which streamline data retrieval and ensure swift query execution. Additionally, the JSON key compression employed by RxDB minimizes storage overhead, enabling efficient data transfer and quicker loading times. The incorporation of real-time updates through change streams and the EventReduce mechanism further enhances RxDB's performance, delivering a responsive user experience even as data changes are propagated seamlessly. 
","version":"Next","tagName":"h2"},{"title":"Using RxDB in an Ionic Hybrid App","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#using-rxdb-in-an-ionic-hybrid-app","content":" RxDB's integration into your ionic hybrid app opens up a world of possibilities for efficient data management. Let's explore how to set up RxDB, use it with popular JavaScript frameworks, and take advantage of its diverse storage options. ","version":"Next","tagName":"h2"},{"title":"Setup RxDB","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#setup-rxdb","content":" Getting started with RxDB is a straightforward process. By including the RxDB library in your project, you can quickly start harnessing its capabilities. Begin by installing the RxDB package from the npm registry. Then, configure your database instance to suit your app's needs. This setup process paves the way for seamless data management in your ionic hybrid app. For a full instruction, follow the RxDB Quickstart. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in Frameworks (React, Angular, Vue.js)","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#using-rxdb-in-frameworks-react-angular-vuejs","content":" RxDB seamlessly integrates with various JavaScript frameworks, ensuring compatibility with your preferred development environment. Whether you're building your ionic hybrid app with React, Angular, or Vue.js, RxDB offers bindings and tools that enable you to leverage its features effortlessly. This compatibility allows you to stay within the comfort zone of your chosen framework while benefiting from RxDB's powerful data management capabilities. 
","version":"Next","tagName":"h3"},{"title":"Different RxStorage Layers for RxDB","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB doesn't limit you to a single storage solution. Instead, it provides a range of RxStorage layers to accommodate diverse use cases. These storage layers offer flexibility and customization, enabling you to tailor your data management strategy to match your app's requirements. Let's explore some of the available RxStorage options: Dexie.js RxStorage: Dexie.js is a popular JavaScript library for IndexedDB, and RxDB offers a compatible RxStorage layer. This option leverages IndexedDB's capabilities to provide efficient data storage and retrieval.LokiJS RxStorage: LokiJS RxStorage integrates the LokiJS database with RxDB, giving you access to another powerful NoSQL database solution. LokiJS is known for its in-memory storage capabilities and ease of use.IndexedDB RxStorage: Leveraging the native browser storage, IndexedDB RxStorage offers reliable data persistence. This storage option is suitable for a wide range of scenarios and is supported by most modern browsers.OPFS RxStorage: Operating within the browser's file system, OPFS RxStorage is a unique choice that can handle larger data volumes efficiently. It's particularly useful for applications that require substantial data storage.Memory RxStorage: Memory RxStorage is perfect for temporary or cache-like data storage. It keeps data in memory, which can result in rapid data access but doesn't provide long-term persistence.SQLite RxStorage: SQLite is the go-to database for mobile applications. It is built into Android and iOS devices. The SQLite RxDB storage layer is built upon SQLite and offers the best performance in hybrid apps like Ionic. 
","version":"Next","tagName":"h3"},{"title":"Replication of Data with RxDB between Clients and Servers","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#replication-of-data-with-rxdb-between-clients-and-servers","content":" Efficient data replication between clients and servers is the backbone of modern application development, ensuring that data remains consistent and up-to-date across various devices and platforms. RxDB provides a suite of replication methods that facilitate seamless communication between clients and servers, ensuring that your data is always in sync. ","version":"Next","tagName":"h2"},{"title":"RxDB Replication Algorithm","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#rxdb-replication-algorithm","content":" At the heart of RxDB's replication capabilities lies a sophisticated algorithm designed to manage data synchronization between clients and servers. This algorithm intelligently handles data changes, conflict resolution, and network connectivity fluctuations, resulting in reliable and efficient data replication. With the RxDB replication algorithm, your application can maintain data consistency across devices without unnecessary complexities. CouchDB Replication: RxDB's integration with CouchDB replication presents a powerful way to synchronize data between clients and servers. CouchDB, a well-established NoSQL database, excels at distributed and decentralized data scenarios. By utilizing RxDB's CouchDB replication, you can establish bidirectional synchronization between your RxDB-powered client and a CouchDB server. This synchronization ensures that data updates made on either end are seamlessly propagated to the other, facilitating collaboration and data sharing. Firestore Replication: Firestore, Google's cloud-hosted NoSQL database, offers another avenue for data replication in RxDB. 
With Firestore replication, you can establish a connection between your RxDB-powered app and Firestore's cloud infrastructure. This integration provides real-time updates to data across multiple instances of your application, ensuring that users always have access to the latest information. RxDB's support for Firestore replication empowers you to build dynamic and responsive applications that thrive in today's fast-paced digital landscape. WebRTC Replication: Peer-to-peer (P2P) replication via WebRTC introduces a cutting-edge approach to data synchronization in RxDB. P2P replication allows devices to communicate directly with each other, bypassing the need for a central server. This method proves invaluable in scenarios where network connectivity is limited or unreliable. With WebRTC replication, devices can exchange data directly, enabling collaboration and information sharing even in challenging network conditions. ","version":"Next","tagName":"h3"},{"title":"RxDB as an Alternative for Ionic Secure Storage","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#rxdb-as-an-alternative-for-ionic-secure-storage","content":" When it comes to securing sensitive data in your Ionic applications, RxDB emerges as a powerful alternative to traditional secure storage solutions. Let's delve into why RxDB is an exceptional choice for safeguarding your data while providing additional benefits. ","version":"Next","tagName":"h2"},{"title":"RxDB On-Device Encryption Plugin","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#rxdb-on-device-encryption-plugin","content":" RxDB offers an on-device encryption plugin, adding an extra layer of security to your app's data. 
This means that data stored within the RxDB database can be encrypted, ensuring that even if the device falls into the wrong hands, the sensitive information remains inaccessible without the proper decryption key. This level of data protection is crucial for applications that deal with personal or confidential information. Encryption runs either with AES on crypto-js or with the Web Crypto API which is faster and more secure. ","version":"Next","tagName":"h3"},{"title":"Works Offline","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#works-offline","content":" Security should never compromise functionality. RxDB excels in this area by allowing your application to operate seamlessly even when offline. The locally stored encrypted data remains accessible and functional, enabling users to interact with the app's features even without an active internet connection. This offline capability ensures that user data is secure, while the app continues to deliver a responsive and uninterrupted experience. ","version":"Next","tagName":"h3"},{"title":"Easy-to-Setup Replication with Your Backend","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#easy-to-setup-replication-with-your-backend","content":" Ensuring data consistency between your client-side application and backend is a key concern for developers. RxDB simplifies this process with its straightforward replication setup. You can effortlessly configure data synchronization between your local RxDB instance and your backend server. This replication capability ensures that encrypted data remains up-to-date and aligned with the central database, enhancing data integrity and security. 
","version":"Next","tagName":"h3"},{"title":"Compression of Client-Side Stored Data","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#compression-of-client-side-stored-data","content":" In addition to security and offline capabilities, RxDB also offers data compression. This means that the data stored on the client's device is efficiently compressed, reducing storage requirements and improving overall app performance. This compression ensures that your app remains responsive and efficient, even as data volumes grow. ","version":"Next","tagName":"h3"},{"title":"Cost-Effective Solution","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#cost-effective-solution","content":" In addition to its security features, RxDB offers cost-effective benefits. RxDB is priced more affordably compared to some other secure storage solutions, making it an attractive option for developers seeking robust security without breaking the bank. For many users, the free version of RxDB provides ample features to meet their application's security and data management needs. 
","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"Ionic Storage - RxDB as database for hybrid apps","url":"/articles/ionic-database.html#follow-up","content":" Try out the RxDB ionic example projectTry out the RxDB QuickstartJoin the RxDB Chat ","version":"Next","tagName":"h2"},{"title":"RxDB as a Database in a Flutter Application","type":0,"sectionRef":"#","url":"/articles/flutter-database.html","content":"","keywords":"","version":"Next"},{"title":"Overview of Flutter Mobile Applications","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#overview-of-flutter-mobile-applications","content":" Flutter is an open-source UI software development kit created by Google that allows developers to build high-performance mobile applications for iOS and Android platforms using a single codebase. Flutter's framework provides a wide range of widgets and tools that enable developers to create visually appealing and responsive applications. ","version":"Next","tagName":"h3"},{"title":"Importance of Databases in Flutter Applications","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#importance-of-databases-in-flutter-applications","content":" Databases play a vital role in Flutter applications by providing a persistent and reliable storage solution for storing and retrieving data. Whether it's user profiles, app settings, or complex data structures, a database helps in efficiently managing and organizing the application's data. Choosing the right database for a Flutter application can significantly impact the performance, scalability, and user experience of the app. 
","version":"Next","tagName":"h3"},{"title":"Introducing RxDB as a Database Solution","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#introducing-rxdb-as-a-database-solution","content":" RxDB is a powerful NoSQL database solution that is designed to work seamlessly with JavaScript-based frameworks, such as Flutter. It stands for Reactive Database and offers a variety of features that make it an excellent choice for building Flutter applications. RxDB combines the simplicity of JavaScript's document-based database model with the reactive programming paradigm, enabling developers to build real-time and offline-first applications with ease. ","version":"Next","tagName":"h3"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#getting-started-with-rxdb","content":" To understand how RxDB can be utilized in a Flutter application, let's explore its core features and advantages. ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#what-is-rxdb","content":" RxDB is a client-side database built on top of IndexedDB, which is a low-level browser-based database API. It provides a simple and intuitive API for performing CRUD operations (Create, Read, Update, Delete) on documents. RxDB's underlying architecture allows for efficient handling of data synchronization between multiple clients and servers. ","version":"Next","tagName":"h3"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#reactive-data-handling","content":" One of the key strengths of RxDB is its reactive data handling. It leverages the power of Observables, a concept from reactive programming, to automatically update the UI in response to data changes. 
With RxDB, developers can define queries and subscribe to their results, ensuring that the UI is always in sync with the database. ","version":"Next","tagName":"h3"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#offline-first-approach","content":" RxDB follows an offline-first approach, making it ideal for building Flutter applications that need to function even without an internet connection. It allows data to be stored locally and seamlessly synchronizes it with the server when a connection is available. This ensures that users can access and interact with their data regardless of network availability. ","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#data-replication","content":" Data replication is a critical aspect of building distributed applications. RxDB provides robust replication capabilities that enable synchronization of data between different clients and servers. With its replication plugins, RxDB simplifies the process of setting up real-time data synchronization, ensuring consistency across all connected devices. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#observable-queries","content":" RxDB introduces the concept of observable queries, which are queries that automatically update when the underlying data changes. This feature is particularly useful for keeping the UI up to date with the latest data. By subscribing to an observable query, developers can receive real-time updates and reflect them in the user interface without manual intervention. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. 
Other Flutter Database Options","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#rxdb-vs-other-flutter-database-options","content":" When considering database options for Flutter applications, developers often come across alternatives such as SQLite or LokiJS. While these databases have their merits, RxDB offers several advantages over them. RxDB's seamless integration with Flutter, its offline-first approach, reactive data handling, and built-in data replication make it a compelling choice for building feature-rich and scalable Flutter applications. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in a Flutter Application","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#using-rxdb-in-a-flutter-application","content":" Now that we understand the core features of RxDB, let's explore how to integrate it into a Flutter application. ","version":"Next","tagName":"h2"},{"title":"How RxDB can run in Flutter","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#how-rxdb-can-run-in-flutter","content":" RxDB is written in TypeScript and compiled to JavaScript. To run it in a Flutter application, the flutter_qjs library is used to spawn a QuickJS JavaScript runtime. RxDB itself runs in that runtime and communicates with the Flutter Dart runtime. To store data persistently, the LokiJS RxStorage is used together with a custom storage adapter that persists the database inside of the shared_preferences data. To use RxDB, you have to create a compatible JavaScript file that creates your RxDatabase and starts some connectors which are used by Flutter to communicate with the JavaScript RxDB database via setFlutterRxDatabaseConnector(). 
import { createRxDatabase } from 'rxdb'; import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; import { setFlutterRxDatabaseConnector, getLokijsAdapterFlutter } from 'rxdb/plugins/flutter'; // do all database creation stuff in this method. async function createDB(databaseName) { // create the RxDatabase const db = await createRxDatabase({ // the database.name is variable so we can change it on the Flutter side name: databaseName, storage: getRxStorageLoki({ adapter: getLokijsAdapterFlutter() }), multiInstance: false }); await db.addCollections({ heroes: { schema: { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, name: { type: 'string', maxLength: 100 }, color: { type: 'string', maxLength: 30 } }, indexes: ['name'], required: ['id', 'name', 'color'] } } }); return db; } // start the connector so that Flutter can communicate with the JavaScript process setFlutterRxDatabaseConnector( createDB ); Before you can use the JavaScript code, you have to bundle it into a single .js file. In this example we do that with webpack in an npm script here, which bundles everything into the javascript/dist/index.js file. To allow Flutter to access that file during runtime, add it to the assets inside of your pubspec.yaml: flutter: assets: - javascript/dist/index.js You also need to install RxDB in the Flutter part of the application. First, you have to add the rxdb Dart package as a Flutter dependency. Currently the package is not published on pub.dev. Instead you have to install it from the local filesystem inside of your RxDB npm installation. # inside of pubspec.yaml dependencies: rxdb: path: path/to/your/node_modules/rxdb/src/plugins/flutter/dart Afterwards you can import the rxdb library in your Dart code and connect to the JavaScript process from there. For reference, check out the lib/main.dart file. 
import 'package:rxdb/rxdb.dart'; // start the javascript process and connect to the database RxDatabase database = await getRxDatabase("javascript/dist/index.js", databaseName); // get a collection RxCollection collection = database.getCollection('heroes'); // insert a document RxDocument document = await collection.insert({ "id": "zflutter-${DateTime.now()}", "name": nameController.text, "color": colorController.text }); // create a query RxQuery<RxHeroDocType> query = RxDatabaseState.collection.find(); // create list to store query results List<RxDocument<RxHeroDocType>> documents = []; // subscribe to a query query.$().listen((results) { setState(() { documents = results; }); }); ","version":"Next","tagName":"h2"},{"title":"Different RxStorage layers for RxDB","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB offers multiple storage options, known as RxStorage layers, to store data locally. These options include: LokiJS RxStorage: LokiJS is an in-memory database that can be used as a storage layer for RxDB. It provides fast and efficient in-memory data management capabilities.SQLite RxStorage: SQLite is a popular and widely used embedded database that offers robust storage capabilities. RxDB utilizes SQLite as a storage layer to persist data on the device.Memory RxStorage: As the name suggests, Memory RxStorage stores data in memory. While this option does not provide persistence, it can be useful for temporary or cache-based data storage. By choosing the appropriate RxStorage layer based on the specific requirements of the application, developers can optimize performance and storage efficiency. 
","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" One of the key strengths of RxDB is its ability to synchronize data between multiple clients and servers seamlessly. Let's explore how this synchronization can be achieved. ","version":"Next","tagName":"h2"},{"title":"Offline-First Approach","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#offline-first-approach-1","content":" RxDB's offline-first approach ensures that data can be accessed and modified even when there is no internet connection. Changes made offline are automatically synchronized with the server once a connection is reestablished. This ensures data consistency across all devices, providing a seamless user experience. ","version":"Next","tagName":"h3"},{"title":"RxDB Replication Plugins","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#rxdb-replication-plugins","content":" RxDB provides replication plugins that simplify the process of setting up data synchronization between clients and servers. These plugins offer various synchronization strategies, such as one-way replication, two-way replication, and conflict resolution mechanisms. By configuring the appropriate replication plugin, developers can easily establish real-time data synchronization in their Flutter applications. ","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#advanced-rxdb-features-and-techniques","content":" RxDB offers a range of advanced features and techniques that enhance its functionality and performance. 
Let's explore a few of these features: ","version":"Next","tagName":"h2"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#indexing-and-performance-optimization","content":" Indexing is a technique used to optimize query performance by creating indexes on specific fields. RxDB allows developers to define indexes on document fields, improving the efficiency of queries and data retrieval. ","version":"Next","tagName":"h3"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#encryption-of-local-data","content":" To ensure data privacy and security, RxDB supports encryption of local data. By encrypting the data stored on the device, developers can protect sensitive information and prevent unauthorized access. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#change-streams-and-event-handling","content":" RxDB provides change streams, which emit events whenever data changes occur. By leveraging change streams, developers can implement custom event handling logic, such as updating the UI or triggering background processes, in response to specific data changes. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#json-key-compression","content":" To minimize storage requirements and optimize performance, RxDB offers JSON key compression. This feature reduces the size of keys used in the database, resulting in more efficient storage and improved query performance. 
","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database in a Flutter Application","url":"/articles/flutter-database.html#conclusion","content":" RxDB offers a powerful and flexible database solution for Flutter applications. With its offline-first approach, real-time data synchronization, and reactive data handling capabilities, RxDB simplifies the development of feature-rich and scalable Flutter applications. By integrating RxDB into your Flutter projects, you can leverage its advanced features and techniques to build responsive and data-driven applications that provide an exceptional user experience. note You can find the source code for an example RxDB Flutter Application at the github repo ","version":"Next","tagName":"h2"},{"title":"Localstorage vs. IndexedDB vs. Cookies vs. OPFS vs. Wasm-SQLite","type":0,"sectionRef":"#","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm","content":"","keywords":"","version":"Next"},{"title":"Things this does not talk about","type":1,"pageTitle":"Localstorage vs. IndexedDB vs. Cookies vs. OPFS vs. Wasm-SQLite","url":"/articles/localstorage-indexeddb-cookies-opfs-sqlite-wasm#things-this-does-not-talk-about","content":" WebSQL session storage. 
Web Storage API ","version":"Next","tagName":"h3"},{"title":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","type":0,"sectionRef":"#","url":"/articles/frontend-database.html","content":"","keywords":"","version":"Next"},{"title":"Why you might want to store data in the frontend","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#why-you-might-want-to-store-data-in-the-frontend","content":" ","version":"Next","tagName":"h2"},{"title":"Offline accessibility","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#offline-accessibility","content":" One compelling reason to store data in the frontend is to enable offline accessibility. By leveraging a frontend database, applications can cache essential data locally, allowing users to continue using the application even when an internet connection is unavailable. This feature is particularly useful for mobile applications or web apps with limited or intermittent connectivity. ","version":"Next","tagName":"h3"},{"title":"Caching","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#caching","content":" Frontend databases also serve as efficient caching mechanisms. By storing frequently accessed data locally, applications can minimize network requests and reduce latency, resulting in faster and more responsive user experiences. Caching is particularly beneficial for applications that heavily rely on remote data or perform computationally intensive operations. 
","version":"Next","tagName":"h3"},{"title":"Decreased initial application start time","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#decreased-initial-application-start-time","content":" Storing data in the frontend decreases the initial application start time because the data is already present locally. By eliminating the need to fetch data from a server during startup, applications can quickly render the UI and provide users with an immediate interactive experience. This is especially advantageous for applications with large datasets or complex data retrieval processes. ","version":"Next","tagName":"h3"},{"title":"Password encryption for local data","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#password-encryption-for-local-data","content":" Security is a crucial aspect of data storage. With a front end database, developers can encrypt sensitive local data, such as user credentials or personal information, using encryption algorithms. This ensures that even if the device is compromised, the data remains securely stored and protected. ","version":"Next","tagName":"h3"},{"title":"Local database for state management","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#local-database-for-state-management","content":" Frontend databases provide an alternative to traditional state management libraries like Redux or NgRx. By utilizing a local database, developers can store and manage application state directly in the frontend, eliminating the need for additional libraries. This approach simplifies the codebase, reduces complexity, and provides a more straightforward data flow within the application. 
","version":"Next","tagName":"h3"},{"title":"Low-latency local queries","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#low-latency-local-queries","content":" Frontend databases enable low-latency queries that run entirely on the client's device. Instead of relying on server round-trips for each query, the database executes queries locally, resulting in faster response times. This is particularly beneficial for applications that require real-time updates or frequent data retrieval. ","version":"Next","tagName":"h3"},{"title":"Building realtime applications with local data","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#building-realtime-applications-with-local-data","content":" Realtime applications often require immediate updates based on data changes. By storing data locally and utilizing a frontend database, developers can build realtime applications more easily. The database can observe data changes and automatically update the UI, providing a seamless and responsive user experience. ","version":"Next","tagName":"h3"},{"title":"Easier integration with JavaScript frameworks","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#easier-integration-with-javascript-frameworks","content":" Frontend databases, including RxDB, are designed to integrate seamlessly with popular JavaScript frameworks such as Angular, React.js, Vue.js, and Svelte. These databases offer well-defined APIs and support that align with the specific requirements of these frameworks, enabling developers to leverage the full potential of the frontend database within their preferred development environment. 
","version":"Next","tagName":"h3"},{"title":"Simplified replication of database state","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#simplified-replication-of-database-state","content":" Replicating database state between the frontend and backend can be challenging, especially when dealing with complex REST routes. Frontend databases, however, provide simple mechanisms for replicating database state. They offer intuitive replication algorithms that facilitate data synchronization between the frontend and backend, reducing the complexity and potential pitfalls associated with complex REST-based replication. ","version":"Next","tagName":"h3"},{"title":"Improved scalability","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#improved-scalability","content":" Frontend databases offer improved scalability compared to traditional SQL databases. By leveraging the computational capabilities of client devices, the burden on server resources is reduced. Queries and operations are performed locally, minimizing the need for server round-trips and enabling applications to scale more efficiently. ","version":"Next","tagName":"h3"},{"title":"Why SQL databases are not a good fit for the front end of an application","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#why-sql-databases-are-not-a-good-fit-for-the-front-end-of-an-application","content":" While SQL databases excel in server-side scenarios, they pose limitations when used on the frontend. Here are some reasons why SQL databases are not well-suited for frontend applications: ","version":"Next","tagName":"h2"},{"title":"Push/Pull based vs. 
reactive","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#pushpull-based-vs-reactive","content":" SQL databases typically rely on a push/pull model, where the server pushes data to the client upon request. This approach is not inherently reactive, as it requires explicit requests for data updates. In contrast, frontend applications often require reactive data flows, where changes in data trigger automatic updates in the UI. Frontend databases, like RxDB, provide reactive capabilities that seamlessly integrate with the dynamic nature of frontend development. ","version":"Next","tagName":"h3"},{"title":"Initialization time and performance","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#initialization-time-and-performance","content":" SQL databases designed for server-side usage tend to have larger build sizes and initialization times, making them less efficient for browser-based applications. Frontend databases, on the other hand, directly leverage browser APIs like IndexedDB, OPFS, and WebWorker, resulting in leaner builds and faster initialization times. Often the queries are so fast that it is not even necessary to implement a loading spinner. ","version":"Next","tagName":"h3"},{"title":"Build size considerations","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#build-size-considerations","content":" Server-side SQL databases typically come with a significant build size, which can be impractical for browser applications where code size optimization is crucial. Frontend databases, on the other hand, are specifically designed to operate within the constraints of browser environments, ensuring efficient resource utilization and smaller build sizes. 
For example, the SQLite WebAssembly file alone has a size of over 0.8 megabytes, with an additional 0.2 megabytes of JavaScript code for the connection. ","version":"Next","tagName":"h3"},{"title":"Why RxDB is a good fit for the frontend","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#why-rxdb-is-a-good-fit-for-the-frontend","content":" RxDB is a powerful frontend JavaScript database that addresses the limitations of SQL databases and provides an optimal solution for frontend data storage. Let's explore why RxDB is an excellent fit for frontend applications: ","version":"Next","tagName":"h2"},{"title":"Made in JavaScript, optimized for JavaScript applications","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#made-in-javascript-optimized-for-javascript-applications","content":" RxDB is designed and optimized for JavaScript applications. Built using JavaScript itself, RxDB offers seamless integration with JavaScript frameworks and libraries, allowing developers to leverage their existing JavaScript knowledge and skills. ","version":"Next","tagName":"h3"},{"title":"NoSQL (JSON) documents for UIs","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#nosql-json-documents-for-uis","content":" RxDB adopts a NoSQL approach, using JSON documents as its primary data structure. This aligns well with the JavaScript ecosystem, as JavaScript natively works with JSON objects. By using NoSQL documents, RxDB provides a more natural and intuitive data model for UI-centric applications. 
","version":"Next","tagName":"h3"},{"title":"Better TypeScript support compared to SQL","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#better-typescript-support-compared-to-sql","content":" TypeScript has become increasingly popular for building frontend applications. RxDB provides excellent TypeScript support, allowing developers to leverage static typing and benefit from enhanced code quality and tooling. This is particularly advantageous when compared to SQL databases, which often have limited TypeScript support. ","version":"Next","tagName":"h3"},{"title":"Observable Queries for automatic UI updates","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#observable-queries-for-automatic-ui-updates","content":" RxDB introduces the concept of observable queries, powered by RxJS. Observable queries automatically update the UI whenever there are changes in the underlying data. This reactive approach eliminates the need for manual UI updates and ensures that the frontend remains synchronized with the database state. ","version":"Next","tagName":"h3"},{"title":"Optimized observed queries with the EventReduce Algorithm","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#optimized-observed-queries-with-the-eventreduce-algorithm","content":" RxDB optimizes observed queries with its EventReduce Algorithm. This algorithm intelligently reduces redundant events and ensures that UI updates are performed efficiently. By minimizing unnecessary re-renders, RxDB significantly improves performance and responsiveness in frontend applications. 
const query = myCollection.find({ selector: { age: { $gt: 21 } } }); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); ","version":"Next","tagName":"h3"},{"title":"Observable document fields","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#observable-document-fields","content":" RxDB supports observable document fields, enabling developers to track changes at a granular level within documents. By observing specific fields, developers can reactively update the UI when those fields change, ensuring a responsive and synchronized frontend interface. myDocument.firstName$.subscribe(newName => console.log('name is: ' + newName)); ","version":"Next","tagName":"h3"},{"title":"Storing Documents Compressed","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#storing-documents-compressed","content":" RxDB provides the option to store documents in a compressed format, reducing storage requirements and improving overall database performance. Compressed storage offers benefits such as reduced disk space usage, faster data read/write operations, and improved network transfer speeds, making it an essential feature for efficient frontend data storage. ","version":"Next","tagName":"h3"},{"title":"Built-in Multi-tab support","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#built-in-multi-tab-support","content":" RxDB offers built-in multi-tab support, allowing data synchronization and state management across multiple browser tabs. This feature ensures consistent data access and synchronization, enabling users to work seamlessly across different tabs without conflicts or data inconsistencies. 
","version":"Next","tagName":"h3"},{"title":"Replication Algorithm can be made compatible with any backend","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#replication-algorithm-can-be-made-compatible-with-any-backend","content":" RxDB's realtime replication algorithm is designed to be flexible and compatible with various backend systems. Whether you're using your own servers, Firebase, CouchDB, NATS, WebSocket, or any other backend, RxDB can be seamlessly integrated and synchronized with the backend system of your choice. ","version":"Next","tagName":"h3"},{"title":"Flexible storage layer for code reuse","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#flexible-storage-layer-for-code-reuse","content":" RxDB provides a flexible storage layer that enables code reuse across different platforms. Whether you're building applications with Electron.js, React Native, hybrid apps using Capacitor.js, or traditional web browsers, RxDB allows you to reuse the same codebase and leverage the power of a frontend database across different environments. ","version":"Next","tagName":"h3"},{"title":"Handling schema changes in distributed environments","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#handling-schema-changes-in-distributed-environments","content":" In distributed environments where data is stored on multiple client devices, handling schema changes can be challenging. RxDB tackles this challenge by providing robust mechanisms for handling schema changes. It ensures that schema updates propagate smoothly across devices, maintaining data integrity and enabling seamless schema evolution. 
","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"RxDB JavaScript Frontend Database: Efficient Data Storage in Frontend Applications","url":"/articles/frontend-database.html#follow-up","content":" To further explore RxDB and get started with using it in your frontend applications, consider the following resources: RxDB Quickstart: A step-by-step guide to quickly set up RxDB in your project and start leveraging its features.RxDB GitHub Repository: The official repository for RxDB, where you can find the code, examples, and community support. By adopting RxDB as your frontend database, you can unlock the full potential of frontend data storage and empower your applications with offline accessibility, caching, improved performance, and seamless data synchronization. RxDB's JavaScript-centric approach and powerful features make it an ideal choice for frontend developers seeking efficient and scalable data storage solutions. ","version":"Next","tagName":"h2"},{"title":"RxDB - JSON Database for JavaScript","type":0,"sectionRef":"#","url":"/articles/json-database.html","content":"","keywords":"","version":"Next"},{"title":"Why Choose a JSON Database?","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#why-choose-a-json-database","content":" JavaScript Friendliness: JavaScript, a prevalent language for web development, naturally uses JSON for data representation. Using a JSON database aligns seamlessly with JavaScript's native data format. Compatibility: JSON is widely supported across different programming languages and platforms. Storing data in JSON format ensures compatibility with a broad range of tools and systems. All modern programming ecosystems have packages to parse, validate and process JSON data. Flexibility: JSON documents can accommodate complex and nested data structures, allowing developers to store data in a more intuitive and hierarchical manner compared to SQL table rows. 
Nested data can be just stored in-document instead of having related tables. Human-Readable: JSON is easy to read and understand, simplifying debugging and data inspection tasks. ","version":"Next","tagName":"h2"},{"title":"Storage and Access Options for JSON Documents","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#storage-and-access-options-for-json-documents","content":" When incorporating JSON documents into your application, you have several storage and access options to consider: Local In-App Database with In-Memory Storage: Ideal for lightweight applications or temporary data storage, this option keeps data in memory, ensuring fast read and write operations. However, data is not persisted beyond the current application session, making it suitable for temporary data storage. With RxDB, the memory RxStorage can be utilized to create an in-memory database. Local In-App Database with Persistent Storage: Suitable for applications requiring data retention across sessions. Data is stored on the user's device or inside of the Node.js application, offering persistence between application sessions. It balances speed and data retention, making it versatile for various applications. With RxDB, a whole range of persistent storages is available. For example, for browsers there is the IndexedDB storage. For server-side applications, the Node.js Filesystem storage can be used. There are many more storages for React Native, Flutter, Capacitor.js and others. Server Database Connected to the Application: For applications requiring data synchronization and accessibility from multiple processes, a server-based database is the preferred choice. Data is stored on a remote server, facilitating data sharing, synchronization, and accessibility across multiple processes. It's suitable for scenarios requiring centralized data management and enhanced security and backup capabilities on the server. 
RxDB supports FoundationDB and MongoDB as remote database servers. ","version":"Next","tagName":"h2"},{"title":"Compression Storage for JSON Documents","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#compression-storage-for-json-documents","content":" Compression storage for JSON documents is made effortless with RxDB's key-compression plugin. This feature enables the efficient storage of compressed document data, reducing storage requirements while maintaining data integrity. Queries on compressed documents remain seamless, ensuring that your application benefits from both space-saving advantages and optimal query performance, making RxDB a compelling choice for managing JSON data efficiently. The compression happens inside of the RxDatabase and does not affect the API usage. The only limitation is that encrypted fields themselves cannot be used inside a query. ","version":"Next","tagName":"h2"},{"title":"Schema Validation and Data Migration on Schema Changes","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#schema-validation-and-data-migration-on-schema-changes","content":" Storing JSON documents inside of a database in an application can cause a problem when the format of the data changes. Instead of having a single server where the data must be migrated, many client devices are out there that have to run a migration. When your application's schema evolves, RxDB provides migration strategies to facilitate the transition, ensuring data consistency throughout schema updates. JSONSchema Validation Plugins: RxDB supports multiple JSONSchema validation plugins, guaranteeing that only valid data is stored in the database. RxDB uses the JSON Schema standard that you might know from other technologies like OpenAPI (aka Swagger). 
// RxDB Schema example const mySchema = { version: 0, primaryKey: 'id', // <- define the primary key for your documents type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, name: { type: 'string', maxLength: 100 }, done: { type: 'boolean' }, timestamp: { type: 'string', format: 'date-time' } }, required: ['id', 'name', 'done', 'timestamp'] } ","version":"Next","tagName":"h2"},{"title":"Store JSON with RxDB in Browser Applications","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#store-json-with-rxdb-in-browser-applications","content":" RxDB offers versatile storage solutions for browser-based applications: Multiple Storage Plugins: RxDB supports various storage backends, including IndexedDB, Dexie.js, In-Memory, and Loki.js, catering to a range of browser environments. Observable Queries: With RxDB, you can create observable queries that work seamlessly across multiple browser tabs, providing real-time updates and synchronization. ","version":"Next","tagName":"h2"},{"title":"RxDB JSON Database Performance","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#rxdb-json-database-performance","content":" Certainly! Let's delve deeper into the performance aspects of RxDB when it comes to working with JSON data. Efficient Querying: RxDB is engineered for rapid and efficient querying of JSON data. It employs a well-optimized indexing system that allows for lightning-fast retrieval of specific data points within your JSON documents. Whether you're fetching individual values or complex nested structures, RxDB's query performance is designed to keep your application responsive, even when dealing with large datasets. Scalability: As your application grows and your JSON dataset expands, RxDB scales gracefully. 
Its performance remains consistent, enabling you to handle increasingly larger volumes of data without compromising on speed or responsiveness. This scalability is essential for applications that need to accommodate growing user bases and evolving data needs. Reduced Latency: RxDB's streamlined data access mechanisms significantly reduce latency when working with JSON data. Whether you're reading from the database, making updates, or synchronizing data between clients and servers, RxDB's optimized operations help minimize the delays often associated with data access. Observed queries are optimized with the EventReduce algorithm to provide nearly-instant UI updates on data changes. RxStorage Layer: Because RxDB allows you to swap out the storage layer, a storage with the most optimal performance can be chosen for each runtime while not touching other database code. Depending on the access patterns, you can pick exactly the storage that is best: ","version":"Next","tagName":"h2"},{"title":"RxDB in Node.js","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#rxdb-in-nodejs","content":" Node.js developers can also benefit from RxDB's capabilities. By integrating RxDB into your Node.js applications, you can harness the power of a NoSQL JSON database to efficiently manage your data on the server-side. RxDB's flexibility, performance, and essential features are equally valuable in server-side development. Read more about RxDB+Node.js. ","version":"Next","tagName":"h2"},{"title":"RxDB to store JSON documents in React Native","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#rxdb-to-store-json-documents-in-react-native","content":" For mobile app developers working with React Native, RxDB offers a convenient solution for handling JSON data. 
Whether you're building Android or iOS applications, RxDB's compatibility with JavaScript and its ability to work with JSON documents make it a natural choice for data management within your React Native apps. Read more about RxDB+React-Native. ","version":"Next","tagName":"h2"},{"title":"Using SQLite as a JSON Database","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#using-sqlite-as-a-json-database","content":" In some cases, you might want to use SQLite as a backend storage solution for your JSON data. RxDB can be configured to work with SQLite, providing the benefits of both a relational database system and JSON document storage. This hybrid approach can be advantageous when dealing with complex data relationships while retaining the flexibility of JSON data representation. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB - JSON Database for JavaScript","url":"/articles/json-database.html#follow-up","content":" To further explore RxDB and get started with using it in your frontend applications, consider the following resources: RxDB Quickstart: A step-by-step guide to quickly set up RxDB in your project and start leveraging its features.RxDB GitHub Repository: The official repository for RxDB, where you can find the code, examples, and community support. By embracing RxDB as your JSON database solution, you can tap into the extensive capabilities of JSON data storage. This empowers your applications with offline accessibility, caching, enhanced performance, and effortless data synchronization. RxDB's focus on JavaScript and its robust feature set render it the perfect selection for frontend developers in pursuit of efficient and scalable data storage solutions. 
","version":"Next","tagName":"h2"},{"title":"Mobile Database - RxDB as Database for Mobile Applications","type":0,"sectionRef":"#","url":"/articles/mobile-database.html","content":"","keywords":"","version":"Next"},{"title":"Understanding Mobile Databases","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#understanding-mobile-databases","content":" Mobile databases are specialized software systems designed to handle data storage and management for mobile applications. These databases are optimized for the unique requirements of mobile environments, which often include limited device resources, fluctuations in network connectivity, and the need for offline functionality. There are various types of mobile databases available, each with its own strengths and use cases. Local databases, such as SQLite and Realm, reside directly on the user's device, providing offline capabilities and faster data access. Cloud-based databases, like Firebase Realtime Database and Amazon DynamoDB, rely on remote servers to store and retrieve data, enabling synchronization across multiple devices. Hybrid databases, as the name suggests, combine the benefits of both local and cloud-based approaches, offering a balance between offline functionality and data synchronization. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB: A Paradigm Shift in Mobile Database Solutions","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#introducing-rxdb-a-paradigm-shift-in-mobile-database-solutions","content":" RxDB, also known as Reactive Database, has emerged as a game-changer in the realm of mobile databases. Built on top of popular web technologies like JavaScript, TypeScript, and RxJS (Reactive Extensions for JavaScript), RxDB provides an elegant solution for seamless offline-first capabilities and real-time data synchronization in mobile applications. 
Benefits of RxDB for Hybrid App Development Offline-First Approach: One of the major advantages of RxDB is its ability to work in an offline mode. It allows mobile applications to store and access data locally, ensuring uninterrupted functionality even when the network connection is weak or unavailable. The database automatically syncs the data with the server once the connection is reestablished, guaranteeing data consistency. Real-Time Data Synchronization: RxDB leverages the power of real-time data synchronization, making it an excellent choice for applications that require collaborative features or live updates. It uses the concept of change streams to detect modifications made to the database and instantly propagates those changes across connected devices. This real-time synchronization enables seamless collaboration and enhances user experience. Reactive Programming Paradigm: RxDB embraces the principles of reactive programming, which simplifies the development process by handling asynchronous events and data streams. By leveraging RxJS observables, developers can write concise, declarative code that reacts to changes in data, ensuring a highly responsive user experience. The reactive programming paradigm enhances code maintainability, scalability, and testability. Easy Integration with Hybrid App Frameworks: RxDB seamlessly integrates with popular hybrid app development frameworks like React Native and Capacitor. This compatibility allows developers to leverage the existing ecosystem and tools of these frameworks, making the transition to RxDB smoother and more efficient. By utilizing RxDB within these frameworks, developers can harness the power of a robust database solution without sacrificing the advantages of hybrid app development. Cross-Platform Support: RxDB enables developers to build cross-platform mobile applications that run seamlessly on both iOS and Android devices. 
This versatility eliminates the need for separate database implementations for different platforms, saving development time and effort. With RxDB, developers can focus on building a unified codebase and delivering a consistent user experience across platforms. ","version":"Next","tagName":"h2"},{"title":"Use Cases for RxDB in Hybrid App Development","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#use-cases-for-rxdb-in-hybrid-app-development","content":" Offline-First Applications: RxDB is an ideal choice for applications that heavily rely on offline functionality. Whether it's a note-taking app, a task manager, or a survey application, RxDB ensures that users can continue working even when connectivity is compromised. The seamless synchronization capabilities of RxDB ensure that changes made offline are automatically propagated once the device reconnects to the internet. Real-Time Collaboration: Applications that require real-time collaboration, such as messaging platforms or collaborative editing tools, can greatly benefit from RxDB. The real-time synchronization capabilities enable multiple users to work on the same data simultaneously, ensuring that everyone sees the latest updates in real-time. Data-Intensive Applications: RxDB's performance and scalability make it suitable for data-intensive applications that handle large datasets or complex data structures. Whether it's a media-rich app, a data visualization tool, or an analytics platform, RxDB can handle the heavy lifting and provide a smooth user experience. Cross-Platform Applications: Hybrid app frameworks like React Native and Capacitor have gained popularity due to their ability to build cross-platform applications. By utilizing RxDB within these frameworks, developers can create a unified codebase that runs seamlessly on both iOS and Android, significantly reducing development time and effort. 
","version":"Next","tagName":"h2"},{"title":"Conclusion","type":1,"pageTitle":"Mobile Database - RxDB as Database for Mobile Applications","url":"/articles/mobile-database.html#conclusion","content":" Mobile databases play a vital role in the performance and functionality of mobile applications. RxDB, with its offline-first approach, real-time data synchronization, and seamless integration with hybrid app development frameworks like React Native and Capacitor, offers a robust solution for managing data in mobile apps. By leveraging the power of reactive programming, RxDB empowers developers to build highly responsive, scalable, and cross-platform applications that deliver an exceptional user experience. With its versatility and ease of use, RxDB is undoubtedly a database solution worth considering for hybrid app development. Embrace the power of RxDB and unlock the full potential of your mobile applications. ","version":"Next","tagName":"h2"},{"title":"Using localStorage in Modern Applications: A Comprehensive Guide","type":0,"sectionRef":"#","url":"/articles/localstorage.html","content":"","keywords":"","version":"Next"},{"title":"What is the localStorage API?","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#what-is-the-localstorage-api","content":" The localStorage API is a built-in feature of web browsers that enables web developers to store small amounts of data persistently on a user's device. It operates on a simple key-value basis, allowing developers to save strings, numbers, and other simple data types. This data remains available even after the user closes the browser or navigates away from the page. The API provides a convenient way to maintain state and store user preferences without relying on server-side storage. 
","version":"Next","tagName":"h2"},{"title":"RxDB as a Database for Progressive Web Apps (PWA)","type":0,"sectionRef":"#","url":"/articles/progressive-web-app-database.html","content":"","keywords":"","version":"Next"},{"title":"What is a Progressive Web App","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#what-is-a-progressive-web-app","content":" Progressive Web Apps are the future of web development, seamlessly combining the best of both web and mobile app worlds. They can be easily installed on the user's home screen, function offline, and load at lightning speed. Unlike hybrid apps, PWAs offer a consistent user experience across platforms, making them a versatile choice for modern applications. PWAs bring a plethora of advantages to the table. They eliminate the hassle of app store installations and updates, reduce dependency on network connectivity, and prioritize fast loading times. By harnessing the power of service workers and intelligent caching mechanisms, PWAs ensure users can access content even in offline mode. Furthermore, PWAs are device-agnostic, seamlessly adapting to various devices, from desktops to smartphones. ","version":"Next","tagName":"h2"},{"title":"Introducing RxDB as a Client-Side Database for PWAs","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#introducing-rxdb-as-a-client-side-database-for-pwas","content":" At the heart of PWAs lies efficient data management, and RxDB steps in as a reliable ally. As a client-side NoSQL database, RxDB seamlessly integrates into web applications, offering real-time data synchronization and manipulation capabilities. This article sheds light on the transformative potential of RxDB as it collaborates harmoniously with PWAs, enabling local-first strategies and elevating user interactions to a whole new level. 
","version":"Next","tagName":"h2"},{"title":"Getting Started with RxDB","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#getting-started-with-rxdb","content":" RxDB emerges as a reactive, schema-based NoSQL database crafted explicitly for client-side applications. Its real-time data synchronization and responsiveness align seamlessly with the dynamic demands of modern PWAs. Local-First Approach The cornerstone of RxDB's philosophy is the local-first approach, empowering PWAs to prioritize data storage and manipulation on the client side. This paradigm ensures that PWAs remain functional even when offline, allowing users to access and interact with data seamlessly. RxDB bridges any gaps in data synchronization once network connectivity is restored. Observable Queries Observable queries (aka Live-Queries) serve as the engine of RxDB's dynamic capabilities. By leveraging these queries, PWAs can monitor and respond to data changes in real time. The result is an engaging user interface with instantaneous updates that captivate users and keep them engaged. await db.heroes.find({ selector: { healthpoints: { $gt: 0 } } }) .$ // the $ returns an observable that emits each time the result set of the query changes .subscribe(aliveHeroes => console.dir(aliveHeroes)); Multi-Tab Support RxDB extends its prowess to multi-tab scenarios, guaranteeing data consistency across different tabs or windows of the same PWA. This feature promotes a seamless transition between various sections of the application, while minimizing data conflicts. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in a Progressive Web App","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#using-rxdb-in-a-progressive-web-app","content":" Integrating RxDB into a Progressive Web App, driven by technologies like React, is a straightforward process. 
By configuring RxDB and installing the necessary packages, developers establish a solid foundation for robust data management within their PWA. ","version":"Next","tagName":"h3"},{"title":"Exploring Different RxStorage Layers","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#exploring-different-rxstorage-layers","content":" RxDB caters to diverse needs through its various RxStorage layers: Dexie.js RxStorage: Leveraging the capabilities of the Dexie.js library for storage.LokiJS RxStorage: Utilizing the strengths of the LokiJS library for storage.IndexedDB RxStorage: Tapping into the browser's IndexedDB for efficient data storage.OPFS RxStorage: Interfacing with the Origin Private File System for seamless persistence.Memory RxStorage: Storing data in memory, ideal for temporary data requirements. This flexibility empowers developers to optimize data storage based on the unique needs of their PWA. Synchronizing Data with RxDB between PWA Clients and Servers To facilitate seamless data synchronization between PWA clients and servers, RxDB offers a range of replication options: RxDB Replication Algorithm: RxDB introduces its own replication algorithm, enabling efficient and reliable data synchronization between clients and servers. CouchDB Replication: Leveraging its roots in CouchDB, RxDB facilitates smooth data replication between clients and CouchDB servers, ensuring data consistency and synchronization across devices. Firestore Replication: RxDB synchronizes data with Google Firestore, a real-time cloud-hosted NoSQL database. This integration guarantees up-to-date data across different instances of the PWA. Peer-to-Peer (P2P) via WebRTC Replication: RxDB supports P2P replication, facilitating direct data synchronization between clients without intermediaries. This decentralized approach is invaluable in scenarios where server infrastructure is limited. 
","version":"Next","tagName":"h2"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#advanced-rxdb-features-and-techniques","content":" ","version":"Next","tagName":"h2"},{"title":"Encryption of Local Data","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#encryption-of-local-data","content":" RxDB empowers PWAs with the ability to encrypt local data, enhancing data security and safeguarding sensitive information. This feature is indispensable for applications handling user credentials, financial transactions, and other confidential data. ","version":"Next","tagName":"h3"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#indexing-and-performance-optimization","content":" Performance optimization is a top priority for PWAs. RxDB addresses this concern by offering indexing options that expedite data retrieval, resulting in a snappier user interface and heightened responsiveness. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#json-key-compression","content":" RxDB introduces JSON key compression, a feature that reduces storage requirements. This optimization is particularly beneficial for PWAs dealing with substantial data volumes, enhancing overall efficiency and resource utilization. 
","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#change-streams-and-event-handling","content":" RxDB introduces change streams, enabling PWAs to react to data changes in real time. This capability empowers dynamic updates to the user interface, promoting interactivity and engagement. ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#conclusion","content":" In the ever-evolving landscape of web application development, Progressive Web Apps continue to redefine user experiences. RxDB emerges as a pivotal player, seamlessly integrating with PWAs and enhancing their capabilities. With features like the local-first approach, observable queries, replication mechanisms, and advanced encryption, RxDB empowers developers to create responsive, offline-capable, and data-driven PWAs. As the demand for sophisticated PWAs continues to surge, RxDB remains an indispensable tool for developers aiming to push the boundaries of innovation and redefine the standards of user engagement. By embracing RxDB, developers ensure their PWAs remain at the forefront of the digital revolution, offering seamless and immersive experiences to users around the world. 
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database for Progressive Web Apps (PWA)","url":"/articles/progressive-web-app-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects.RxDB Progressive Web App in Angular Example ","version":"Next","tagName":"h2"},{"title":"Exploring local storage Methods: A Practical Example","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#exploring-local-storage-methods-a-practical-example","content":" Let's dive into some hands-on code examples to better understand how to leverage the power of localStorage. The API offers several methods for interaction, including setItem, getItem, removeItem, and clear. Consider the following code snippet: // Storing data using setItem localStorage.setItem('username', 'john_doe'); // Retrieving data using getItem const storedUsername = localStorage.getItem('username'); // Removing data using removeItem localStorage.removeItem('username'); // Clearing all data localStorage.clear(); ","version":"Next","tagName":"h2"},{"title":"Storing Complex Data in JavaScript with JSON Serialization","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#storing-complex-data-in-javascript-with-json-serialization","content":" While js localStorage excels at handling simple key-value pairs, it also supports more intricate data storage through JSON serialization. 
By utilizing JSON.stringify and JSON.parse, you can store and retrieve structured data like objects and arrays. Here's an example of storing a document: const user = { name: 'Alice', age: 30, email: '[email protected]' }; // Storing a user object localStorage.setItem('user', JSON.stringify(user)); // Retrieving and parsing the user object const storedUser = JSON.parse(localStorage.getItem('user')); ","version":"Next","tagName":"h2"},{"title":"Understanding the Limitations of local storage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#understanding-the-limitations-of-local-storage","content":" Despite its convenience, localStorage does come with a set of limitations that developers should be aware of: Non-Async Blocking API: One significant drawback is that js localStorage operates as a non-async blocking API. This means that any operations performed on localStorage can potentially block the main thread, leading to slower application performance and a less responsive user experience.Limited Data Structure: Unlike more advanced databases, localStorage is limited to a simple key-value store. This restriction makes it unsuitable for storing complex data structures or managing relationships between data elements.Stringification Overhead: Storing JSON data in localStorage requires stringifying the data before storage and parsing it when retrieved. This process introduces performance overhead, potentially slowing down operations by up to 10 times.Lack of Indexing: localStorage lacks indexing capabilities, making it challenging to perform efficient searches or iterate over data based on specific criteria. This limitation can hinder applications that rely on complex data retrieval.Tab Blocking: In a multi-tab environment, one tab's localStorage operations can impact the performance of other tabs by monopolizing CPU resources. 
You can reproduce this behavior by opening this test file in two browser windows and triggering localStorage inserts in one of them. You will observe that the indication spinner will get stuck in both windows.Storage Limit: Browsers typically impose a storage limit of around 5 MiB for each origin's localStorage. ","version":"Next","tagName":"h2"},{"title":"Reasons to Still Use localStorage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#reasons-to-still-use-localstorage","content":" ","version":"Next","tagName":"h2"},{"title":"Is localStorage Slow?","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#is-localstorage-slow","content":" Contrary to concerns about performance, the localStorage API in JavaScript is surprisingly fast when compared to alternative storage solutions like IndexedDB or OPFS. It excels in handling small key-value assignments efficiently. Due to its simplicity and direct integration with browsers, accessing and modifying localStorage data incur minimal overhead. For scenarios where quick and straightforward data storage is required, localStorage remains a viable option. For example RxDB uses localStorage in the localStorage meta optimizer to manage simple key-value pairs while storing the "normal" documents inside of another storage like IndexedDB. ","version":"Next","tagName":"h3"},{"title":"When Not to Use localStorage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#when-not-to-use-localstorage","content":" While localStorage offers convenience, it may not be suitable for every use case. 
Consider the following situations where alternatives might be more appropriate: Data Must Be Queryable: If your application relies heavily on querying data based on specific criteria, localStorage might not provide the necessary querying capabilities. Complex data retrieval might lead to inefficient code and slow performance.Big JSON Documents: Storing large JSON documents in localStorage can consume a significant amount of memory and degrade performance. It's essential to assess the size of the data you intend to store and consider more robust solutions for handling substantial datasets.Many Read/Write Operations: Excessive read and write operations on localStorage can lead to performance bottlenecks. Other storage solutions might offer better performance and scalability for applications that require frequent data manipulation.Lack of Persistence: If your application can function without persistent data across sessions, consider using in-memory data structures like new Map() or new Set(). These options offer speed and efficiency for transient data. ","version":"Next","tagName":"h2"},{"title":"What to use instead of the localStorage API in JavaScript","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#what-to-use-instead-of-the-localstorage-api-in-javascript","content":" ","version":"Next","tagName":"h2"},{"title":"localStorage vs IndexedDB","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-indexeddb","content":" While localStorage serves as a reliable storage solution for simpler data needs, it's essential to explore alternatives like IndexedDB when dealing with more complex requirements. IndexedDB is designed to store not only key-value pairs but also JSON documents. Unlike localStorage, which usually has a storage limit of around 5-10MB per domain, IndexedDB can handle significantly larger datasets. 
IndexedDB with its support for indexing facilitates efficient querying, making range queries possible. However, it's worth noting that IndexedDB lacks observability, which is a feature unique to localStorage through the storage event. Also, complex queries can pose a challenge with IndexedDB, and while its performance is acceptable, IndexedDB can be too slow for some use cases. // localStorage can observe changes with the storage event. // This feature is missing in IndexedDB addEventListener("storage", (event) => {}); For those looking to harness the full power of IndexedDB with added capabilities, using wrapper libraries like RxDB or Dexie.js is recommended. These libraries augment IndexedDB with features such as complex queries and observability, enhancing its usability for modern applications. In summary, when you compare IndexedDB vs localStorage, IndexedDB wins in any case where large amounts of data are handled, while localStorage has better performance on small key-value datasets. ","version":"Next","tagName":"h3"},{"title":"File System API (OPFS)","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#file-system-api-opfs","content":" Another intriguing option is the OPFS (File System API). This API provides direct access to an origin-based, sandboxed filesystem which is highly optimized for performance and offers in-place write access to its content. OPFS offers impressive performance benefits. However, working with the OPFS API can be complex, and it's only accessible within a WebWorker. To simplify its usage and extend its capabilities, consider using a wrapper library like RxDB's OPFS RxStorage, which builds a comprehensive database on top of the OPFS API. This abstraction allows you to harness the power of the OPFS API without the intricacies of direct usage. 
","version":"Next","tagName":"h3"},{"title":"localStorage vs Cookies","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-cookies","content":" Cookies, once a primary method of client-side data storage, have fallen out of favor in modern web development due to their limitations. While they can store data, they are about 100 times slower when compared to the localStorage API. Additionally, cookies are included in the HTTP header, which can impact network performance. As a result, cookies are not recommended for data storage purposes in contemporary web applications. ","version":"Next","tagName":"h3"},{"title":"localStorage vs WebSQL","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-websql","content":" WebSQL, despite offering a SQL-based interface for client-side data storage, is a deprecated technology and should be avoided. Its API has been phased out of modern browsers, and it lacks the robustness of alternatives like IndexedDB. Moreover, WebSQL tends to be around 10 times slower than IndexedDB, making it a suboptimal choice for applications that demand efficient data manipulation and retrieval. ","version":"Next","tagName":"h3"},{"title":"localStorage vs sessionStorage","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-vs-sessionstorage","content":" In scenarios where data persistence beyond a session is unnecessary, developers often turn to sessionStorage. This storage mechanism retains data only for the duration of a tab or browser session. It survives page reloads and restores, providing a handy solution for temporary data needs. However, it's important to note that sessionStorage is limited in scope and may not suit all use cases. 
","version":"Next","tagName":"h3"},{"title":"AsyncStorage for React Native","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#asyncstorage-for-react-native","content":" For React Native developers, the AsyncStorage API is the go-to solution, mirroring the behavior of localStorage but with asynchronous support. Since not all JavaScript runtimes support localStorage, AsyncStorage offers a seamless alternative for data persistence in React Native applications. ","version":"Next","tagName":"h3"},{"title":"node-localstorage for Node.js","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#node-localstorage-for-nodejs","content":" Because native localStorage is absent in the Node.js JavaScript runtime, you will get the error ReferenceError: localStorage is not defined in Node.js or Node-based runtimes like Next.js. The node-localstorage npm package bridges the gap. This package replicates the browser's localStorage API within the Node.js environment, ensuring consistent and compatible data storage capabilities. ","version":"Next","tagName":"h3"},{"title":"localStorage in browser extensions","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-in-browser-extensions","content":" While browser extensions for Chrome and Firefox support the localStorage API, it is not recommended to use it in that context to store extension-related data. The browser will clear the data in many scenarios, like when the users clear their browsing history. Instead, the Extension Storage API should be used for browser extensions. In contrast to localStorage, the storage API works asynchronously and all operations return a Promise. Also, it provides automatic sync to replicate data between all instances of that browser that the user is logged into. 
The storage API is even able to store JSON-ifiable objects instead of plain strings. // Using the storage API in Chrome await chrome.storage.local.set({ foobar: {nr: 1} }); const result = await chrome.storage.local.get('foobar'); console.log(result.foobar); // {nr: 1} ","version":"Next","tagName":"h2"},{"title":"localStorage in Deno and Bun","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#localstorage-in-deno-and-bun","content":" The Deno JavaScript runtime has a working localStorage API, so running localStorage.setItem() and the other methods will just work, and the locally stored data is persisted across multiple runs. Bun does not support the localStorage JavaScript API. Trying to use localStorage will error with ReferenceError: Can't find variable: localStorage. To store data locally in Bun, you could use the bun:sqlite module instead or directly use an in-JavaScript database with Bun support like RxDB. ","version":"Next","tagName":"h2"},{"title":"Conclusion: Choosing the Right Storage Solution","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#conclusion-choosing-the-right-storage-solution","content":" In the world of modern web development, localStorage serves as a valuable tool for lightweight data storage. Its simplicity and speed make it an excellent choice for small key-value assignments. However, as application complexity grows, developers must assess their storage needs carefully. For scenarios that demand advanced querying, complex data structures, or high-volume operations, alternatives like IndexedDB, wrapper libraries with additional features like RxDB, or platform-specific APIs offer more robust solutions. By understanding the strengths and limitations of various storage options, developers can make informed decisions that pave the way for efficient and scalable applications. 
","version":"Next","tagName":"h2"},{"title":"Follow up","type":1,"pageTitle":"Using localStorage in Modern Applications: A Comprehensive Guide","url":"/articles/localstorage.html#follow-up","content":" Learn how to store and query data with RxDB in the RxDB QuickstartWhy IndexedDB is slow and how to fix itRxStorage performance comparison ","version":"Next","tagName":"h2"},{"title":"What is a realtime database?","type":0,"sectionRef":"#","url":"/articles/realtime-database.html","content":"","keywords":"","version":"Next"},{"title":"Realtime as in realtime computing","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#realtime-as-in-realtime-computing","content":" When "normal" developers hear the word "realtime", they think of Real-time computing (RTC). Real-time computing is a type of computer processing that guarantees specific response times for tasks or events, crucial in applications like industrial control, automotive systems, and aerospace. It relies on specialized operating systems (RTOS) to ensure predictability and low latency. Hard real-time systems must never miss deadlines, while soft real-time systems can tolerate occasional delays. Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds. Consider the role of real-time computing in car airbags: sensors detect collision force, swiftly process the data, and immediately decide to deploy the airbags within milliseconds. Such rapid action is imperative for safeguarding passengers. Hence, the controlling chip must guarantee a certain response time — it must operate in "realtime". But when people talk about realtime databases, especially in the web-development world, they almost never mean realtime as in realtime computing; they mean something else. In fact, with any programming language that runs on end users' devices, it is not even possible to build a "real" realtime database. 
A program, like a JavaScript (browser or Node.js) process, can be halted by the operating system's task manager at any time and therefore it will never be able to guarantee specific response times. To build a realtime computing database, you would need a realtime-capable operating system. ","version":"Next","tagName":"h2"},{"title":"Real time Database as in realtime replication","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#real-time-database-as-in-realtime-replication","content":" When talking about realtime databases, most people refer to realtime, as in realtime replication. Often they mean a very specific product, the Firebase Realtime Database (not the Firestore). In the context of the Firebase Realtime Database, "realtime" means that data changes are synchronized and delivered to all connected clients or devices as soon as they occur, typically within milliseconds. This means that when any client updates, adds, or removes data in the database, all other clients that are connected to the same database instance receive those updates instantly, without the need for manual polling or frequent HTTP requests. In short, when replicating data between databases, instead of polling, we use a websocket connection to live-stream all changes between the server and the clients; this is what is labeled a "realtime database". A similar thing can be done with RxDB and the RxDB Replication Plugins. ","version":"Next","tagName":"h2"},{"title":"Realtime as in realtime applications","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#realtime-as-in-realtime-applications","content":" In the context of realtime client-side applications, "realtime" refers to the immediate or near-instantaneous processing and response to events or data inputs. When data changes, the application must directly update to reflect the new data state, without any user interaction or delay. 
Notice that the change to the data could have come from any source, like a user action, an operation in another browser tab, or even an operation from another device that has been replicated to the client. In contrast to push-pull based databases (e.g., MySQL or MongoDB servers), a realtime database contains features which make it easy to build realtime applications. For example with RxDB you can not only fetch query results once, but instead you can subscribe to a query and directly update the HTML dom tree whenever the query has a new result set: await db.heroes.find({ selector: { healthpoints: { $gt: 0 } } }) .$ // The $ returns an observable that emits whenever the query's result set changes. .subscribe(aliveHeroes => { // Refresh the HTML list each time there are new query results. const newContent = aliveHeroes.map(doc => '<li>' + doc.name + '</li>'); document.getElementById('#myList').innerHTML = newContent; }); // You can even subscribe to any RxDB document's fields. myDocument.firstName$.subscribe(newName => console.log('name is: ' + newName)); A competent realtime application is engineered to offer feedback or results swiftly, ideally within milliseconds to microseconds. Ideally, a data modification should be processed in under 16 milliseconds (since 1 second divided by 60 frames equals 16.66ms) to ensure users don't perceive any lag from input to visualization. RxDB utilizes the EventReduce algorithm to manage changes more swiftly than 16ms. However, it can never assure fixed response times as a "realtime computing database" would. 
","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"What is a realtime database?","url":"/articles/realtime-database.html#follow-up","content":" Dive into the RxDB QuickstartDiscover more about the RxDB realtime replication protocolJoin the conversation at RxDB Chat ","version":"Next","tagName":"h2"},{"title":"RxDB as a Database for React Applications","type":0,"sectionRef":"#","url":"/articles/react-database.html","content":"","keywords":"","version":"Next"},{"title":"Introducing RxDB as a JavaScript Database","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#introducing-rxdb-as-a-javascript-database","content":" RxDB, a powerful JavaScript database, has garnered attention as an optimal solution for managing data in React applications. Built on top of the IndexedDB standard, RxDB combines the principles of reactive programming with database management. Its core features include reactive data handling, offline-first capabilities, and robust data replication. ","version":"Next","tagName":"h2"},{"title":"What is RxDB?","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#what-is-rxdb","content":" RxDB, short for Reactive Database, is an open-source JavaScript database that seamlessly integrates reactive programming with database operations. It offers a comprehensive API for performing database actions and synchronizing data across clients and servers. RxDB's underlying philosophy revolves around observables, allowing developers to reactively manage data changes and create dynamic user interfaces. ","version":"Next","tagName":"h2"},{"title":"Reactive Data Handling","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#reactive-data-handling","content":" One of RxDB's standout features is its support for reactive data handling. 
Traditional databases often require manual intervention for data fetching and updating, leading to complex and error-prone code. RxDB, however, automatically notifies subscribers whenever data changes occur, eliminating the need for explicit data manipulation. This reactive approach simplifies code and enhances the responsiveness of React components. ","version":"Next","tagName":"h3"},{"title":"Local-First Approach","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#local-first-approach","content":" RxDB embraces a local-first methodology, enabling applications to function seamlessly even in offline scenarios. By storing data locally, RxDB ensures that users can interact with the application and make updates regardless of internet connectivity. Once the connection is reestablished, RxDB synchronizes the local changes with the remote database, maintaining data consistency across devices. ","version":"Next","tagName":"h3"},{"title":"Data Replication","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#data-replication","content":" Data replication is a cornerstone of modern applications that require synchronization between multiple clients and servers. RxDB provides robust data replication mechanisms that facilitate real-time synchronization between different instances of the database. This ensures that changes made on one client are promptly propagated to others, contributing to a cohesive and unified user experience. ","version":"Next","tagName":"h3"},{"title":"Observable Queries","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#observable-queries","content":" RxDB extends the concept of observables beyond data changes. It introduces observable queries, allowing developers to observe the results of database queries. This feature enables automatic updates to query results whenever relevant data changes occur. 
Observable queries simplify state management by eliminating the need to manually trigger updates in response to changing data. await db.heroes.find({ selector: { healthpoints: { $gt: 0 } } }) .$ // the $ returns an observable that emits each time the result set of the query changes .subscribe(aliveHeroes => console.dir(aliveHeroes)); ","version":"Next","tagName":"h3"},{"title":"Multi-Tab Support","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#multi-tab-support","content":" Web applications often operate in multiple browser tabs or windows. RxDB accommodates this scenario by offering built-in multi-tab support. It ensures that data changes made in one tab are efficiently propagated to other tabs, maintaining data consistency and providing a seamless experience for users interacting with the application across different tabs. ","version":"Next","tagName":"h3"},{"title":"RxDB vs. Other React Database Options","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#rxdb-vs-other-react-database-options","content":" While considering database options for React applications, RxDB stands out due to its unique combination of reactive programming and database capabilities. Unlike traditional solutions such as IndexedDB or Web Storage, which provide basic data storage, RxDB offers a dedicated database solution with advanced features. Additionally, while state management libraries like Redux and MobX can be adapted for database use, RxDB provides an integrated solution specifically designed for handling data. 
","version":"Next","tagName":"h3"},{"title":"IndexedDB in React and the Advantage of RxDB","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#indexeddb-in-react-and-the-advantage-of-rxdb","content":" Using IndexedDB directly in React can be challenging due to its low-level, callback-based API which doesn't align neatly with modern React's Promise and async/await patterns. This intricacy often leads to bulky and complex implementations for developers. Also, when used incorrectly, IndexedDB can have a worse performance profile than it otherwise would. In contrast, RxDB, with the IndexedDB RxStorage and the Dexie.js RxStorage, abstracts these complexities, integrating reactive programming and providing a more streamlined experience for data management in React applications. Thus, RxDB offers a more intuitive approach, eliminating much of the manual overhead required with IndexedDB. ","version":"Next","tagName":"h3"},{"title":"Using RxDB in a React Application","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#using-rxdb-in-a-react-application","content":" The process of integrating RxDB into a React application is straightforward. Begin by installing RxDB as a dependency: npm install rxdb rxjs. Once installed, RxDB can be imported and initialized within your React components. 
The following code snippet illustrates a basic setup: import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'heroesdb', // <- name storage: getRxStorageDexie(), // <- RxStorage password: 'myPassword', // <- password (optional) multiInstance: true, // <- multiInstance (optional, default: true) eventReduce: true, // <- eventReduce (optional, default: false) cleanupPolicy: {} // <- custom cleanup policy (optional) }); ","version":"Next","tagName":"h3"},{"title":"Using RxDB React Hooks","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#using-rxdb-react-hooks","content":" The rxdb-hooks package provides a set of React hooks that simplify data management within components. These hooks leverage RxDB's reactivity to automatically update components when data changes occur. The following example demonstrates the usage of the useRxCollection and useRxQuery hooks to query and observe a collection: const collection = useRxCollection('characters'); const query = collection.find().where('affiliation').equals('Jedi'); const { result: characters, isFetching, fetchMore, isExhausted, } = useRxQuery(query, { pageSize: 5, pagination: 'Infinite', }); if (isFetching) { return 'Loading...'; } return ( <CharacterList> {characters.map((character, index) => ( <Character character={character} key={index} /> ))} {!isExhausted && <button onClick={fetchMore}>load more</button>} </CharacterList> ); ","version":"Next","tagName":"h3"},{"title":"Different RxStorage Layers for RxDB","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#different-rxstorage-layers-for-rxdb","content":" RxDB offers multiple storage layers, each backed by a different underlying technology. Developers can choose the storage layer that best suits their application's requirements. 
Some available options include: Dexie.js RxStorage: Built on top of Dexie.js, a popular IndexedDB wrapper.LokiJS RxStorage: Utilizes the LokiJS in-memory database.IndexedDB RxStorage: The default RxDB storage layer, providing efficient data storage in modern browsers.OPFS RxStorage: Uses the Origin Private File System (OPFS) for storage in modern browsers.Memory RxStorage: Stores data in memory, primarily intended for testing and development purposes.SQLite RxStorage: Stores data in an SQLite database. Can be used in a browser with React by using a SQLite database that was compiled to WebAssembly. Using SQLite in React might not be the best idea, because a compiled SQLite wasm file is about one megabyte of code that has to be downloaded and parsed by your users' browsers. Using native browser APIs like IndexedDB and OPFS has proven to be a more optimal database solution for browser-based React apps compared to SQLite. ","version":"Next","tagName":"h3"},{"title":"Synchronizing Data with RxDB between Clients and Servers","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#synchronizing-data-with-rxdb-between-clients-and-servers","content":" The offline-first approach is a fundamental principle of RxDB's design. When dealing with client-server synchronization, RxDB ensures that changes made offline are captured and propagated to the server once connectivity is reestablished. This mechanism guarantees that data remains consistent across different client instances, even when operating in an occasionally connected environment. RxDB offers a range of replication plugins that facilitate data synchronization between clients and servers. These plugins support various synchronization strategies, such as one-way replication, two-way replication, and custom conflict resolution. Developers can select the appropriate plugin based on their application's synchronization requirements. 
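The pull direction of such a client-server sync can be sketched in plain JavaScript. This is an illustrative, self-contained model only — the names (pullChanges, updatedAt) and the timestamp-based checkpoint are assumptions for the sketch, not the actual RxDB replication plugin API:

```javascript
// Illustrative sketch of checkpoint-based pull replication: the client
// repeatedly asks the server for all changes after its last checkpoint.
function pullChanges(serverDocs, checkpoint, batchSize) {
  // Only documents changed after the last checkpoint are transferred.
  const newer = serverDocs
    .filter(doc => doc.updatedAt > checkpoint)
    .sort((a, b) => a.updatedAt - b.updatedAt)
    .slice(0, batchSize);
  // The new checkpoint is the timestamp of the last received document.
  const newCheckpoint = newer.length > 0
    ? newer[newer.length - 1].updatedAt
    : checkpoint;
  return { documents: newer, checkpoint: newCheckpoint };
}
```

Calling pullChanges again with the returned checkpoint continues where the previous batch stopped, which is what lets an occasionally connected client catch up incrementally instead of re-downloading everything.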
","version":"Next","tagName":"h3"},{"title":"Advanced RxDB Features and Techniques","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#advanced-rxdb-features-and-techniques","content":" Encryption of Local Data Security is paramount when handling sensitive user data. RxDB supports data encryption, ensuring that locally stored information remains protected from unauthorized access. This feature is particularly valuable when dealing with sensitive data in offline scenarios. ","version":"Next","tagName":"h3"},{"title":"Indexing and Performance Optimization","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#indexing-and-performance-optimization","content":" Efficient indexing is critical for achieving optimal database performance. RxDB provides mechanisms to define indexes on specific fields, enhancing query speed and reducing the computational overhead of data retrieval. ","version":"Next","tagName":"h3"},{"title":"JSON Key Compression","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#json-key-compression","content":" RxDB employs JSON key compression to reduce storage space and improve performance. This technique minimizes the memory footprint of the database, making it suitable for applications with limited resources. ","version":"Next","tagName":"h3"},{"title":"Change Streams and Event Handling","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#change-streams-and-event-handling","content":" RxDB enables developers to subscribe to change streams, which emit events whenever data changes occur. This functionality facilitates real-time event handling and provides opportunities for implementing features such as notifications and live updates. 
","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#conclusion","content":" In the realm of React application development, efficient data management is pivotal to delivering a seamless and engaging user experience. RxDB emerges as a compelling solution, seamlessly integrating reactive programming principles with sophisticated database capabilities. By adopting RxDB, React developers can harness its powerful features, including reactive data handling, offline-first support, and real-time synchronization. With RxDB as a foundational pillar, React applications can excel in responsiveness, scalability, and data integrity. As the landscape of web development continues to evolve, RxDB remains a steadfast companion for creating robust and dynamic React applications. ","version":"Next","tagName":"h2"},{"title":"Follow Up","type":1,"pageTitle":"RxDB as a Database for React Applications","url":"/articles/react-database.html#follow-up","content":" To explore more about RxDB and leverage its capabilities for browser database development, check out the following resources: RxDB GitHub Repository: Visit the official GitHub repository of RxDB to access the source code, documentation, and community support.RxDB Quickstart: Get started quickly with RxDB by following the provided quickstart guide, which provides step-by-step instructions for setting up and using RxDB in your projects.RxDB React Example at GitHub ","version":"Next","tagName":"h2"},{"title":"📥 Backup Plugin","type":0,"sectionRef":"#","url":"/backup.html","content":"","keywords":"","version":"Next"},{"title":"import","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#import","content":" The backup plugin works only in Node.js, not in the browser. This means we have to import it into RxDB before it can be used. 
import { addRxPlugin } from 'rxdb'; import { RxDBBackupPlugin } from 'rxdb/plugins/backup'; addRxPlugin(RxDBBackupPlugin); ","version":"Next","tagName":"h2"},{"title":"one-time backup","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#one-time-backup","content":" Write the whole database to the filesystem once. When called multiple times, it will continue from the last checkpoint and not start all over again. const backupOptions = { // if false, a one-time backup will be written live: false, // the folder where the backup will be stored directory: '/my-backup-folder/', // if true, attachments will also be saved attachments: true } const backupState = myDatabase.backup(backupOptions); await backupState.awaitInitialBackup(); // call again to run from the last checkpoint const backupState2 = myDatabase.backup(backupOptions); await backupState2.awaitInitialBackup(); ","version":"Next","tagName":"h2"},{"title":"live backup","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#live-backup","content":" When live: true is set, the backup will write all ongoing changes to the backup directory. const backupOptions = { // set live: true to have an ongoing backup live: true, directory: '/my-backup-folder/', attachments: true } const backupState = myDatabase.backup(backupOptions); // you can still await the initial backup write, but further changes will still be processed. await backupState.awaitInitialBackup(); ","version":"Next","tagName":"h2"},{"title":"writeEvents$","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#writeevents","content":" You can listen to the writeEvents$ Observable to get notified about written backup files. 
const backupOptions = { live: false, directory: '/my-backup-folder/', attachments: true } const backupState = myDatabase.backup(backupOptions); const subscription = backupState.writeEvents$.subscribe(writeEvent => console.dir(writeEvent)); /* > { collectionName: 'humans', documentId: 'foobar', files: [ '/my-backup-folder/foobar/document.json' ], deleted: false } */ ","version":"Next","tagName":"h2"},{"title":"Import backup","type":1,"pageTitle":"📥 Backup Plugin","url":"/backup.html#import-backup","content":" It is currently not possible to import from a written backup. If you need this functionality, please make a pull request. ","version":"Next","tagName":"h2"},{"title":"Contribution","type":0,"sectionRef":"#","url":"/contribution.html","content":"","keywords":"","version":"Next"},{"title":"Requirements","type":1,"pageTitle":"Contribution","url":"/contribution.html#requirements","content":" Before you can start developing, do the following: Make sure you have installed nodejs with the version stated in the .nvmrcClone the repository git clone https://github.com/pubkey/rxdb.gitInstall the dependencies cd rxdb && npm installMake sure that the tests work for you npm run test ","version":"Next","tagName":"h2"},{"title":"Flow","type":1,"pageTitle":"Contribution","url":"/contribution.html#flow","content":" While developing you should run npm run dev and leave it open in the console. This will run the unit-tests on every file-change. If you have a slow device, you can also manually run npm run test:node every time you want to check if the tests work. ","version":"Next","tagName":"h2"},{"title":"Adding tests","type":1,"pageTitle":"Contribution","url":"/contribution.html#adding-tests","content":" Before you start creating a bugfix or a feature, you should create a test to reproduce it. Tests are in the test/unit-folder. If you want to reproduce a bug, you can modify the test in this file. 
","version":"Next","tagName":"h2"},{"title":"Making a PR","type":1,"pageTitle":"Contribution","url":"/contribution.html#making-a-pr","content":" If you make a pull-request, ensure the following: Every feature or bugfix must be committed together with a unit-test which ensures everything works as expected.Do not commit build-files (anything in the dist-folder)Before you add non-trivial changes, create an issue to discuss if this will be merged and you don't waste your time.To run the unit and integration-tests, do npm run test and ensure everything works as expected ","version":"Next","tagName":"h2"},{"title":"Getting help","type":1,"pageTitle":"Contribution","url":"/contribution.html#getting-help","content":" If you need help with your contribution, ask at discord. Docs The source of the documentation is at the docs-src-folder. To read the docs locally, run npm run docs:install && npm run docs:serve and open http://localhost:4000/ Thank you for contributing! ","version":"Next","tagName":"h2"},{"title":"🧹 Cleanup","type":0,"sectionRef":"#","url":"/cleanup.html","content":"","keywords":"","version":"Next"},{"title":"Add the cleanup plugin","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#add-the-cleanup-plugin","content":" import { addRxPlugin } from 'rxdb'; import { RxDBCleanupPlugin } from 'rxdb/plugins/cleanup'; addRxPlugin(RxDBCleanupPlugin); ","version":"Next","tagName":"h2"},{"title":"Create a database with cleanup options","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#create-a-database-with-cleanup-options","content":" You can set a specific cleanup policy when a RxDatabase is created. For most use cases, the defaults should be ok. 
import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), cleanupPolicy: { /** * The minimum time in milliseconds for how long * a document has to be deleted before it is * purged by the cleanup. * [default=one month] */ minimumDeletedTime: 1000 * 60 * 60 * 24 * 31, // one month /** * The minimum amount of time that the RxCollection must have existed. * This ensures that at the initial page load, more important * tasks are not slowed down because a cleanup process is running. * [default=60 seconds] */ minimumCollectionAge: 1000 * 60, // 60 seconds /** * After the initial cleanup is done, * a new cleanup is started after [runEach] milliseconds * [default=5 minutes] */ runEach: 1000 * 60 * 5, // 5 minutes /** * If set to true, * RxDB will await all running replications * to not have a replication cycle running. * This ensures we do not remove deleted documents * when they might not have been replicated yet. * [default=true] */ awaitReplicationsInSync: true, /** * If true, it will only start the cleanup * when the current instance is also the leader. * This ensures that when RxDB is used in multiInstance mode, * only one instance will start the cleanup. * [default=true] */ waitForLeadership: true } }); ","version":"Next","tagName":"h2"},{"title":"Calling cleanup manually","type":1,"pageTitle":"🧹 Cleanup","url":"/cleanup.html#calling-cleanup-manually","content":" You can manually run a cleanup per collection by calling RxCollection.cleanup(). /** * Manually run the cleanup with the * minimumDeletedTime from the cleanupPolicy. */ await myRxCollection.cleanup(); /** * Overwrite the minimumDeletedTime * by setting it explicitly (time in milliseconds) */ await myRxCollection.cleanup(1000); /** * Purge all deleted documents no * matter when they were deleted * by setting minimumDeletedTime to zero. 
*/ await myRxCollection.cleanup(0); ","version":"Next","tagName":"h2"},{"title":"data-migration","type":0,"sectionRef":"#","url":"/data-migration","content":"data-migration This documentation page has been moved here","keywords":"","version":"Next"},{"title":"Capacitor Database - SQLite, RxDB and others","type":0,"sectionRef":"#","url":"/capacitor-database.html","content":"","keywords":"","version":"Next"},{"title":"Database Solutions for Capacitor","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#database-solutions-for-capacitor","content":" ","version":"Next","tagName":"h2"},{"title":"Preferences API","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#preferences-api","content":" Capacitor comes with a native Preferences API which is a simple, persistent key->value store for lightweight data, similar to the browser's localStorage or React Native AsyncStorage. To use it, you first have to install it from npm npm install @capacitor/preferences and then you can import it and write/read data. Notice that all calls to the preferences API are asynchronous so they return a Promise that must be await-ed. import { Preferences } from '@capacitor/preferences'; // write await Preferences.set({ key: 'foo', value: 'bar', }); // read const { value } = await Preferences.get({ key: 'foo' }); // > 'bar' // delete await Preferences.remove({ key: 'foo' }); The preferences API is good when only a small amount of data needs to be stored and when no query capabilities besides the key access are required. Complex queries or other features like indexes or replication are not supported, which makes the preferences API not suitable for anything more than storing simple data like user settings. 
","version":"Next","tagName":"h3"},{"title":"Localstorage/IndexedDB/WebSQL","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#localstorageindexeddbwebsql","content":" Since Capacitor apps run in a web view, Web APIs like IndexedDB, Localstorage and WebSQL are available. But the default browser behavior is to clean up these storages regularly when they are not in use for a long time or the device is low on space. Therefore you cannot 100% rely on the persistence of the stored data and your application needs to expect that the data will be lost eventually. Storing data in these storages can be done in browsers, because there is no other option. But in Capacitor iOS and Android, you should not rely on these. ","version":"Next","tagName":"h3"},{"title":"SQLite","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#sqlite","content":" SQLite is a SQL based relational database written in C that was crafted to be embedded inside applications. Operations are written in the SQL query language and SQLite generally follows the PostgreSQL syntax. To use SQLite in Capacitor, there are three options: The @capacitor-community/sqlite packageThe cordova-sqlite-storage packageThe non-free Ionic Secure Storage, which comes at $999 per month. It is recommended to use the @capacitor-community/sqlite package because it has the best maintenance and is open source. Install it first npm install --save @capacitor-community/sqlite and then set the storage location for iOS apps: { "plugins": { "CapacitorSQLite": { "iosDatabaseLocation": "Library/CapacitorDatabase" } } } Now you can create a database connection and use the SQLite database. 
import { Capacitor } from '@capacitor/core'; import { CapacitorSQLite, SQLiteDBConnection, SQLiteConnection, capSQLiteSet, capSQLiteChanges, capSQLiteValues, capEchoResult, capSQLiteResult, capNCDatabasePathResult } from '@capacitor-community/sqlite'; const sqlite = new SQLiteConnection(CapacitorSQLite); const database: SQLiteDBConnection = await sqlite.createConnection(databaseName, encrypted, mode, version, readOnly); const { values } = await database.query('SELECT somevalue FROM sometable'); The downside of SQLite is that it is lacking many features that are helpful when using a database together with a UI-based application like your Capacitor app. For example it is not possible to observe queries or document fields. Also, there is no realtime replication feature; you can only import JSON files. This makes SQLite a good solution when you just want to store data on the client, but when you want to sync data with a server or other clients, or create big complex realtime applications, you have to use something else. ","version":"Next","tagName":"h3"},{"title":"RxDB","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#rxdb","content":" RxDB is a local-first NoSQL database for JavaScript applications like hybrid apps. Because it is reactive, you can subscribe to all state changes like the result of a query or even a single field of a document. This makes it easy to develop the kind of UI-based realtime applications you typically build with Capacitor. Because RxDB is made for web applications, most of the available RxStorage plugins can be used to store and query data in a Capacitor app. However, it is recommended to use the SQLite RxStorage because it stores the data on the filesystem of the device, not in the JavaScript runtime (like IndexedDB). Storing data on the filesystem ensures it is persistent and will not be cleaned up by any process. 
Also the performance of SQLite is much better compared to IndexedDB, because SQLite does not have to go through the browser's permission layers. For the SQLite binding you should use the @capacitor-community/sqlite package. Because the SQLite RxStorage is part of the 👑 Premium Plugins which must be purchased, it is recommended to use the Dexie.js RxStorage while testing and prototyping your Capacitor app. To use the SQLite RxStorage in Capacitor you have to install all dependencies via npm install rxdb rxjs rxdb-premium @capacitor-community/sqlite. For iOS apps you should add a database location in your Capacitor settings: { "plugins": { "CapacitorSQLite": { "iosDatabaseLocation": "Library/CapacitorDatabase" } } } Then you can assemble the RxStorage and create a database with it: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsCapacitor } from 'rxdb-premium/plugins/storage-sqlite'; import { CapacitorSQLite, SQLiteConnection } from '@capacitor-community/sqlite'; import { Capacitor } from '@capacitor/core'; const sqlite = new SQLiteConnection(CapacitorSQLite); // create database const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsCapacitor(sqlite, Capacitor) }) }); // create collections const collections = await myRxDatabase.addCollections({ humans: { /* ... */ } }); // insert document await collections.humans.insert({id: 'foo', name: 'bar'}); // run a query const result = await collections.humans.find({ selector: { name: 'bar' } }).exec(); // observe a query collections.humans.find({ selector: { name: 'bar' } }).$.subscribe(result => {/* ... 
*/}); ","version":"Next","tagName":"h3"},{"title":"Follow up","type":1,"pageTitle":"Capacitor Database - SQLite, RxDB and others","url":"/capacitor-database.html#follow-up","content":" If you haven't done yet, you should start learning about RxDB with the Quickstart Tutorial.There is a followup list of other client side database alternatives. ","version":"Next","tagName":"h2"},{"title":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","type":0,"sectionRef":"#","url":"/articles/websockets-sse-polling-webrtc-webtransport.html","content":"","keywords":"","version":"Next"},{"title":"What is Long Polling?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-is-long-polling","content":" Long polling was the first "hack" to enable a server-client messaging method that can be used in browsers over HTTP. The technique emulates server push communications with normal XHR requests. Unlike traditional polling, where the client repeatedly requests data from the server at regular intervals, long polling establishes a connection to the server that remains open until new data is available. Once the server has new information, it sends the response to the client, and the connection is closed. Immediately after receiving the server's response, the client initiates a new request, and the process repeats. This method allows for more immediate data updates and reduces unnecessary network traffic and server load. However, it can still introduce delays in communication and is less efficient than other real-time technologies like WebSockets. 
// long-polling in a JavaScript client function longPoll() { fetch('http://example.com/poll') .then(response => response.json()) .then(data => { console.log("Received data:", data); longPoll(); // Immediately establish a new long polling request }) .catch(error => { /** * Errors can appear in normal conditions when a * connection timeout is reached or when the client goes offline. * On errors we just restart the polling after some delay. */ setTimeout(longPoll, 10000); }); } longPoll(); // Initiate the long polling Implementing long-polling on the client side is pretty simple, as shown in the code above. However, the backend can be harder to implement correctly because it must ensure that the client receives all events and does not miss updates while it is reconnecting. ","version":"Next","tagName":"h3"},{"title":"What are WebSockets?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-are-websockets","content":" WebSockets provide a full-duplex communication channel over a single, long-lived connection between the client and server. This technology enables browsers and servers to exchange data without the overhead of HTTP request-response cycles, facilitating real-time data transfer for applications like live chat, gaming, or financial trading platforms. WebSockets represent a significant advancement over traditional HTTP by allowing both parties to send data independently once the connection is established, making it ideal for scenarios that require low latency and high-frequency updates. 
// WebSocket in a JavaScript client const socket = new WebSocket('ws://example.com'); socket.onopen = function(event) { console.log('Connection established'); // Sending a message to the server socket.send('Hello Server!'); }; socket.onmessage = function(event) { console.log('Message from server:', event.data); }; While the basics of the WebSocket API are easy to use, it has proven to be rather complex in production. A socket can lose its connection and must be re-created accordingly. Especially detecting if a connection is still usable or not can be very tricky. Usually you would add a ping-pong heartbeat to ensure that the open connection is not closed. This complexity is why most people use a library on top of WebSockets like Socket.IO which handles all these cases and even provides fallbacks to long-polling if required. ","version":"Next","tagName":"h3"},{"title":"What are Server-Sent-Events?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-are-server-sent-events","content":" Server-Sent Events (SSE) provide a standard way to push server updates to the client over HTTP. Unlike WebSockets, SSEs are designed exclusively for one-way communication from server to client, making them ideal for scenarios like live news feeds, sports scores, or any situation where the client needs to be updated in real time without sending data to the server. You can think of Server-Sent-Events as a single HTTP request where the backend does not send the whole body at once, but instead keeps the connection open and trickles the answer by sending a single line each time an event has to be sent to the client. Creating a connection for receiving events with SSE is straightforward. On the client side in a browser, you initialize an EventSource instance with the URL of the server-side script that generates the events. 
Listening for messages involves attaching event handlers directly to the EventSource instance. The API distinguishes between generic message events and named events, allowing for more structured communication. Here's how you can set it up in JavaScript: // Connecting to the server-side event stream const evtSource = new EventSource("https://example.com/events"); // Handling generic message events evtSource.onmessage = event => { console.log('got message: ' + event.data); }; In contrast to WebSockets, an EventSource will automatically reconnect on connection loss. On the server side, your script must set the Content-Type header to text/event-stream and format each message according to the SSE specification. This includes specifying event types, data payloads, and optional fields like event ID and retry timing. Here's how you can set up a simple SSE endpoint in a Node.js Express app: import express from 'express'; const app = express(); const PORT = process.env.PORT || 3000; app.get('/events', (req, res) => { res.writeHead(200, { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache', 'Connection': 'keep-alive', }); const sendEvent = (data) => { // all message lines must be prefixed with 'data: ' const formattedData = `data: ${JSON.stringify(data)}\\n\\n`; res.write(formattedData); }; // Send an event every 2 seconds const intervalId = setInterval(() => { const message = { time: new Date().toTimeString(), message: 'Hello from the server!', }; sendEvent(message); }, 2000); // Clean up when the connection is closed req.on('close', () => { clearInterval(intervalId); res.end(); }); }); app.listen(PORT, () => console.log(`Server running on http://localhost:${PORT}`)); ","version":"Next","tagName":"h3"},{"title":"What is the WebTransport API?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-is-the-webtransport-api","content":" WebTransport is 
a cutting-edge API designed for efficient, low-latency communication between web clients and servers. It leverages the HTTP/3 QUIC protocol to enable a variety of data transfer capabilities, such as sending data over multiple streams, in both reliable and unreliable manners, and even allowing data to be sent out of order. This makes WebTransport a powerful tool for applications requiring high-performance networking, such as real-time gaming, live streaming, and collaborative platforms. However, it's important to note that WebTransport is currently a working draft and has not yet achieved widespread adoption. As of now (March 2024), WebTransport is in a Working Draft and not widely supported. You cannot yet use WebTransport in the Safari browser and there is also no native support in Node.js. This limits its usability across different platforms and environments. Even when WebTransport becomes widely supported, its API is very complex to use and people will likely build libraries on top of WebTransport rather than using it directly in an application's source code. ","version":"Next","tagName":"h3"},{"title":"What is WebRTC?","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#what-is-webrtc","content":" WebRTC (Web Real-Time Communication) is an open-source project and API standard that enables real-time communication (RTC) capabilities directly within web browsers and mobile applications without the need for complex server infrastructure or the installation of additional plugins. It supports peer-to-peer connections for streaming audio, video, and data exchange between browsers. WebRTC is designed to work through NATs and firewalls, utilizing protocols like ICE, STUN, and TURN to establish a connection between peers. 
While WebRTC is made to be used for client-client interactions, it could also be leveraged for server-client communication where the server simply simulates being another client. This approach only makes sense for niche use cases, which is why WebRTC will be ignored as an option in the following. The problem is that for WebRTC to work, you need a signaling-server anyway, which would then again run over WebSockets, SSE or WebTransport. This defeats the purpose of using WebRTC as a replacement for these technologies. ","version":"Next","tagName":"h3"},{"title":"Limitations of the technologies","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#limitations-of-the-technologies","content":" ","version":"Next","tagName":"h2"},{"title":"Sending Data in both directions","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#sending-data-in-both-directions","content":" Only WebSockets and WebTransport allow sending data in both directions so that you can receive server-data and send client-data over the same connection. While it would also be possible with Long-Polling in theory, it is not recommended because sending "new" data to an existing long-polling connection would require an additional HTTP request anyway. So instead of doing that you can send data directly from the client to the server with an additional HTTP request without interrupting the long-polling connection. Server-Sent-Events do not support sending any additional data to the server. You can only do the initial request, and even there you cannot send POST-like data in the HTTP body by default with the native EventSource API. 
Instead you have to put all data inside of the URL parameters, which is considered a bad practice for security because credentials might leak into server logs, proxies and caches. To fix this problem, RxDB for example uses the eventsource polyfill instead of the native EventSource API. This library adds additional functionality like sending custom HTTP headers. Also there is this library from Microsoft which allows sending body data and using POST requests instead of GET. ","version":"Next","tagName":"h3"},{"title":"6-Requests per Domain Limit","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#6-requests-per-domain-limit","content":" Most modern browsers allow six connections per domain, which limits the usability of all steady server-to-client messaging methods. The limitation of six connections is even shared across browser tabs, so when you open the same page in multiple tabs, they would have to share the six-connection-pool with each other. This limitation is part of the HTTP/1.1-RFC (which even defines a lower number of only two connections). Quote From RFC 2616 – Section 8.1.4: "Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion." While that policy makes sense to prevent website owners from using their visitors to DDoS other websites, it can be a big problem when multiple connections are required to handle server-client communication for legitimate use cases. 
To work around the limitation you have to use HTTP/2 or HTTP/3, with which the browser will only open a single connection per domain and then use multiplexing to run all data through a single connection. While this gives you a virtually infinite amount of parallel connections, there is a SETTINGS_MAX_CONCURRENT_STREAMS setting which limits the actual amount of connections. The default is 100 concurrent streams for most configurations. In theory the connection limit could also be increased by the browser, at least for specific APIs like EventSource, but the issues have been marked as "won't fix" by Chromium and Firefox. Lower the amount of connections in Browser Apps When you build a browser application, you have to assume that your users will use the app not only once, but in multiple browser tabs in parallel. By default you likely will open one server-stream-connection per tab, which is often not necessary at all. Instead you can open only a single connection and share it between tabs, no matter how many tabs are open. RxDB does that with the LeaderElection from the broadcast-channel npm package to only have one stream of replication between server and clients. You can use that package standalone (without RxDB) for any type of application. ","version":"Next","tagName":"h3"},{"title":"Connections are not kept open on mobile apps","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#connections-are-not-kept-open-on-mobile-apps","content":" In the context of mobile applications running on operating systems like Android and iOS, maintaining open connections, such as those used for WebSockets and the others, poses a significant challenge. Mobile operating systems are designed to automatically move applications into the background after a certain period of inactivity, effectively closing any open connections. 
This behavior is a part of the operating system's resource management strategy to conserve battery and optimize performance. As a result, developers often rely on mobile push notifications as an efficient and reliable method to send data from servers to clients. Push notifications allow servers to alert the application of new data, prompting an action or update, without the need for a persistent open connection. ","version":"Next","tagName":"h3"},{"title":"Proxies and Firewalls","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#proxies-and-firewalls","content":" From consulting many RxDB users, it has been shown that in enterprise environments (aka "at work") it is often hard to implement a WebSocket server into the infrastructure because many proxies and firewalls block non-HTTP connections. Therefore using Server-Sent-Events provides an easier way of enterprise integration. Also long-polling uses only plain HTTP-requests and might be an option. ","version":"Next","tagName":"h3"},{"title":"Performance Comparison","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#performance-comparison","content":" Comparing the performance of WebSockets, Server-Sent Events (SSE), Long-Polling and WebTransport directly involves evaluating key aspects such as latency, throughput, server load, and scalability under various conditions. First let's look at the raw numbers. A good performance comparison can be found in this repo which tests the message times in a Go server implementation. Here we can see that the performance of WebSockets, WebRTC and WebTransport is comparable: note Remember that WebTransport is a pretty new technology based on the also new HTTP/3 protocol. In the future (after March 2024) there might be more performance optimizations. 
Also WebTransport is optimized to use less power, a metric which is not tested here. Let's also compare latency, throughput and scalability: ","version":"Next","tagName":"h2"},{"title":"Latency","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#latency","content":" WebSockets: Offers the lowest latency due to its full-duplex communication over a single, persistent connection. Ideal for real-time applications where immediate data exchange is critical.Server-Sent Events: Also provides low latency for server-to-client communication but cannot natively send messages back to the server without additional HTTP requests.Long-Polling: Incurs higher latency as it relies on establishing new HTTP connections for each data transmission, making it less efficient for real-time updates. Also it can occur that the server wants to send an event when the client is still in the process of opening a new connection. In these cases the latency would be significantly larger.WebTransport: Promises to offer low latency similar to WebSockets, with the added benefits of leveraging the HTTP/3 protocol for more efficient multiplexing and congestion control. 
","version":"Next","tagName":"h3"},{"title":"Throughput","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#throughput","content":" WebSockets: Capable of high throughput due to its persistent connection, but throughput can suffer from backpressure where the client cannot process data as fast as the server is capable of sending it.Server-Sent Events: Efficient for broadcasting messages to many clients with less overhead than WebSockets, leading to potentially higher throughput for unidirectional server-to-client communication.Long-Polling: Generally offers lower throughput due to the overhead of frequently opening and closing connections, which consumes more server resources.WebTransport: Expected to support high throughput for both unidirectional and bidirectional streams within a single connection, outperforming WebSockets in scenarios requiring multiple streams. ","version":"Next","tagName":"h3"},{"title":"Scalability and Server Load","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#scalability-and-server-load","content":" WebSockets: Maintaining a large number of WebSocket connections can significantly increase server load, potentially affecting scalability for applications with many users.Server-Sent Events: More scalable for scenarios that primarily require updates from server to client, as it uses less connection overhead than WebSockets because it uses "normal" HTTP request without things like protocol updates that have to be run with WebSockets.Long-Polling: The least scalable due to the high server load generated by frequent connection establishment, making it suitable only as a fallback mechanism.WebTransport: Designed to be highly scalable, benefiting from HTTP/3's efficiency in handling connections and streams, potentially 
reducing server load compared to WebSockets and SSE. ","version":"Next","tagName":"h3"},{"title":"Recommendations and Use-Case Suitability","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#recommendations-and-use-case-suitability","content":" In the landscape of server-client communication technologies, each has its distinct advantages and use case suitability. Server-Sent Events (SSE) emerge as the most straightforward option to implement, leveraging the same HTTP/S protocols as traditional web requests, thereby circumventing corporate firewall restrictions and other technical problems that can appear with other protocols. They are easily integrated into Node.js and other server frameworks, making them an ideal choice for applications requiring frequent server-to-client updates, such as news feeds, stock tickers, and live event streaming. On the other hand, WebSockets excel in scenarios demanding ongoing, two-way communication. Their ability to support continuous interaction makes them the prime choice for browser games, chat applications, and live sports updates. However, WebTransport, despite its potential, faces adoption challenges. It is not widely supported by server frameworks including Node.js and lacks compatibility with Safari. Moreover, its reliance on HTTP/3 further limits its immediate applicability because many web servers like nginx only have experimental HTTP/3 support. While promising for future applications with its support for both reliable and unreliable data transmission, WebTransport is not yet a viable option for most use cases. Long-Polling, once a common technique, is now largely outdated due to its inefficiency and the high overhead of repeatedly establishing new HTTP connections. 
Although it may serve as a fallback in environments lacking support for WebSockets or SSE, its use is generally discouraged due to significant performance limitations. ","version":"Next","tagName":"h2"},{"title":"Known Problems","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#known-problems","content":" For all of the realtime streaming technologies, there are known problems. When you build anything on top of them, keep these in mind. ","version":"Next","tagName":"h2"},{"title":"A client can miss out events when reconnecting","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#a-client-can-miss-out-events-when-reconnecting","content":" When a client is connecting, reconnecting or offline, it can miss out events that happened on the server but could not be streamed to the client. These missed events are not relevant when the server is streaming the full content each time anyway, like on a live updating stock ticker. But when the backend is made to stream partial results, you have to account for missed events. Fixing that on the backend scales pretty badly because the backend would have to remember for each client which events have been successfully sent already. Instead this should be implemented with client-side logic. The RxDB replication protocol for example uses two modes of operation for that. One is the checkpoint iteration mode where normal HTTP requests are used to iterate over backend data, until the client is in sync again. Then it can switch to event observation mode where updates from the realtime-stream are used to keep the client in sync. Whenever a client disconnects or has any error, the replication briefly switches to checkpoint iteration mode until the client is in sync again. 
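The described switching between the two modes can be sketched in a few lines of plain JavaScript. This is a simplified illustration with an invented in-memory event log; serverLog, pullFromCheckpoint and Client are assumptions for this sketch, not RxDB's actual replication API:

```javascript
// Hypothetical in-memory backend: an append-only event log.
const serverLog = []; // each entry: { id: number, doc: object }
function serverPush(doc) {
  serverLog.push({ id: serverLog.length + 1, doc });
}

// checkpoint iteration mode: pull one batch of events after the checkpoint
function pullFromCheckpoint(checkpoint, batchSize) {
  const events = serverLog.filter(e => e.id > checkpoint).slice(0, batchSize);
  return {
    events,
    checkpoint: events.length ? events[events.length - 1].id : checkpoint
  };
}

class Client {
  constructor() {
    this.checkpoint = 0; // id of the last event this client has seen
    this.docs = [];
  }
  // run checkpoint iteration until the client is in sync again
  catchUp() {
    let batch;
    do {
      batch = pullFromCheckpoint(this.checkpoint, 10);
      batch.events.forEach(e => this.docs.push(e.doc));
      this.checkpoint = batch.checkpoint;
    } while (batch.events.length > 0);
  }
  // event observation mode: apply a streamed event, or fall back
  // to checkpoint iteration when a gap is detected after a reconnect
  onStreamEvent(event) {
    if (event.id !== this.checkpoint + 1) {
      this.catchUp(); // events were missed while disconnected
      return;
    }
    this.docs.push(event.doc);
    this.checkpoint = event.id;
  }
}
```

The gap check in onStreamEvent is what makes missed events harmless: any stream event whose id does not directly follow the local checkpoint triggers a catch-up pull instead of being applied blindly.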
This method accounts for missed events and ensures that clients can always sync to the exact same state as the server. ","version":"Next","tagName":"h3"},{"title":"Company firewalls can cause problems","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#company-firewalls-can-cause-problems","content":" There are many known problems with company infrastructure when using any of the streaming technologies. Proxies and firewalls can block traffic or unintentionally break requests and responses. Whenever you implement a realtime app in such an infrastructure, make sure you first test out if the technology itself works for you. ","version":"Next","tagName":"h3"},{"title":"Follow Up","type":1,"pageTitle":"WebSockets vs Server-Sent-Events vs Long-Polling vs WebRTC vs WebTransport","url":"/articles/websockets-sse-polling-webrtc-webtransport.html#follow-up","content":" Check out the hackernews discussion of this articleShare/Like my announcement tweetLearn how to use Server-Sent-Events to replicate a client-side RxDB database with your backend.Learn how to use RxDB with the RxDB QuickstartCheck out the RxDB github repo and leave a star ⭐ ","version":"Next","tagName":"h2"},{"title":"Dev Mode","type":0,"sectionRef":"#","url":"/dev-mode.html","content":"","keywords":"","version":"Next"},{"title":"Usage with Node.js","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#usage-with-nodejs","content":" async function createDb() { if (process.env.NODE_ENV !== "production") { await import('rxdb/plugins/dev-mode').then( module => addRxPlugin(module.RxDBDevModePlugin) ); } const db = createRxDatabase( /* ... 
*/ ); } ","version":"Next","tagName":"h2"},{"title":"Usage with Angular","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#usage-with-angular","content":" import { isDevMode } from '@angular/core'; async function createDb() { if (isDevMode()){ await import('rxdb/plugins/dev-mode').then( module => addRxPlugin(module.RxDBDevModePlugin) ); } const db = createRxDatabase( /* ... */ ); // ... } ","version":"Next","tagName":"h2"},{"title":"Usage with webpack","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#usage-with-webpack","content":" In the webpack.config.js: module.exports = { entry: './src/index.ts', /* ... */ plugins: [ // set a global variable that can be accessed during runtime new webpack.DefinePlugin({ MODE: JSON.stringify("production") }) ] /* ... */ }; In your source code: declare var MODE: 'production' | 'development'; async function createDb() { if (MODE === 'development') { await import('rxdb/plugins/dev-mode').then( module => addRxPlugin(module.RxDBDevModePlugin) ); } const db = createRxDatabase( /* ... */ ); // ... } ","version":"Next","tagName":"h2"},{"title":"Disable the dev-mode warning","type":1,"pageTitle":"Dev Mode","url":"/dev-mode.html#disable-the-dev-mode-warning","content":" When the dev-mode is enabled, it will print a console.warn() message to the console so that you do not accidentally use the dev-mode in production. To disable this warning you can call the disableWarnings() function. import { disableWarnings } from 'rxdb/plugins/dev-mode'; disableWarnings(); ","version":"Next","tagName":"h2"},{"title":"RxDB CRDT Plugin (beta)","type":0,"sectionRef":"#","url":"/crdt.html","content":"","keywords":"","version":"Next"},{"title":"RxDB CRDT operations","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#rxdb-crdt-operations","content":" In RxDB, a CRDT operation is defined with NoSQL update operators, like you might know them from MongoDB update operations or the RxDB update plugin. 
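The core idea behind such operation-based CRDTs is that every replica stores the operations themselves and rebuilds the document state by applying them in a deterministic order. Here is a hedged sketch in plain JavaScript (invented helper names, only $inc and $set supported, not RxDB's real implementation) showing why two replicas that receive the same operations in a different order still converge:

```javascript
// Apply a single NoSQL-style update operation to a document (pure function).
function applyOperator(doc, operation) {
  const next = { ...doc };
  for (const [field, amount] of Object.entries(operation.$inc || {})) {
    next[field] = (next[field] || 0) + amount;
  }
  for (const [field, value] of Object.entries(operation.$set || {})) {
    next[field] = value;
  }
  return next;
}

// Rebuild the document state from all known operations, sorted by a
// deterministic key (here: a logical time, with the client id as tie-breaker).
function buildDocState(operations) {
  return operations
    .slice()
    .sort((a, b) => a.time - b.time || a.clientId.localeCompare(b.clientId))
    .reduce((doc, op) => applyOperator(doc, op.body), {});
}

// Two replicas received the same operations in a different arrival order:
const opsA = [
  { time: 1, clientId: 'alice', body: { $inc: { points: 1 } } },
  { time: 2, clientId: 'bob', body: { $set: { modified: true } } }
];
const opsB = [opsA[1], opsA[0]]; // reversed arrival order
// buildDocState(opsA) and buildDocState(opsB) compute the identical state
```

Because the sort key is the same on every replica, the reduce step applies the operations in the same order everywhere, which is what makes the resulting state conflict-free.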
To run the operators, RxDB uses the mingo library. A CRDT operator example: const myCRDTOperation = { // increment the points field by +1 $inc: { points: 1 }, // set the modified field to true $set: { modified: true } }; ","version":"Next","tagName":"h2"},{"title":"Operators","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#operators","content":" At the moment, not all possible operators are implemented in mingo. If you need additional ones, you should make a pull request there. The following operators can be used at this point in time: $min$max$inc$set$unset$push$addToSet$pop$pullAll$rename For the exact definition on how each operator behaves, check out the MongoDB documentation on update operators. ","version":"Next","tagName":"h3"},{"title":"Installation","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#installation","content":" To use CRDTs with RxDB, you need the following: Add the CRDT plugin via addRxPlugin.Add a field to your schema that defines where to store the CRDT operations via getCRDTSchemaPart()Set the crdt options in your schema.Do NOT set a custom conflict handler, the plugin will use its own. 
// import the relevant parts from the CRDT plugin import { getCRDTSchemaPart, RxDBcrdtPlugin } from 'rxdb/plugins/crdt'; // add the CRDT plugin to RxDB import { addRxPlugin } from 'rxdb'; addRxPlugin(RxDBcrdtPlugin); // create a database import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const myDatabase = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie() }); // create a schema with the CRDT options const mySchema = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, points: { type: 'number', maximum: 100, minimum: 0 }, crdts: getCRDTSchemaPart() // use this field to store the CRDT operations }, required: ['id', 'points'], crdt: { // CRDT options field: 'crdts' } } // add a collection await myDatabase.addCollections({ users: { schema: mySchema } }); // insert a document const myDocument = await myDatabase.users.insert({id: 'alice', points: 0}); // run a CRDT operation that increments the 'points' by one await myDocument.updateCRDT({ ifMatch: { $inc: { points: 1 } } }); ","version":"Next","tagName":"h2"},{"title":"Conditional CRDT operations","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#conditional-crdt-operations","content":" By default, all CRDT operations will be run to build the current document state. But in many cases, more granular operations are required to better reflect the desired business logic. For these cases, conditional CRDTs can be used. For example if you have a field points with a maximum of 100, you might want to only run an $inc operation if the points value is less than 100. In a conditional CRDT, you can specify a selector and the operation sets ifMatch and ifNotMatch. Each time the CRDT is applied to the document state, the selector runs first and evaluates which operation path must be used. 
await myDocument.updateCRDT({ // only if the selector matches, the ifMatch operation will run selector: { points: { $lt: 100 } }, // an operation that runs if the selector matches ifMatch: { $inc: { points: 1 } }, // if the selector does NOT match, you could run a different operation instead ifNotMatch: { // ... } }); ","version":"Next","tagName":"h2"},{"title":"Running multiple operations at once","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#running-multiples-operations-at-once","content":" By default, one CRDT operation is applied to the document in a single database write. To represent more complex logic chains, it might make sense to use multiple CRDTs and write them at once inside of a single atomic document write. For these cases, the updateCRDT() method allows passing an array of operations. await myDocument.updateCRDT([ { selector: { /** ... **/ }, ifMatch: { /** ... **/ } }, { selector: { /** ... **/ }, ifMatch: { /** ... **/ } }, { selector: { /** ... **/ }, ifMatch: { /** ... **/ } }, { selector: { /** ... **/ }, ifMatch: { /** ... **/ } } ]); ","version":"Next","tagName":"h2"},{"title":"CRDTs on inserts","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#crdts-on-inserts","content":" When CRDTs are enabled with the plugin, all insert operations are automatically mapped to a CRDT operation with the $set operator. // Calling RxCollection.insert() await myRxCollection.insert({ id: 'foo', points: 1 }); // is exactly equal to calling insertCRDT() await myRxCollection.insertCRDT({ ifMatch: { $set: { id: 'foo', points: 1 } } }); When the same document is inserted in multiple client instances and then replicated, a conflict will emerge and the insert-CRDTs will overwrite each other in a deterministic order. You can use insertCRDT() to make conditional insert operations with any logic. To check for the previous existence of a document, use the $exists query operation on the primary key of the document. 
await myRxCollection.insertCRDT({ selector: { // only run if the document did not exist before. id: { $exists: false } }, ifMatch: { // if the document did not exist, insert it $set: { id: 'foo', points: 1 } }, ifNotMatch: { // if document existed already, increment the points by +1 $inc: { points: 1 } } }); ","version":"Next","tagName":"h2"},{"title":"Deleting documents","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#deleting-documents","content":" You can delete a document with a CRDT operation by setting _deleted to true. Calling RxDocument.remove() will do exactly the same when CRDTs are activated. await doc.updateCRDT({ ifMatch: { $set: { _deleted: true } } }); // OR await doc.remove(); ","version":"Next","tagName":"h2"},{"title":"CRDTs with replication","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#crdts-with-replication","content":" CRDT operations are stored in a special field beside your 'normal' document fields. When replicating document data with the RxDB replication or the CouchDB replication or even any custom replication, the CRDT operations must be replicated together with the document data as if they were a 'normal' document property. When any instance writes to the document, it is required to update the CRDT operations accordingly. For example if your custom backend updates a document, it must also do that by adding a CRDT operation. In dev-mode RxDB will refuse to store any document data where the document properties do not match the result of the CRDT operations. ","version":"Next","tagName":"h2"},{"title":"Why not automerge.js or yjs?","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#why-not-automergejs-or-yjs","content":" There are already CRDT libraries out there that have been considered to be used with RxDB. The biggest ones are automerge and yjs. 
The decision was made to not use these but instead go for a more NoSQL way of designing the CRDT format because: Users do not have to learn a new syntax but instead can use the NoSQL query operations which they already know to manipulate the JSON data of a document.RxDB is often used to replicate data with any custom backend on an already existing infrastructure. Using NoSQL operators instead of binary data in CRDTs makes it easy to implement the exact same logic on these backends so that the backend can also do document writes and still be compliant with the RxDB CRDT plugin. So instead of using YJS or Automerge with a database, you can use RxDB with the CRDT plugin to have a more database-specific CRDT approach. This gives you additional features for free such as schema validation or data migration. ","version":"Next","tagName":"h2"},{"title":"When to not use CRDTs","type":1,"pageTitle":"RxDB CRDT Plugin (beta)","url":"/crdt.html#when-to-not-use-crdts","content":" CRDTs can only be used when your business logic allows representing document changes via static JSON operators. If you can have cases where user interaction is required to correctly merge conflicting document states, you cannot use CRDTs for that. Also when CRDTs are used, it is no longer allowed to do non-CRDT writes to the document properties. ","version":"Next","tagName":"h2"},{"title":"Downsides of Local First / Offline First","type":0,"sectionRef":"#","url":"/downsides-of-offline-first.html","content":"","keywords":"","version":"Next"},{"title":"It only works with small datasets","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#it-only-works-with-small-datasets","content":" Making data available offline means it must be loaded from the server and then stored on the client's device. You need to load the full dataset on the first pageload and on every subsequent load you need to download the new changes to that set. 
While in theory you could download an infinite amount of data, in practice you have a limit on how long the user can wait before having an up-to-date state. You want to display chat messages like Whatsapp? No problem. Syncing all the messages a user could write can be done with a few HTTP requests. Want to make a tool that displays server logs? Good luck downloading terabytes of data to the client just to search for a single string. This will not work. Besides the network usage, there is another limit for the size of your data. In browsers you have some options for storage: Cookies, Localstorage, WebSQL and IndexedDB. Because Cookies and Localstorage are slow and WebSQL is deprecated, you will use IndexedDB. The limit of how much data you can store in IndexedDB depends on two factors: which browser is used and how much disk space is left on the device. You can assume that at least a couple of hundred megabytes are available. The maximum is potentially hundreds of gigabytes or more, but the browser implementations vary. Chrome allows each origin to use up to 60% of the total disk space. Firefox allows up to 50%. But on Safari you can only store up to 1GB and the browser will prompt the user on each additional 200MB increment. The problem is that you have no chance to really predict how much data can be stored. So you have to make assumptions that are hopefully true for all of your users. Also, you have no way to increase that space like you would add another hard drive to your backend server. Once your clients reach the limit, you likely have to rewrite big parts of your applications. UPDATE (2023): Newer versions of browsers can store way more data, for example Firefox stores up to 10% of the total disk size. 
For an overview about how much can be stored, read this guide ","version":"Next","tagName":"h2"},{"title":"Browser storage is not really persistent","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#browser-storage-is-not-really-persistent","content":" When data is stored inside IndexedDB or one of the other storage APIs, it cannot be trusted to stay there forever. Apple for example deletes the data when the website has not been used in the last 7 days. The other browsers also have logic to clean up the stored data, and in the end the users themselves could be the ones that delete the browser's local data. The most common way to handle this is to replicate everything from the backend to the client again. Of course, this does not work for state that is not stored at the backend. So if you assume you can store the user's private data inside the browser in a secure way, you are wrong. ","version":"Next","tagName":"h2"},{"title":"There can be conflicts","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#there-can-be-conflicts","content":" Imagine two of your users modify the same JSON document, while both are offline. After they go online again, their clients replicate the modified document to the server. Now you have two conflicting versions of the same document, and you need a way to determine what the correct new version of that document should look like. This process is called conflict resolution. The default in many offline first databases is a deterministic conflict resolution strategy. Both conflicting versions of the document are kept in the storage and when you query for the document, a winner is determined by comparing the hashes of the document and only the winning document is returned. Because the comparison is deterministic, all clients and servers will always pick the same winner. 
This kind of resolution only works when it is not that important that one of the document changes gets dropped. Because conflicts are rare, this might be a viable solution for some use cases. A better resolution can be applied by listening to the changestream of the database. The changestream emits an event each time a write happens to the database. The event contains information about the written document and also a flag that indicates whether there is a conflicting version. For each event with a conflict, you fetch all versions for that document and create a new document that contains the winning state. With that you can implement pretty complex conflict resolution strategies, but you have to manually code it for each collection of documents. Instead of solving the conflict at every client, it can be made a bit easier by solely relying on the backend. This can be done when all of your clients replicate with the same single backend server. With RxDB's GraphQL Replication each client side change is sent to the server where conflicts can be resolved and the winning document can be sent back to the clients. Sometimes there is no way to solve a conflict with code. If your users edit text based documents or images, often only the users themselves can decide how the winning revision has to look. For these cases, you have to implement complex UI parts where the users can inspect the conflict and manage its resolution. You do not have to handle conflicts if they cannot happen in the first place. You can achieve that by designing a write only database where existing documents cannot be touched. Instead of storing the current state in a single document, you store all the events that lead to the current state. Sometimes called the "everything is a delta" strategy, others would call it Event Sourcing. Like an accountant that does not need an eraser, you append all changes and afterwards aggregate the current state at the client. 
// create one new document for each change to the user's balance {id: new Date().toJSON(), change: 100} // balance increased by $100 {id: new Date().toJSON(), change: -50} // balance decreased by $50 {id: new Date().toJSON(), change: 200} // balance increased by $200 There is this thing called conflict-free replicated data type, CRDT for short. Using a CRDT library like automerge will magically solve all of your conflict problems. Until you use it in production where you observe that implementing CRDTs has basically the same complexity as implementing conflict resolution strategies. ","version":"Next","tagName":"h2"},{"title":"Realtime is a lie","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#realtime-is-a-lie","content":" So you replicate stuff between the clients and your backend. Each change on one side directly changes the state of the other sides in realtime. But this "realtime" is not the same as in realtime computing. In the offline first world, the word realtime was introduced by Firebase and is more meant as a marketing slogan than a technical description. There is an internet between your backend and your clients and everything you do on one machine takes at least the network latency until it can affect anything on the other machines. You have to keep this in mind when you develop anything where the timing is important, like a multiplayer game or a stock trading app. Even when you run a query against the local database, there is no "real" realtime. Client side databases run on JavaScript and JavaScript runs on a single CPU that might be partially blocked because the user is running some background processes. So you can never guarantee a response deadline, which violates the time constraint of realtime computing. 
","version":"Next","tagName":"h2"},{"title":"Eventual consistency","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#eventual-consistency","content":" An offline first app does not have a single source of truth. There is a source on the backend, one on the own client, and also each other client has its own definition of truth. At the moment your user starts the app, the local state is hopefully already replicated with the backend and all other clients. But this does not have to be true, the states can have converged and you have to plan for that. The user could update a document based on wrong assumptions because it was not fully replicated at that point in time because the user is offline. A good way to handle this problem is to show the replication state in the UI and tell the user when the replication is running, stopped, paused or finished. And some data is just too important to be "eventual consistent". Create a wire transfer in your online banking app while you are offline. You keep the smartphone laying at your night desk and when you use again in the next morning, it goes online and replicates the transaction. No thank you, do not use offline first for these kind of things, or at least you have to display the replication state of each document in the UI. ","version":"Next","tagName":"h2"},{"title":"Permissions and authentication","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#permissions-and-authentication","content":" Every offline first app that goes beyond a prototype, does likely not have the same global state for all of its users. Each user has a different set of documents that are allowed to be replicated or seen by the user. So you need some kind of authentication and permission handling to divide the documents. The easy way is to just create one database for each user on the backend and only allow to replicate that one. 
Creating that many databases is not really a problem with, for example, CouchDB, and it makes permission handling easy. But as soon as you want to query all of your data in the backend, it will bite back. Your data is not at a single place, it is distributed between all of the user specific databases. This becomes even more complex as soon as you store information together with the documents that is not allowed to be seen by outsiders. You not only have to decide which documents to replicate, but also which fields of them. So what you really want is a single datastore in the backend and then replicate only the allowed document parts to each of the users. This always requires you to implement your custom replication endpoint like what you do with RxDB's GraphQL Replication. ","version":"Next","tagName":"h2"},{"title":"You have to migrate the client database","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#you-have-to-migrate-the-client-database","content":" While developing your app, sooner or later you want to change the data layout. You want to add some new fields to documents or change the format of them. So you have to update the database schema and also migrate the stored documents. With 'normal' applications, this is already hard enough and often dangerous. You wait until midnight, stop the webserver, make a database backup, deploy the new schema and then you hope that nothing goes wrong while it updates that many documents. With offline first applications, it is even more fun. You do not only have to migrate your local backend database, you also have to provide a migration strategy for all of these client databases out there. And you also cannot migrate everything at the same time. The clients can only migrate when the new code has been updated from the app store or the user visited your website again. This could be today or in a few weeks. 
","version":"Next","tagName":"h2"},{"title":"Performance is not native","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#performance-is-not-native","content":" When you create a web based offline first app, you cannot store data directly on the users filesystem. In fact there are many layers between your JavaScript code and the filesystem of the operation system. Let's say you insert a document in RxDB: You call the RxDB API to validate and store the dataRxDB calls the underlying RxStorage, for example PouchDB.Pouchdb calls its underlying storage adapterThe storage adapter calls IndexedDBThe browser runs its internal handling of the IndexedDB APIIn most browsers IndexedDB is implemented on top of SQLiteSQLite calls the OS to store the data in the filesystem All these layers are abstractions. They are not build for exactly that one use case, so you lose some performance to tunnel the data through the layer itself, and you also lose some performance because the abstraction does not exactly provide the functions that are needed by the layer above and it will overfetch data. You will not find a benchmark comparison between how many transactions per second you can run on the browser compared to a server based database. Because it makes no sense to compare them. Browsers are slower, JavaScript is slower. Is it fast enough? What you really care about is "Is it fast enough?". For most use cases, the answer is yes. Offline first apps are UI based and you do not need to process a million transactions per second, because your user will not click the save button that often. "Fast enough" means that the data is processed in under 16 milliseconds so that you can render the updated UI in the next frame. This is of course not true for all use cases, so you better think about the performance limit before starting with the implementation. 
","version":"Next","tagName":"h2"},{"title":"Nothing is predictable","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#nothing-is-predictable","content":" You have a PostgreSQL database and run a query over 1000ths of rows, which takes 200 milliseconds. Works great, so you now want to do something similar at the client device in your offline first app. How long does it take? You cannot know because people have different devices, and even equal devices have different things running in the background that slow the CPUs. So you cannot predict performance and as described above, you cannot even predict the storage limit. So if your app does heavy data analytics, you might better run everything on the backend and just send the results to the client. ","version":"Next","tagName":"h2"},{"title":"There is no relational data","type":1,"pageTitle":"Downsides of Local First / Offline First","url":"/downsides-of-offline-first.html#there-is-no-relational-data","content":" I started creating RxDB many years ago and while still maintaining it, I often worked with all these other offline first databases out there. RxDB and all of these other ones, are based on some kind of document databases similar to NoSQL. Often people want to have a relational database like the SQL one they use at the backend. So why are there no real relations in offline first databases? I could answer with these arguments like how JavaScript works better with document based data, how performance is better when having no joins or even how NoSQL queries are more composable. But the truth is, everything is NoSQL because it makes replication easy. An SQL query that mutates data in different tables based on some selects and joins, cannot be partially replicated without breaking the client. You have foreign keys that point to other rows and if these rows are not replicated yet, you have a problem. 
To implement a robust replication protocol for relational data, you need some stuff like a reliable atomic clock and you have to block queries over multiple tables while a transaction is replicated. Watch this guy implementing offline first replication on top of SQLite or read this discussion about implementing offline first in supabase. So creating replication for an SQL offline first database is way more work than just adding some network protocols on top of PostgreSQL. It might not even be possible for clients that have no reliable clock. ","version":"Next","tagName":"h2"},{"title":"Install","type":0,"sectionRef":"#","url":"/install.html","content":"","keywords":"","version":"Next"},{"title":"npm","type":1,"pageTitle":"Install","url":"/install.html#npm","content":" To install the latest release of rxdb and its dependencies and save it to your package.json, run: npm i rxdb --save ","version":"Next","tagName":"h2"},{"title":"peer-dependency","type":1,"pageTitle":"Install","url":"/install.html#peer-dependency","content":" You also need to install the peer-dependency rxjs if you have not installed it before. npm i rxjs --save ","version":"Next","tagName":"h2"},{"title":"polyfills","type":1,"pageTitle":"Install","url":"/install.html#polyfills","content":" RxDB is coded with es8 and transpiled to es5. This means you have to install polyfills to support older browsers. For example you can use the babel-polyfills with: npm i @babel/polyfill --save If you need polyfills, you have to import them in your code. import '@babel/polyfill'; ","version":"Next","tagName":"h2"},{"title":"Polyfill the global variable","type":1,"pageTitle":"Install","url":"/install.html#polyfill-the-global-variable","content":" When you use RxDB with angular or other webpack based frameworks, you might get the error Uncaught ReferenceError: global is not defined. This is because some dependencies of RxDB assume a Node.js-specific global variable that is not added to browser runtimes by some bundlers. 
You have to add them on your own, like we do here. (window as any).global = window; (window as any).process = { env: { DEBUG: undefined }, }; ","version":"Next","tagName":"h2"},{"title":"Project Setup and Configuration","type":1,"pageTitle":"Install","url":"/install.html#project-setup-and-configuration","content":" In the examples folder you can find CI tested projects for different frameworks and use cases, while in the /config folder base configuration files for Webpack, Rollup, Mocha, Karma, Typescript are exposed. Consult package.json for the versions of the packages supported. ","version":"Next","tagName":"h2"},{"title":"Latest","type":1,"pageTitle":"Install","url":"/install.html#latest","content":" If you need the latest development state of RxDB, add it as git-dependency into your package.json. "dependencies": { "rxdb": "git+https://[email protected]/pubkey/rxdb.git#commitHash" } Replace commitHash with the hash of the latest build-commit. ","version":"Next","tagName":"h2"},{"title":"Import","type":1,"pageTitle":"Install","url":"/install.html#import","content":" To import rxdb, add this to your JavaScript file to import the default bundle that contains the RxDB core: import { createRxDatabase, /* ... */ } from 'rxdb'; ","version":"Next","tagName":"h2"},{"title":"Electron Plugin","type":0,"sectionRef":"#","url":"/electron.html","content":"","keywords":"","version":"Next"},{"title":"RxStorage Electron IpcRenderer & IpcMain","type":1,"pageTitle":"Electron Plugin","url":"/electron.html#rxstorage-electron-ipcrenderer--ipcmain","content":" To use RxDB in electron, it is recommended to run the RxStorage in the main process and the RxDatabase in the renderer processes. With the rxdb electron plugin you can create a remote RxStorage and consume it from the renderer process. To do this in a convenient way, the RxDB electron plugin provides the helper functions exposeIpcMainRxStorage and getRxStorageIpcRenderer. 
Similar to the Worker RxStorage, these wrap any other RxStorage once in the main process and once in each renderer process. In the renderer you can then use the storage to create a RxDatabase which communicates with the storage of the main process to store and query data. note nodeIntegration must be enabled in Electron. // main.js const { exposeIpcMainRxStorage } = require('rxdb/plugins/electron'); const { getRxStorageMemory } = require('rxdb/plugins/storage-memory'); app.on('ready', async function () { exposeIpcMainRxStorage({ key: 'main-storage', storage: getRxStorageMemory(), ipcMain: electron.ipcMain }); }); // renderer.js const { getRxStorageIpcRenderer } = require('rxdb/plugins/electron'); const { getRxStorageMemory } = require('rxdb/plugins/storage-memory'); const db = await createRxDatabase({ name, storage: getRxStorageIpcRenderer({ key: 'main-storage', ipcRenderer: electron.ipcRenderer }) }); /* ... */ ","version":"Next","tagName":"h2"},{"title":"Related","type":1,"pageTitle":"Electron Plugin","url":"/electron.html#related","content":" Comparison of Electron Databases ","version":"Next","tagName":"h2"},{"title":"Key Compression","type":0,"sectionRef":"#","url":"/key-compression.html","content":"","keywords":"","version":"Next"},{"title":"Enable key compression","type":1,"pageTitle":"Key Compression","url":"/key-compression.html#enable-key-compression","content":" The key compression plugin is a wrapper around any other RxStorage. You first have to wrap your RxStorage with the key compression pluginThen use that as RxStorage when calling createRxDatabase()Then you have to enable the key compression by adding keyCompression: true to your collection schema. 
import { wrappedKeyCompressionStorage } from 'rxdb/plugins/key-compression'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const storageWithKeyCompression = wrappedKeyCompressionStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: 'mydatabase', storage: storageWithKeyCompression }); const mySchema = { keyCompression: true, // set this to true, to enable the keyCompression version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength } /* ... */ } }; /* ... */ ","version":"Next","tagName":"h2"},{"title":"🔒 Encrypted Local Storage with RxDB","type":0,"sectionRef":"#","url":"/encryption.html","content":"","keywords":"","version":"Next"},{"title":"Querying encrypted data","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#querying-encrypted-data","content":" RxDB handles the encryption and decryption of data internally. This means that when you work with a RxDocument, you can access the properties of the document just like you would with normal, unencrypted data. RxDB automatically decrypts the data for you when you retrieve it, making it transparent to your application code. This means the encryption works with all RxStorage like SQLite, IndexedDB, OPFS and so on. However, there's a limitation when it comes to querying encrypted fields. Encrypted fields cannot be used as operators in queries. This means you cannot perform queries like "find all documents where the encrypted field equals a certain value." RxDB does not expose the encrypted data in a way that allows direct querying based on the encrypted content. To filter or search for documents based on the contents of encrypted fields, you would need to first decrypt the data and then perform the query, which might not be efficient or practical in some cases. 
You could however use the memory synced RxStorage to replicate the encrypted documents into a non-encrypted in-memory storage and then query them like normal. ","version":"Next","tagName":"h2"},{"title":"Password handling","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#password-handling","content":" RxDB does not define how you should store or retrieve the encryption password. It only requires you to provide the password on database creation which grants you flexibility in how you manage encryption passwords. You could ask the user on app-start to insert the password, or you can retrieve the password from your backend on app start (or revoke access by no longer providing the password). ","version":"Next","tagName":"h2"},{"title":"Asymmetric encryption","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#asymmetric-encryption","content":" The encryption plugin itself uses symmetric encryption with a password to guarantee best performance when reading and storing data. It is not able to do asymmetric encryption by itself. If you need asymmetric encryption with a private/public key, it is recommended to encrypt the password itself with the asymmetric keys and store the encrypted password beside the other data. On app-start you can decrypt the password with the private key and use the decrypted password in the RxDB encryption plugin. ","version":"Next","tagName":"h2"},{"title":"Using the RxDB Encryption Plugins","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#using-the-rxdb-encryption-plugins","content":" RxDB currently has two plugins for encryption: The free encryption-crypto-js plugin that is based on the AES algorithm of the crypto-js libraryThe 👑 premiumencryption-web-crypto plugin that is based on the native Web Crypto API which makes it faster and more secure to use. 
Document inserts are about 10x faster compared to crypto-js and it has a smaller build size because it uses the browser's API instead of bundling an npm module. An RxDB encryption plugin is a wrapper around any other RxStorage. You first have to wrap your RxStorage with the encryption plugin.Then use that as RxStorage when calling createRxDatabase()Also you have to set a password when creating the database. The format of the password depends on which encryption plugin is used.To define a field as being encrypted, you have to add it to the encrypted fields list in the schema. import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the normal storage with the encryption plugin const encryptedDexieStorage = wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageDexie() }); // create an encrypted database const db = await createRxDatabase({ name: 'mydatabase', storage: encryptedDexieStorage, password: 'sudoLetMeIn' }); const schema = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 }, secret: { type: 'string' }, }, required: ['id'], encrypted: ['secret'] }; await db.addCollections({ myDocuments: { schema } }) /* ... */ Or with the web-crypto👑 premium plugin: import { wrappedKeyEncryptionWebCryptoStorage, createPassword } from 'rxdb-premium/plugins/encryption-web-crypto'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; // wrap the normal storage with the encryption plugin const encryptedIndexedDbStorage = wrappedKeyEncryptionWebCryptoStorage({ storage: getRxStorageIndexedDB() }); const myPasswordObject = { // Algorithm can be oneOf: 'AES-CTR' | 'AES-CBC' | 'AES-GCM' algorithm: 'AES-CTR', password: 'myRandomPasswordWithMin8Length' }; // create an encrypted database const db = await createRxDatabase({ name: 'mydatabase', storage: encryptedIndexedDbStorage, password: myPasswordObject }); /* ... 
*/ ","version":"Next","tagName":"h2"},{"title":"Changing the password","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#changing-the-password","content":" The password is set database specific and it is not possible to change the password of a database. Opening an existing database with a different password will throw an error. To change the password you can either: Use the storage migration plugin to migrate the database state into a new database.Store a randomly created meta-password in a different RxDatabase as a value of a local document. Encrypt the meta password with the actual user password and read it out before creating the actual database. ","version":"Next","tagName":"h2"},{"title":"Encrypted attachments","type":1,"pageTitle":"🔒 Encrypted Local Storage with RxDB","url":"/encryption.html#encrypted-attachments","content":" To store the attachments data encrypted, you have to set encrypted: true in the attachments property of the schema. const mySchema = { version: 0, type: 'object', properties: { /* ... */ }, attachments: { encrypted: true // if true, the attachment-data will be encrypted with the db-password } }; ","version":"Next","tagName":"h2"},{"title":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","type":0,"sectionRef":"#","url":"/electron-database.html","content":"","keywords":"","version":"Next"},{"title":"Databases for Electron","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#databases-for-electron","content":" An Electron runtime can be divided in two parts: The "main" process which is a Node.js JavaScript process that runs without a UI in the background.One or multiple "renderer" processes that consist of a Chrome browser engine and runs the user interface. Each renderer process represents one "browser tab". 
This is important to understand because choosing the right database depends on your use case and on which of these JavaScript runtimes you want to keep the data. ","version":"Next","tagName":"h2"},{"title":"Server Side Databases in Electron.js","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#server-side-databases-in-electronjs","content":" Because Electron runs on a desktop computer, you might think that it should be possible to use a common "server" database like MySQL, PostgreSQL or MongoDB. In theory you could ship the correct database server binaries with your Electron application and start a process on the client's device which exposes a port to the database that can be consumed by Electron. In practice this is not a viable way to go because shipping the correct binaries and opening ports is way too complicated and troublesome. Instead you should use a database that can be bundled and run inside of Electron, either in the main or in the renderer process. ","version":"Next","tagName":"h3"},{"title":"Localstorage / IndexedDB / WebSQL as alternatives to SQLite in Electron","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#localstorage--indexeddb--websql-as-alternatives-to-sqlite-in-electron","content":" Because Electron uses a common Chrome web browser in the renderer process, you can access the common Web Storage APIs like Localstorage, IndexedDB and WebSQL. This is easy to set up and storing small sets of data can be achieved in a short span of time. But as soon as your application goes beyond a simple TODO-app, there are multiple obstacles that come in your way. One thing is the bad multi-tab support. If you have more than one renderer process, it becomes hard to manage database writes between them. 
Each browser tab could modify the database state while the others do not know of the changes and keep an outdated UI. Another thing is performance. IndexedDB is slow mostly because it has to go through layers of browser security and abstractions. Storing and querying large amounts of data might become your performance bottleneck. Localstorage and WebSQL are even slower by the way. Using these Web Storage APIs is generally only recommended when you know for sure that there will always be only one rendering process and performance is not that relevant. The main reason for that is the security and abstraction layers that writes and reads have to go through when using the browser's IndexedDB API. So instead of using IndexedDB in Electron in the renderer process, you should use something that runs in the "main" process in Node.js like the Filesystem RxStorage or the In Memory RxStorage. ","version":"Next","tagName":"h3"},{"title":"RxDB","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#rxdb","content":" RxDB is a NoSQL database for JavaScript applications. It has many features that come in handy when RxDB is used with UI-based applications like your Electron app. For example it is able to subscribe to query results or single fields of documents. It has encryption and compression features and, most importantly, it has a battle-tested replication protocol that can be used to do a realtime sync with your backend. Because of the flexible storage layer of RxDB, there are many options on how to use it with Electron: the memory RxStorage that stores the data inside of the JavaScript memory without persistence, the SQLite RxStorage, the PouchDB RxStorage with the SQLite adapter mentioned above, the IndexedDB RxStorage, the Dexie.js RxStorage, or the Node.js Filesystem RxStorage. It is recommended to use the SQLite RxStorage because it has the best performance and is the easiest to set up. 
However it is part of the 👑 Premium Plugins which must be purchased, so to try out RxDB with Electron, you might want to use one of the other options. To start with RxDB, I would recommend using the Dexie.js RxStorage in the renderer processes. Because RxDB is able to broadcast the database state between browser tabs, having multiple renderer processes is not a problem like it would be when you use plain IndexedDB without RxDB. In production you would always run the RxStorage in the main process with the RxStorage Electron IpcRenderer & IpcMain plugins. First you have to install all dependencies via npm install rxdb rxjs. Then you can assemble the RxStorage and create a database with it: import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // create database const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie() }); // create collections const collections = await db.addCollections({ humans: { /* ... */ } }); // insert document await collections.humans.insert({id: 'foo', name: 'bar'}); // run a query const result = await collections.humans.find({ selector: { name: 'bar' } }).exec(); // observe a query collections.humans.find({ selector: { name: 'bar' } }).$.subscribe(result => {/* ... */}); For better performance in the renderer tab, you can later switch to the IndexedDB RxStorage. But in production it is recommended to use the SQLite RxStorage or the Filesystem RxStorage in the main process so that database operations do not block the rendering of the UI. To learn more about using RxDB with Electron, you might want to check out this example project. 
","version":"Next","tagName":"h3"},{"title":"SQLite in Electron.js without RxDB","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#sqlite-in-electronjs-without-rxdb","content":" SQLite is a SQL based relational database written in the C programming language that was crafted to be embed inside of applications and stores data locally. Operations are written in the SQL query language similar to the PostgreSQL syntax. Using SQLite in Electron is not possible in the renderer process, only in the main process. To communicate data operations between your main and your renderer processes, you have to use either @electron/remote (not recommended) or the ipcRenderer (recommended). So you start up SQLite in your main process and whenever you want to read or write data, you send the SQL queries to the main process and retrieve the result back as JSON data. To install SQLite, use the SQLite3 package which is a native Node.js module. Also you need the @electron/rebuild package to rebuild the SQLite module against the currently installed Electron version. Install them with npm install sqlite3 @electron/rebuild. 
Then you can rebuild SQLite with ./node_modules/.bin/electron-rebuild -f -w sqlite3. In the JavaScript code of your main process you can now create a database: const sqlite3 = require('sqlite3'); const db = new sqlite3.Database('/path/to/database/file.db'); // create a table and insert a row db.serialize(() => { db.run("CREATE TABLE Users (name, lastName)"); db.run("INSERT INTO Users VALUES (?, ?)", ['foo', 'bar']); }); Also you have to set up an ipcMain handler so that messages from the renderer process are handled: ipcMain.handle('db-query', async (event, sqlQuery) => { return new Promise(res => { db.all(sqlQuery, (err, rows) => { res(rows); }); }); }); In your renderer process you can now call the ipcHandler and fetch data from SQLite: const rows = await ipcRenderer.invoke('db-query', "SELECT * FROM Users"); The downside of SQLite (or SQL in general) is that it is lacking many features that are handy when using a database together with UI-based applications. It is not possible to observe queries or document fields and there is no replication method to sync data with a server. This makes SQLite a good solution when you just want to store data on the client or process expensive SQL queries on the server, but it is not suitable for more complex operations like two-way replication, encryption, compression and so on. Also developer helpers like TypeScript type safety are totally out of reach. ","version":"Next","tagName":"h3"},{"title":"Follow up","type":1,"pageTitle":"Electron Database - RxDB with different storage for SQLite, Filesystem and In-Memory","url":"/electron-database.html#follow-up","content":" Learn how to use RxDB as a database in Electron with the Quickstart Tutorial. Check out the RxDB Electron example. There is a follow-up list of other client side database alternatives that you can try to use with Electron. 
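The core of the ipcMain handler shown above is wrapping sqlite3's callback-style db.all into a Promise. That wrapping can be sketched and run in isolation; fakeDb and queryAsPromise below are hypothetical stand-ins for illustration, not part of the sqlite3 or Electron APIs, and the sketch also rejects on query errors, which the minimal handler above omits:

```javascript
// Hypothetical stand-in for the sqlite3 Database object, so the
// promise-wrapping can be demonstrated without a real database.
const fakeDb = {
  all(sqlQuery, callback) {
    // pretend every query yields two user rows
    setImmediate(() => callback(null, [{ name: 'foo' }, { name: 'bar' }]));
  }
};

// Wrap a callback-style `db.all` into a Promise, including
// rejection when the query fails.
function queryAsPromise(db, sqlQuery) {
  return new Promise((resolve, reject) => {
    db.all(sqlQuery, (err, rows) => {
      if (err) reject(err);
      else resolve(rows);
    });
  });
}

// usage sketch: const rows = await queryAsPromise(fakeDb, 'SELECT * FROM Users');
```

Inside Electron, the Promise returned by queryAsPromise would simply be returned from the ipcMain.handle callback, and ipcRenderer.invoke resolves with its result.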
","version":"Next","tagName":"h2"},{"title":"Leader-Election","type":0,"sectionRef":"#","url":"/leader-election.html","content":"","keywords":"","version":"Next"},{"title":"Use-case-example","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#use-case-example","content":" Imagine we have a website which displays the current temperature of the visitors location in various charts, numbers or heatmaps. To always display the live-data, the website opens a websocket to our API-Server which sends the current temperature every 10 seconds. Using the way most sites are currently build, we can now open it in 5 browser-tabs and it will open 5 websockets which send data 6*5=30 times per minute. This will not only waste the power of your clients device, but also wastes your api-servers resources by opening redundant connections. ","version":"Next","tagName":"h2"},{"title":"Solution","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#solution","content":" The solution to this redundancy is the usage of a leader-election-algorithm which makes sure that always exactly one tab is managing the remote-data-access. The managing tab is the elected leader and stays leader until it is closed. No matter how many tabs are opened or closed, there must be always exactly one leader. You could now start implementing a messaging-system between your browser-tabs, hand out which one is leader, solve conflicts and reassign a new leader when the old one 'dies'. Or just use RxDB which does all these things for you. ","version":"Next","tagName":"h2"},{"title":"Add the leader election plugin","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#add-the-leader-election-plugin","content":" To enable the leader election, you have to add the leader-election plugin. 
import { addRxPlugin } from 'rxdb'; import { RxDBLeaderElectionPlugin } from 'rxdb/plugins/leader-election'; addRxPlugin(RxDBLeaderElectionPlugin); ","version":"Next","tagName":"h2"},{"title":"Code-example","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#code-example","content":" To make it easy, here is an example where the temperature is pulled every ten seconds and saved to a collection. The pulling starts at the moment the open tab becomes the leader. const db = await createRxDatabase({ name: 'weatherDB', storage: getRxStorageDexie(), password: 'myPassword', multiInstance: true }); await db.addCollections({ temperature: { schema: mySchema } }); db.waitForLeadership() .then(() => { console.log('Long lives the king!'); // <- runs when db becomes leader setInterval(async () => { const response = await fetch('https://example.com/api/temp/'); const temp = await response.json(); db.temperature.insert({ degrees: temp, time: new Date().getTime() }); }, 1000 * 10); }); ","version":"Next","tagName":"h2"},{"title":"Handle Duplicate Leaders","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#handle-duplicate-leaders","content":" On rare occasions, it can happen that more than one leader is elected. This can happen when the CPU is at 100% or the JavaScript process is fully blocked for a long time for any other reason. For most cases this is not really a problem because on duplicate leaders, both browser tabs replicate with the same backend anyway. To handle the duplicate leader event, you can access the leader elector and set a handler: import { getLeaderElectorByBroadcastChannel } from 'rxdb/plugins/leader-election'; const leaderElector = getLeaderElectorByBroadcastChannel(broadcastChannel); leaderElector.onduplicate = async () => { // Duplicate leader detected -> reload the page. 
location.reload(); } ","version":"Next","tagName":"h2"},{"title":"Live-Example","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#live-example","content":" In this example the leader is marked with the crown ♛ ","version":"Next","tagName":"h2"},{"title":"Try it out","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#try-it-out","content":" Run the angular-example where the leading tab is marked with a crown on the top-right-corner. ","version":"Next","tagName":"h2"},{"title":"Notice","type":1,"pageTitle":"Leader-Election","url":"/leader-election.html#notice","content":" The leader election is implemented via the broadcast-channel module. The leader is elected between different processes on the same JavaScript runtime, like multiple tabs in the same browser or multiple Node.js processes on the same machine. It will not run between different replicated instances. ","version":"Next","tagName":"h2"},{"title":"RxDB Logger Plugin","type":0,"sectionRef":"#","url":"/logger.html","content":"","keywords":"","version":"Next"},{"title":"Using the logger plugin","type":1,"pageTitle":"RxDB Logger Plugin","url":"/logger.html#using-the-logger-plugin","content":" The logger is a wrapper that can be put around any RxStorage. Once your storage is wrapped, you can create your database with the wrapped storage and the logging will happen automatically. import { wrappedLoggerStorage } from 'rxdb-premium/plugins/logger'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; // wrap a storage with the logger const loggingStorage = wrappedLoggerStorage({ storage: getRxStorageIndexedDB({}) }); // create your database with the wrapped storage const db = await createRxDatabase({ name: 'mydatabase', storage: loggingStorage }); // create collections etc... 
","version":"Next","tagName":"h2"},{"title":"Specify what to be logged","type":1,"pageTitle":"RxDB Logger Plugin","url":"/logger.html#specify-what-to-be-logged","content":" By default, the plugin will log all operations and it will also run a console.time()/console.timeEnd() around each operation. You can specify what to log so that your logs are less noisy. For this you provide a settings object when calling wrappedLoggerStorage(). const loggingStorage = wrappedLoggerStorage({ storage: getRxStorageIndexedDB({}), settings: { // can used to prefix all log strings, default='' prefix: 'my-prefix', /** * Be default, all settings are true. */ // if true, it will log timings with console.time() and console.timeEnd() times: true, // if false, it will not log meta storage instances like used in replication metaStorageInstances: true, // operations bulkWrite: true, findDocumentsById: true, query: true, count: true, info: true, getAttachmentData: true, getChangedDocumentsSince: true, cleanup: true, close: true, remove: true } }); ","version":"Next","tagName":"h2"},{"title":"Using custom logging functions","type":1,"pageTitle":"RxDB Logger Plugin","url":"/logger.html#using-custom-logging-functions","content":" With the logger plugin you can also run custom log functions for all operations. const loggingStorage = wrappedLoggerStorage({ storage: getRxStorageIndexedDB({}), onOperationStart: (operationsName, logId, args) => void, onOperationEnd: (operationsName, logId, args) => void, onOperationError: (operationsName, logId, args, error) => void }); ","version":"Next","tagName":"h2"},{"title":"Storage Migration","type":0,"sectionRef":"#","url":"/migration-storage.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Storage Migration","url":"/migration-storage.html#usage","content":" Lets say you want to migrate from LokiJs to the Dexie.js RxStorage. 
import { migrateStorage } from 'rxdb/plugins/migration-storage'; import { getRxStorageLoki } from 'rxdb/plugins/storage-loki'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // create the new RxDatabase const db = await createRxDatabase({ name: dbLocation, storage: getRxStorageDexie(), multiInstance: false }); await migrateStorage({ database: db as any, /** * Name of the old database, * using the storage migration requires that the * new database has a different name. */ oldDatabaseName: 'myOldDatabaseName', oldStorage: getRxStorageLoki(), // RxStorage of the old database batchSize: 500, // batch size parallel: false, // <- true if it should migrate all collections in parallel. False (default) if it should migrate in serial afterMigrateBatch: (input: AfterMigrateBatchHandlerInput) => { console.log('storage migration: batch processed'); } }); ","version":"Next","tagName":"h2"},{"title":"Migrate from a previous RxDB major version","type":1,"pageTitle":"Storage Migration","url":"/migration-storage.html#migrate-from-a-previous-rxdb-major-version","content":" To migrate from a previous RxDB major version, you have to install the 'old' RxDB in the package.json { "dependencies": { "rxdb-old": "npm:[email protected]" } } Then you can run the migration by providing the old storage: /* ... */ import { migrateStorage } from 'rxdb/plugins/migration-storage'; import { getRxStorageLoki } from 'rxdb-old/plugins/storage-loki'; // <- import from the old RxDB version await migrateStorage({ database: db as any, /** * Name of the old database, * using the storage migration requires that the * new database has a different name. */ oldDatabaseName: 'myOldDatabaseName', oldStorage: getRxStorageLoki(), // RxStorage of the old database batchSize: 500, // batch size parallel: false, afterMigrateBatch: (input: AfterMigrateBatchHandlerInput) => { console.log('storage migration: batch processed'); } }); /* ... 
*/ ","version":"Next","tagName":"h2"},{"title":"Disable Version Check on RxDB Premium 👑","type":1,"pageTitle":"Storage Migration","url":"/migration-storage.html#disable-version-check-on-rxdb-premium-","content":" RxDb Premium has a check in place that ensures that you do not accidentally use the wrong RxDB core and 👑 Premium version together which could break your database state. This can be a problem during migrations where you have multiple versions of RxDB in use and it will throw the error Version mismatch detected. You can disable that check by importing and running the disableVersionCheck() function from RxDB Premium. // RxDB Premium v15 or newer: import { disableVersionCheck } from 'rxdb-premium-old/plugins/shared'; disableVersionCheck(); // RxDB Premium v14: // for esm import { disableVersionCheck } from 'rxdb-premium-old/dist/es/shared/version-check.js'; disableVersionCheck(); // for cjs import { disableVersionCheck } from 'rxdb-premium-old/dist/lib/shared/version-check.js'; disableVersionCheck(); ","version":"Next","tagName":"h2"},{"title":"Node.js Database","type":0,"sectionRef":"#","url":"/nodejs-database.html","content":"","keywords":"","version":"Next"},{"title":"Persistent Database","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#persistent-database","content":" To get a "normal" database connection where the data is persisted to a file system, the RxDB real time database provides multiple storage implementations that work in Node.js. The FoundationDB storage connects to a FoundationDB cluster which itself is just a distributed key-value engine. RxDB adds the NoSQL query-engine, indexes and other features on top of it. It scales horizontally because you can always add more servers to the FoundationDB cluster to increase the capacity. Setting up a RxDB database is pretty simple. 
You import the FoundationDB RxStorage and tell RxDB to use that when calling createRxDatabase: import { createRxDatabase } from 'rxdb'; import { getRxStorageFoundationDB } from 'rxdb/plugins/storage-foundationdb'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageFoundationDB({ apiVersion: 620, clusterFile: '/path/to/fdb.cluster' }) }); // add a collection await db.addCollections({ users: { schema: mySchema } }); // run a query const result = await db.users.find({ selector: { name: 'foobar' } }).exec(); Another alternative storage is the SQLite RxStorage that stores the data inside of a SQLite file-based database. The SQLite storage is faster than FoundationDB and does not require setting up a cluster or anything else because SQLite directly stores and reads the data inside of the filesystem. The downside of that is that it only scales vertically. import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsNode } from 'rxdb-premium/plugins/storage-sqlite'; import sqlite3 from 'sqlite3'; const myRxDatabase = await createRxDatabase({ name: 'path/to/database/file/foobar.db', storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsNode(sqlite3) }) }); Because the SQLite RxStorage is not free and you might not want to set up a FoundationDB cluster, there is also the option to use the LokiJS RxStorage together with the filesystem adapter. This will store the data as plain JSON in a file and load everything into memory on startup. This works great for small prototypes but it is not recommended to be used in production. 
import { createRxDatabase } from 'rxdb'; const LokiFsStructuredAdapter = require('lokijs/src/loki-fs-structured-adapter.js'); import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; const myRxDatabase = await createRxDatabase({ name: 'path/to/database/file/foobar.db', storage: getRxStorageLoki({ adapter: new LokiFsStructuredAdapter() }) }); Here is a performance comparison chart of the different storages (lower is better): ","version":"Next","tagName":"h2"},{"title":"RxDB as Node.js In-Memory Database","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#rxdb-as-nodejs-in-memory-database","content":" One of the easiest ways to use RxDB in Node.js is to use the Memory RxStorage. As the name implies, it stores the data directly in-memory of the Node.js JavaScript process. This makes it really fast to read and write data but of course the data is not persisted and will be lost when the Node.js process exits. Often the in-memory option is used when RxDB is used in unit tests because it automatically cleans up everything afterwards. import { createRxDatabase } from 'rxdb'; import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMemory() }); Also notice that the default memory limit of Node.js is 4 GB (might change in newer versions) so for bigger datasets you might want to increase the limit with the max-old-space-size flag: # increase the Node.js memory limit to 8GB node --max-old-space-size=8192 index.js ","version":"Next","tagName":"h2"},{"title":"Hybrid In-memory-persistence-synced storage","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#hybrid-in-memory-persistence-synced-storage","content":" If you want to have the performance of an in-memory database but require persistence of the data, you can use the memory-synced storage. 
On database creation it will load all data into the memory and on writes it will first write the data into memory and later also write it to the persistent storage in the background. In the following example the FoundationDB storage is used, but any other RxStorage can be used as persistence layer. import { createRxDatabase } from 'rxdb'; import { getRxStorageFoundationDB } from 'rxdb/plugins/storage-foundationdb'; import { getMemorySyncedRxStorage } from 'rxdb-premium/plugins/storage-memory-synced'; const db = await createRxDatabase({ name: 'exampledb', storage: getMemorySyncedRxStorage({ storage: getRxStorageFoundationDB({ apiVersion: 620, clusterFile: '/path/to/fdb.cluster' }) }) }); While this approach gives you a database with great performance and persistence, it has two major downsides: The database size is limited to the memory size. Writes can be lost when the Node.js process exits between a write to the memory state and the background persisting. ","version":"Next","tagName":"h2"},{"title":"Share database between microservices with RxDB","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#share-database-between-microservices-with-rxdb","content":" Using a local, embedded database in Node.js works great until you have to share the data with another Node.js process or another server at all. To share the database state with other instances, RxDB provides two different methods. One is replication and the other is the remote RxStorage. The replication copies over the whole data set to other instances and live-replicates all ongoing writes. This has the benefit of scaling better because each of your microservices will run queries on its own copy of the dataset. Sometimes however you might not want to store the full dataset on each microservice. Then it is better to use the remote RxStorage and connect it to the "main" database. The remote storage will run all operations on the main database and return the result to the calling database. 
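The replication idea, where each instance pulls only the changes that happened since its last known checkpoint and then live-applies them to its own copy, can be sketched independently of the actual RxDB replication protocol. All names below (serverEvents, pullSince, syncOnce) are illustrative, not RxDB APIs:

```javascript
// Simulated server-side event log; in a real setup this would be
// the write history of the "main" database instance.
const serverEvents = [
  { id: 'a', name: 'alice', updatedAt: 1 },
  { id: 'b', name: 'bob', updatedAt: 2 },
  { id: 'a', name: 'alice2', updatedAt: 3 } // later update to doc 'a'
];

// Pull all events that happened after the given checkpoint and
// return the new checkpoint alongside them.
function pullSince(checkpoint) {
  const docs = serverEvents.filter(ev => ev.updatedAt > checkpoint);
  const newCheckpoint = docs.length
    ? Math.max(...docs.map(d => d.updatedAt))
    : checkpoint;
  return { documents: docs, checkpoint: newCheckpoint };
}

// Each microservice keeps its own copy plus its own checkpoint.
const localDocs = new Map();
let localCheckpoint = 0;
function syncOnce() {
  const batch = pullSince(localCheckpoint);
  batch.documents.forEach(d => localDocs.set(d.id, d));
  localCheckpoint = batch.checkpoint;
}
```

After one syncOnce(), the local copy holds the latest version of each document and further syncs are no-ops until new server events arrive; this is the scaling benefit mentioned above, since queries then run entirely on the local copy.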
","version":"Next","tagName":"h2"},{"title":"Follow up on RxDB+Node.js","type":1,"pageTitle":"Node.js Database","url":"/nodejs-database.html#follow-up-on-rxdbnodejs","content":" Check out the RxDB Nodejs example.If you haven't done yet, you should start learning about RxDB with the Quickstart Tutorial.I created a list of embedded JavaSCript databases that you will help you to pick a database if you do not want to use RxDB.Check out the MongoDB RxStorage that uses MongoDB for the database connection from your Node.js application and runs the RxDB real time database on top of it. ","version":"Next","tagName":"h2"},{"title":"Local First / Offline First","type":0,"sectionRef":"#","url":"/offline-first.html","content":"","keywords":"","version":"Next"},{"title":"UX is better without loading spinners","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#ux-is-better-without-loading-spinners","content":" In 'normal' web applications, most user interactions like fetching, saving or deleting data, correspond to a request to the backend server. This means that each of these interactions require the user to await the unknown latency to and from a remote server while looking at a loading spinner. In offline-first apps, the operations go directly against the local storage which happens almost instantly. There is no perceptible loading time and so it is not even necessary to implement a loading spinner at all. As soon as the user clicks, the UI represents the new state as if it was already changed in the backend. ","version":"Next","tagName":"h2"},{"title":"Multi-tab usage just works","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#multi-tab-usage-just-works","content":" Many, even big websites like amazon, reddit and stack overflow do not handle multi tab usage correctly. When a user has multiple tabs of the website open and does a login on one of these tabs, the state does not change on the other tabs. 
On offline first applications, there is always exactly one state of the data across all tabs. Offline first databases (like RxDB) store the data inside of IndexedDB and share the state between all tabs of the same origin. ","version":"Next","tagName":"h2"},{"title":"Latency is more important than bandwidth","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#latency-is-more-important-than-bandwidth","content":" In the past, often the bandwidth was the limiting factor in determining the loading time of an application. But while bandwidth has improved over the years, latency became the limiting factor. You can always increase the bandwidth by setting up more cables or sending more Starlink satellites to space. But reducing the latency is not so easy. It is defined by the physical properties of the transfer medium, the speed of light and the distance to the server. All of these three are hard to optimize. Offline first applications benefit from that because sending the initial state to the client can be done much faster with more bandwidth. And once the data is there, we no longer have to care about the latency to the backend server. ","version":"Next","tagName":"h2"},{"title":"Realtime comes for free","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#realtime-comes-for-free","content":" Most websites lie to their users. They do not lie because they display wrong data, but because they display old data that was loaded from the backend at the time the user opened the site. To overcome this, you could build a realtime website where you create a websocket that streams updates from the backend to the client. This means work. Your client needs to tell the server which page is currently opened and which updates the client is interested in. Then the server can push updates over the websocket and you can update the UI accordingly. With offline first applications, you already have a realtime replication with the backend. 
Most offline first databases provide some concept of a changestream or data subscriptions, and with RxDB you can even directly subscribe to query results or single fields of documents. This makes it easy to have an always updated UI whenever data on the backend changes. ","version":"Next","tagName":"h2"},{"title":"Scales with data size, not with the amount of user interaction","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#scales-with-data-size-not-with-the-amount-of-user-interaction","content":" On normal applications, each user interaction can result in multiple requests to the backend server which increases its load. The more users interact with your application, the more backend resources you have to provide. Offline first applications do not scale up with the amount of user actions but instead they scale up with the amount of data. Once that data is transferred to the client, the user can do as many interactions with it as required without connecting to the server. ","version":"Next","tagName":"h2"},{"title":"Modern apps have longer runtimes","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#modern-apps-have-longer-runtimes","content":" In the past you used websites only for a short time. You opened them, performed some action and then closed them again. This made the first load time the most important metric when evaluating page speed. Today web applications have changed and with them the way we use them. Single page applications are opened once and then used over the whole day. Chat apps, email clients, PWAs and hybrid apps. All of these were made to have long runtimes. This makes the time for user interactions more important than the initial loading time. Offline first applications benefit from that because there is often no loading time on user actions, while loading the initial state to the client is not that relevant. 
","version":"Next","tagName":"h2"},{"title":"You might not need REST","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#you-might-not-need-rest","content":" On normal web applications, you make different requests for each kind of data interaction. For that you have to define a swagger route, implement a route handler on the backend and create some client code to send or fetch data from that route. The more complex your application becomes, the more REST routes you have to maintain and implement. With offline first apps, you have a way to hack around all this cumbersome work. You just replicate the whole state from the server to the client. The replication does not only run once, you have a realtime replication and all changes at one side are automatically there on the other side. On the client, you can access every piece of state with a simple database query. While this of course only works for amounts of data that the client can load and store, it makes implementing prototypes and simple apps much faster. ","version":"Next","tagName":"h2"},{"title":"You might not need Redux","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#you-might-not-need-redux","content":" Data is hard, especially for UI applications where many things can happen at the same time. The user is clicking around. Stuff is loaded from the server. All of these things interact with the global state of the app. To manage this complexity it is common to use state management libraries like Redux or MobX. With them, you write all this lasagna code to wrap the mutation of data and to make the UI react to all these changes. On offline first apps, your global state is already there in a single place stored inside of the local database. You do not have to care whether this data came from the UI, another tab, the backend or another device of the same user. You can just make writes to the database and fetch data out of it. 
","version":"Next","tagName":"h2"},{"title":"Follow up","type":1,"pageTitle":"Local First / Offline First","url":"/offline-first.html#follow-up","content":" Learn how to store and query data with RxDB in the RxDB QuickstartDownsides of Offline First ","version":"Next","tagName":"h2"},{"title":"Middleware","type":0,"sectionRef":"#","url":"/middleware.html","content":"","keywords":"","version":"Next"},{"title":"List","type":1,"pageTitle":"Middleware","url":"/middleware.html#list","content":" RxDB supports the following hooks: preInsertpostInsertpreSavepostSavepreRemovepostRemovepostCreate ","version":"Next","tagName":"h2"},{"title":"Why is there no validate-hook?","type":1,"pageTitle":"Middleware","url":"/middleware.html#why-is-there-no-validate-hook","content":" Different to mongoose, the validation on document-data is running on the field-level for every change to a document. This means if you set the value lastName of a RxDocument, then the validation will only run on the changed field, not the whole document. Therefore it is not useful to have validate-hooks when a document is written to the database. ","version":"Next","tagName":"h3"},{"title":"Use Cases","type":1,"pageTitle":"Middleware","url":"/middleware.html#use-cases","content":" Middleware are useful for atomizing model logic and avoiding nested blocks of async code. Here are some other ideas: complex validationremoving dependent documentsasynchronous defaultsasynchronous tasks that a certain action triggerstriggering custom eventsnotifications ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Middleware","url":"/middleware.html#usage","content":" All hooks have the plain data as first parameter, and all but preInsert also have the RxDocument-instance as second parameter. If you want to modify the data in the hook, change attributes of the first parameter. All hook functions are also this-bind to the RxCollection-instance. 
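The hook semantics described above (plain data as first parameter, mutation of that parameter, a throwing series hook stopping the operation) can be sketched in a simplified synchronous model. `runPreInsertHooks` is a hypothetical helper for illustration, not RxDB's internal implementation:

```javascript
// Simplified model of pre-insert hooks (hypothetical helper, not RxDB internals):
// series hooks run in order and a throw aborts the whole operation,
// while errors of parallel hooks do not stop the write.
function runPreInsertHooks(hooks, plainData) {
  for (const { fn, parallel } of hooks) {
    if (parallel) {
      try { fn(plainData); } catch (e) { /* parallel errors do not stop the insert */ }
    } else {
      fn(plainData); // a throw in a series hook aborts the insert
    }
  }
  return plainData;
}

// Mutating the first parameter changes the data that gets stored.
const result = runPreInsertHooks(
  [{ fn: plainData => { plainData.age = 50; }, parallel: false }],
  { name: 'Alice' }
);
// result.age is now 50

// A throwing series hook stops the insert-operation.
let stopped = false;
try {
  runPreInsertHooks(
    [{ fn: () => { throw new Error('stop'); }, parallel: false }],
    {}
  );
} catch (err) {
  stopped = true; // the operation was aborted
}
```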
","version":"Next","tagName":"h2"},{"title":"Insert","type":1,"pageTitle":"Middleware","url":"/middleware.html#insert","content":" An insert-hook receives the data-object of the new document. lifecycle RxCollection.insert is calledpreInsert series-hookspreInsert parallel-hooksschema validation runsnew document is written to databasepostInsert series-hookspostInsert parallel-hooksevent is emitted to RxDatabase and RxCollection preInsert // series myCollection.preInsert(function(plainData){ // set age to 50 before saving plainData.age = 50; }, false); // parallel myCollection.preInsert(function(plainData){ }, true); // async myCollection.preInsert(function(plainData){ return new Promise(res => setTimeout(res, 100)); }, false); // stop the insert-operation myCollection.preInsert(function(plainData){ throw new Error('stop'); }, false); postInsert // series myCollection.postInsert(function(plainData, rxDocument){ }, false); // parallel myCollection.postInsert(function(plainData, rxDocument){ }, true); // async myCollection.postInsert(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); ","version":"Next","tagName":"h3"},{"title":"Save","type":1,"pageTitle":"Middleware","url":"/middleware.html#save","content":" A save-hook receives the document which is saved. 
lifecycle RxDocument.save is calledpreSave series-hookspreSave parallel-hooksupdated document is written to databasepostSave series-hookspostSave parallel-hooksevent is emitted to RxDatabase and RxCollection preSave // series myCollection.preSave(function(plainData, rxDocument){ // modify anyField before saving plainData.anyField = 'anyValue'; }, false); // parallel myCollection.preSave(function(plainData, rxDocument){ }, true); // async myCollection.preSave(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); // stop the save-operation myCollection.preSave(function(plainData, rxDocument){ throw new Error('stop'); }, false); postSave // series myCollection.postSave(function(plainData, rxDocument){ }, false); // parallel myCollection.postSave(function(plainData, rxDocument){ }, true); // async myCollection.postSave(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); ","version":"Next","tagName":"h3"},{"title":"Remove","type":1,"pageTitle":"Middleware","url":"/middleware.html#remove","content":" A remove-hook receives the document which is removed. 
lifecycle RxDocument.remove is calledpreRemove series-hookspreRemove parallel-hooksdeleted document is written to databasepostRemove series-hookspostRemove parallel-hooksevent is emitted to RxDatabase and RxCollection preRemove // series myCollection.preRemove(function(plainData, rxDocument){ }, false); // parallel myCollection.preRemove(function(plainData, rxDocument){ }, true); // async myCollection.preRemove(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); // stop the remove-operation myCollection.preRemove(function(plainData, rxDocument){ throw new Error('stop'); }, false); postRemove // series myCollection.postRemove(function(plainData, rxDocument){ }, false); // parallel myCollection.postRemove(function(plainData, rxDocument){ }, true); // async myCollection.postRemove(function(plainData, rxDocument){ return new Promise(res => setTimeout(res, 100)); }, false); ","version":"Next","tagName":"h3"},{"title":"postCreate","type":1,"pageTitle":"Middleware","url":"/middleware.html#postcreate","content":" This hook is called whenever a RxDocument is constructed. You can use postCreate to modify every RxDocument-instance of the collection. This adds a flexible way to add specific behavior to every document. You can also use it to add custom getter/setter to documents. PostCreate-hooks cannot be asynchronous. myCollection.postCreate(function(plainData, rxDocument){ Object.defineProperty(rxDocument, 'myField', { get: () => 'foobar', }); }); const doc = await myCollection.findOne().exec(); console.log(doc.myField); // 'foobar' note This hook does not run on already created or cached documents. Make sure to add postCreate-hooks before interacting with the collection. 
","version":"Next","tagName":"h3"},{"title":"Performance tips for RxDB and other NoSQL databases","type":0,"sectionRef":"#","url":"/nosql-performance-tips.html","content":"","keywords":"","version":"Next"},{"title":"Use bulk operations","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#use-bulk-operations","content":" When you run write operations on multiple documents, make sure you use bulk operations instead of single document operations. // wrong ❌ for(const docData of dataAr){ await myCollection.insert(docData); } // right ✔️ await myCollection.bulkInsert(dataAr); ","version":"Next","tagName":"h2"},{"title":"Help the query planner by adding operators that better restrict the index range","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#help-the-query-planner-by-adding-operators-that-better-restrict-the-index-range","content":" Often on complex queries, RxDB (and other databases) do not pick the optimal index range when querying a result set. You can add additional restrictive operators to ensure the query runs over a smaller index space and has a better performance. Let's see some examples for different query types. /** * Adding a restrictive operator for an $or query * so that it better limits the index space for the time-field. */ const orQuery = { selector: { $or: [ { time: { $gt: 1234 }, }, { time: { $eq: 1234 }, user: { $gt: 'foobar' } }, ], time: { $gte: 1234 } // <- add restrictive operator } } /** * Adding a restrictive operator for a $regex query * so that it better limits the index space for the user-field. * We know that all matching fields start with 'foo' so we can * tell the query to use that as a lower constraint for the index. 
*/ const regexQuery = { selector: { user: { $regex: '^foo(.*)0-9$', // a complex regex with a ^ in the beginning $gte: 'foo' // <- add restrictive operator } } } /** * Adding a restrictive operator for a query on an enum field. * so that it better limits the index space for the status-field. */ const enumQuery = { selector: { /** * Here let's assume our status field has the enum type ['idle', 'in-progress', 'done'] * so our restrictive operator can exclude all documents with 'done' as status. */ status: { $in: [ 'idle', 'in-progress' ], $gt: 'done' // <- add restrictive operator on status } } } ","version":"Next","tagName":"h2"},{"title":"Set a specific index","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#set-a-specific-index","content":" Sometimes the query planner of the database itself has no chance of picking the best index of the possible given indexes. For queries where performance is very important, you might want to explicitly specify which index must be used. const myQuery = myCollection.find({ selector: { /* ... */ }, // explicitly specify index index: [ 'fieldA', 'fieldB' ] }); ","version":"Next","tagName":"h2"},{"title":"Try different ordering of index fields","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#try-different-ordering-of-index-fields","content":" The order of the fields in a compound index is very important for performance. When optimizing index usage, you should try out different orders on the index fields and measure which runs faster. For that it is very important to run tests on real-world data where the distribution of the data is the same as in production. 
For example when there is a query on a user collection with an age and a gender field, it depends on the distribution of data whether the index ['gender', 'age'] performs better than ['age', 'gender']: const query = myCollection .findOne({ selector: { age: { $gt: 18 }, gender: { $eq: 'm' } }, /** * Because the developer knows that 50% of the documents are 'male', * but only 20% are below age 18, * it makes sense to enforce using the ['gender', 'age'] index to improve performance. * This could not be known by the query planner which might have chosen ['age', 'gender'] instead. */ index: ['gender', 'age'] }); Notice that RxDB has the Query Optimizer Plugin that can be used to automatically find the best indexes. ","version":"Next","tagName":"h2"},{"title":"Make a Query "hot" to reduce load","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#make-a-query-hot-to-reduce-load","content":" When the up-to-date result set of a query is needed more than once, you might want to make the query "hot" by permanently subscribing to it. This ensures that the query result is kept up to date by RxDB and the EventReduce algorithm at any time so that at the moment you need the current results, it has them already. For example when you use RxDB in Node.js for a webserver, you should use an outer "hot" query instead of running the same query again on every request to a route. // wrong ❌ app.get('/list', async (req, res) => { const result = await myCollection.find({/* ... */}).exec(); res.send(JSON.stringify(result)); }); // right ✔️ const query = myCollection.find({/* ... 
*/}); query.subscribe(); // <- make it hot app.get('/list', async (req, res) => { const result = await query.exec(); res.send(JSON.stringify(result)); }); ","version":"Next","tagName":"h2"},{"title":"Store parts of your document data as attachment","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#store-parts-of-your-document-data-as-attachment","content":" For in-app databases like RxDB, it does not make sense to partially parse the JSON of a document. Instead, the whole document JSON is always parsed and handled. This has a better performance because JSON.parse() in JavaScript directly calls a C++ binding which can parse really fast compared to a partial parsing in JavaScript itself. Also by always having the full document, RxDB can de-duplicate memory caches of documents across multiple queries. The downside is that very big documents with a complex structure can increase query time significantly. Document fields with complex structures that are mostly not in use can be moved into an attachment. This would lead RxDB to not fetch the attachment data each time the document is loaded from disk; instead it is only fetched when explicitly asked for. const myDocument = await myCollection.insert({/* ... */}); const attachment = await myDocument.putAttachment( { id: 'otherStuff.json', data: createBlob(JSON.stringify({/* ... */}), 'application/json'), type: 'application/json' } ); ","version":"Next","tagName":"h2"},{"title":"Process queries in a worker process","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#process-queries-in-a-worker-process","content":" Moving database storage into a WebWorker can significantly improve performance in web applications that use RxDB or similar NoSQL databases. When database operations are executed in the main JavaScript thread, they can block or slow down the User Interface, especially during heavy or complex data operations. 
By offloading these operations to a WebWorker, you effectively separate the data processing workload from the UI thread. This means the main thread remains free to handle user interactions and render updates without delay, leading to a smoother and more responsive user experience. Additionally, WebWorkers allow for parallel data processing, which can expedite tasks like querying and indexing. This approach not only enhances UI responsiveness but also optimizes overall application performance by leveraging the multi-threading capabilities of modern browsers. With RxDB you can use the Worker and SharedWorker plugin to move the query processing away from the main thread. ","version":"Next","tagName":"h2"},{"title":"Use less plugins and hooks","type":1,"pageTitle":"Performance tips for RxDB and other NoSQL databases","url":"/nosql-performance-tips.html#use-less-plugins-and-hooks","content":" Utilizing fewer hooks and plugins in RxDB or similar NoSQL database systems can lead to markedly better performance. Each additional hook or plugin introduces extra layers of processing and potential overhead, which can cumulatively slow down database operations. These extensions often execute additional code or enforce extra checks with each operation, such as insertions, updates, or deletions. While they can provide valuable functionalities or custom behaviors, their overuse can inadvertently increase the complexity and execution time of basic database operations. By minimizing their use and only employing essential hooks and plugins, the system can operate more efficiently. This streamlined approach reduces the computational burden on each transaction, leading to faster response times and a more efficient overall data handling process, especially critical in high-load or real-time applications where performance is paramount. 
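The index-ordering advice above (measure on production-like data) can be sketched as a quick selectivity check: count which operator matches the smallest fraction of documents to decide which index orders are worth benchmarking. This is a plain JavaScript illustration with invented sample data, not a replacement for measuring real query times:

```javascript
// Fraction of documents a predicate matches: lower = more selective.
function selectivity(docs, predicate) {
  return docs.filter(predicate).length / docs.length;
}

// Production-like sample data (invented here for illustration).
const sampleUsers = [
  { age: 30, gender: 'm' }, { age: 12, gender: 'f' },
  { age: 45, gender: 'f' }, { age: 16, gender: 'm' },
  { age: 22, gender: 'f' }, { age: 70, gender: 'f' }
];

const genderSelectivity = selectivity(sampleUsers, d => d.gender === 'm'); // 2/6
const ageSelectivity = selectivity(sampleUsers, d => d.age > 18);          // 4/6

// Here the equality operator on gender matches fewer documents than the
// range operator on age, which mirrors the ['gender', 'age'] choice in the
// compound-index example above - but still benchmark both orders on real data.
```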
","version":"Next","tagName":"h2"},{"title":"Object-Data-Relational-Mapping","type":0,"sectionRef":"#","url":"/orm.html","content":"","keywords":"","version":"Next"},{"title":"statics","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#statics","content":" Statics are defined collection-wide and can be called on the collection. ","version":"Next","tagName":"h2"},{"title":"Add statics to a collection","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#add-statics-to-a-collection","content":" To add static functions, pass a statics-object when you create your collection. The object contains functions, mapped to their function-names. const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, statics: { scream: function(){ return 'AAAH!!'; } } } }); console.log(heroes.scream()); // 'AAAH!!' You can also use the this-keyword which resolves to the collection: const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, statics: { whoAmI: function(){ return this.name; } } } }); console.log(heroes.whoAmI()); // 'heroes' ","version":"Next","tagName":"h3"},{"title":"instance-methods","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#instance-methods","content":" Instance-methods are defined collection-wide. They can be called on the RxDocuments of the collection. ","version":"Next","tagName":"h2"},{"title":"Add instance-methods to a collection","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#add-instance-methods-to-a-collection","content":" const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, methods: { scream: function(){ return 'AAAH!!'; } } } }); const doc = await heroes.findOne().exec(); console.log(doc.scream()); // 'AAAH!!' 
Here you can also use the this-keyword: const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, methods: { whoAmI: function(){ return 'I am ' + this.name + '!!'; } } } }); await heroes.insert({ name: 'Skeletor' }); const doc = await heroes.findOne().exec(); console.log(doc.whoAmI()); // 'I am Skeletor!!' ","version":"Next","tagName":"h3"},{"title":"attachment-methods","type":1,"pageTitle":"Object-Data-Relational-Mapping","url":"/orm.html#attachment-methods","content":" Attachment-methods are defined collection-wide. They can be called on the RxAttachments of the RxDocuments of the collection. const heroes = await myDatabase.addCollections({ heroes: { schema: mySchema, attachments: { scream: function(){ return 'AAAH!!'; } } } }); const doc = await heroes.findOne().exec(); const attachment = await doc.putAttachment({ id: 'cat.txt', data: 'meow I am a kitty', type: 'text/plain' }); console.log(attachment.scream()); // 'AAAH!!' ","version":"Next","tagName":"h2"},{"title":"Migrate Database Data on schema changes","type":0,"sectionRef":"#","url":"/migration-schema.html","content":"","keywords":"","version":"Next"},{"title":"Providing strategies","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#providing-strategies","content":" Upon creation of a collection, you have to provide migrationStrategies when your schema's version-number is greater than 0. To do this, you have to add an object to the migrationStrategies property where a function for every schema-version is assigned. A migrationStrategy is a function which gets the old document-data as a parameter and returns the new, transformed document-data. If the strategy returns null, the document will be removed instead of migrated. 
myDatabase.addCollections({ messages: { schema: messageSchemaV1, migrationStrategies: { // 1 means, this transforms data from version 0 to version 1 1: function(oldDoc){ oldDoc.time = new Date(oldDoc.time).getTime(); // string to unix return oldDoc; } } } }); Asynchronous strategies can also be used: myDatabase.addCollections({ messages: { schema: messageSchemaV1, migrationStrategies: { 1: function(oldDoc){ oldDoc.time = new Date(oldDoc.time).getTime(); // string to unix return oldDoc; }, /** * 2 means, this transforms data from version 1 to version 2 * this returns a promise which resolves with the new document-data */ 2: function(oldDoc){ // in the new schema (version: 2) we defined 'senderCountry' as required field (string) // so we must get the country of the message-sender from the server const coordinates = oldDoc.coordinates; return fetch('http://myserver.com/api/countryByCoordinates/'+coordinates+'/') .then(response => response.json()) .then(country => { oldDoc.senderCountry = country; return oldDoc; }); } } } }); You can also filter which documents should be migrated: myDatabase.addCollections({ messages: { schema: messageSchemaV1, migrationStrategies: { // 1 means, this transforms data from version 0 to version 1 1: function(oldDoc){ oldDoc.time = new Date(oldDoc.time).getTime(); // string to unix return oldDoc; }, /** * this removes all documents older than 2017-02-12 * they will not appear in the new collection */ 2: function(oldDoc){ if(oldDoc.time < 1486940585) return null; else return oldDoc; } } } }); ","version":"Next","tagName":"h2"},{"title":"autoMigrate","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#automigrate","content":" By default, the migration automatically happens when the collection is created. Calling RxDatabase.addCollections() returns only when the migration has finished. 
If you have lots of data or the migrationStrategies take a long time, it might be better to start the migration 'by hand' and show the migration-state to the user as a loading-bar. const messageCol = await myDatabase.addCollections({ messages: { schema: messageSchemaV1, autoMigrate: false, // <- migration will not run at creation migrationStrategies: { 1: async function(oldDoc){ ... anything that takes very long ... return oldDoc; } } } }); // check if migration is needed const needed = await messageCol.migrationNeeded(); if(needed == false) return; // start the migration messageCol.startMigration(10); // 10 is the batch-size, how many docs will run in parallel const migrationState = messageCol.getMigrationState(); // 'start' the observable migrationState.$.subscribe( state => console.dir(state), error => console.error(error), done => console.log('done') ); // the emitted states look like this: { status: 'RUNNING', // oneOf 'RUNNING' | 'DONE' | 'ERROR' count: { total: 50, // amount of documents which must be migrated handled: 0, // amount of handled docs percent: 0 // percentage [0-100] } } If you don't want to show the state to the user, you can also use .migratePromise(): const migrationPromise = messageCol.migratePromise(10); await migrationPromise; ","version":"Next","tagName":"h2"},{"title":"migrationStates()","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migrationstates","content":" RxDatabase.migrationStates() returns an Observable that emits all migration states of any collection of the database. Use this when you add collections dynamically and want to show a loading-state of the migrations to the user. 
const allStatesObservable = myDatabase.migrationStates(); allStatesObservable.subscribe(allStates => { allStates.forEach(migrationState => { console.log( 'migration state of ' + migrationState.collection.name ); }); }); ","version":"Next","tagName":"h2"},{"title":"Migrating attachments","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migrating-attachments","content":" When you store RxAttachments together with your document, they can also be changed, added or removed while running the migration. You can do this by mutating the oldDoc._attachments property. import { createBlob } from 'rxdb'; const migrationStrategies = { 1: async function(oldDoc){ // do nothing with _attachments to keep all attachments and have them in the new collection version. return oldDoc; }, 2: async function(oldDoc){ // set _attachments to an empty object to delete all existing ones during the migration. oldDoc._attachments = {}; return oldDoc; }, 3: async function(oldDoc){ // update the data field of a single attachment to change its data. oldDoc._attachments.myFile.data = await createBlob( 'my new text', oldDoc._attachments.myFile.content_type ); return oldDoc; } } ","version":"Next","tagName":"h2"},{"title":"Migration on multi-tab in browsers","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migration-on-multi-tab-in-browsers","content":" If you use RxDB in a multiInstance environment, like a browser, it will ensure that exactly one tab is running a migration of a collection. Also the migrationState.$ events are emitted between browser tabs. ","version":"Next","tagName":"h2"},{"title":"Migration and Replication","type":1,"pageTitle":"Migrate Database Data on schema changes","url":"/migration-schema.html#migration-and-replication","content":" If you use any of the RxReplication plugins, the migration will also run on the internal replication-state storage. 
It will migrate all assumedMasterState documents so that after the migration is done, you do not have to re-run the replication from scratch. RxDB assumes that you run the exact same migration on the servers and the clients. Notice that the replication pull-checkpoint will not be migrated. Your backend must be compatible with pull-checkpoints of older versions. ","version":"Next","tagName":"h2"},{"title":"Creating Plugins","type":0,"sectionRef":"#","url":"/plugins.html","content":"","keywords":"","version":"Next"},{"title":"rxdb","type":1,"pageTitle":"Creating Plugins","url":"/plugins.html#rxdb","content":" The rxdb-property signals that this plugin is an rxdb-plugin. The value should always be true. ","version":"Next","tagName":"h2"},{"title":"prototypes","type":1,"pageTitle":"Creating Plugins","url":"/plugins.html#prototypes","content":" The prototypes-property contains a function for each of RxDB's internal prototypes that you want to manipulate. Each function gets the prototype-object of the corresponding class as a parameter and then can modify it. You can see a list of all available prototypes here ","version":"Next","tagName":"h2"},{"title":"overwritable","type":1,"pageTitle":"Creating Plugins","url":"/plugins.html#overwritable","content":" Some of RxDB's functions are not inside of a class-prototype but are static. You can set and overwrite them with the overwritable-object. You can see a list of all overwritables here. hooks Sometimes you don't want to overwrite an existing RxDB-method, but extend it. You can do this by adding hooks which will be called each time the code jumps into the hook's corresponding call. You can find a list of all hooks here. options RxDatabase and RxCollection have an additional options-parameter, which can be filled with any data required by the plugin. 
const collections = await myDatabase.addCollections({ foo: { schema: mySchema, options: { // anything can be passed into the options foo: ()=>'bar' } } }) // Afterwards you can use these options in your plugin. collections.foo.options.foo(); // 'bar' ","version":"Next","tagName":"h2"},{"title":"QueryCache","type":0,"sectionRef":"#","url":"/query-cache.html","content":"","keywords":"","version":"Next"},{"title":"Cache Replacement Policy","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#cache-replacement-policy","content":" To not let RxDB fill up all the memory, a cache replacement policy is defined that clears up the cached queries. This is implemented as a function which runs regularly, depending on when queries are created and the database is idle. The default policy should be good enough for most use cases but defining custom ones can also make sense. ","version":"Next","tagName":"h2"},{"title":"The default policy","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#the-default-policy","content":" The default policy starts cleaning up queries depending on how many queries are in the cache and how much document data they contain. It will never uncache queries that have subscribers to their resultsIt tries to always have fewer than 100 queries without subscriptions in the cache.It prefers to uncache queries that have never been executed and are older than 30 secondsIt prefers to uncache queries that have not been used for a longer time ","version":"Next","tagName":"h2"},{"title":"Other references to queries","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#other-references-to-queries","content":" With JavaScript, it is not possible to count references to variables. Therefore it might happen that an uncached RxQuery is still referenced by the user's code and used to get results. This should never be a problem; uncached queries must still work. Creating the same query again, however, will result in having two RxQuery instances instead of one. 
","version":"Next","tagName":"h2"},{"title":"Using a custom policy","type":1,"pageTitle":"QueryCache","url":"/query-cache.html#using-a-custom-policy","content":" A cache replacement policy is a normal JavaScript function according to the type RxCacheReplacementPolicy. It gets the RxCollection as first parameter and the QueryCache as second. Then it iterates over the cached RxQuery instances and uncaches the desired ones with uncacheRxQuery(rxQuery). When you create your custom policy, you should have a look at the default. To apply a custom policy to a RxCollection, add the function as attribute cacheReplacementPolicy. const collection = await myDatabase.addCollections({ humans: { schema: mySchema, cacheReplacementPolicy: function(){ /* ... */ } } }); ","version":"Next","tagName":"h2"},{"title":"Questions and answers","type":0,"sectionRef":"#","url":"/questions-answers.html","content":"","keywords":"","version":"Next"},{"title":"Can't change the schema of a collection","type":1,"pageTitle":"Questions and answers","url":"/questions-answers.html#cant-change-the-schema-of-a-collection","content":" When you make changes to the schema of a collection, you can sometimes get an error like Error: addCollections(): another instance created this collection with a different schema. This means you have created a collection before and added document-data to it. When you now just change the schema, it is likely that the new schema does not match the saved documents inside of the collection. This would cause strange bugs and would be hard to debug, so RxDB checks if your schema has changed and throws an error. 
To change the schema in production-mode, do the following steps: Increase the version by 1Add the appropriate migrationStrategies so the saved data will be modified to match the new schema In development-mode, the schema-change can be simplified by one of these strategies: Use the memory-storage so your db resets on restart and your schema is not saved permanentlyCall removeRxDatabase('mydatabasename', RxStorage); before creating a new RxDatabase-instanceAdd a timestamp as suffix to the database-name to create a new one each run like name: 'heroesDB' + new Date().getTime() ","version":"Next","tagName":"h2"},{"title":"Why is the PouchDB RxStorage deprecated?","type":1,"pageTitle":"Questions and answers","url":"/questions-answers.html#why-is-the-pouchdb-rxstorage-deprecated","content":" When I started developing RxDB in 2016, I had a specific use case to solve. Because there was no client-side database out there that fitted, I created RxDB as a wrapper around PouchDB. This worked great and all the PouchDB features like the query engine, the adapter system, CouchDB-replication and so on, came for free. But over the years, it became clear that PouchDB is not suitable for many applications, mostly because of its performance: To be compliant with CouchDB, PouchDB has to store all revision trees of documents which slows down queries. Also purging these document revisions is not possible, so the database storage size will only increase over time. Another problem was that many issues in PouchDB have never been fixed, but only closed by the issue-bot like this one. The whole PouchDB RxStorage code was full of workarounds and monkey patches to resolve these issues for RxDB users. Many of these patches decreased performance even further. Sometimes it was not possible to fix things from the outside, for example queries with $gt operators returned the wrong documents which is a no-go for a production database and hard to debug. 
In version 10.0.0 RxDB introduced the RxStorage layer which allows users to swap out the underlying storage engine where RxDB stores and queries documents from. This made it possible to use alternatives to PouchDB, for example the Dexie RxStorage in browsers or even the FoundationDB RxStorage on the server side. There were not many use cases left where it was a good choice to use the PouchDB RxStorage. Only replicating with a CouchDB server was still only possible with PouchDB. But this has also changed. RxDB has a plugin that allows clients to replicate with any CouchDB server by using the RxDB replication protocol. This plugin works with any RxStorage so that it is not necessary to use the PouchDB storage. Removing PouchDB allows RxDB to add many long-awaited features like filtered change streams for easier replication and permission handling. It will also free up development time. If you are currently using the PouchDB RxStorage, you have these options: Migrate to another RxStorage (recommended). Never update RxDB to the next major version (stay on the older 14.0.0). Fork the PouchDB RxStorage and maintain the plugin by yourself. Fix all the PouchDB problems so that we can add PouchDB to the RxDB Core again. ","version":"Next","tagName":"h2"},{"title":"Query Optimizer","type":0,"sectionRef":"#","url":"/query-optimizer.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Query Optimizer","url":"/query-optimizer.html#usage","content":" import { findBestIndex } from 'rxdb-premium/plugins/query-optimizer'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/indexeddb'; const bestIndexes = await findBestIndex({ schema: myRxJsonSchema, /** * In this example we use the IndexedDB RxStorage, * but any other storage can be used for testing. */ storage: getRxStorageIndexedDB(), /** * Multiple queries can be optimized at the same time * which decreases the overall runtime. 
*/ queries: { /** * Queries can be mapped by a query id, * here we use myFirstQuery as query id. */ myFirstQuery: { selector: { age: { $gt: 10 } }, }, mySecondQuery: { selector: { age: { $gt: 10 }, lastName: { $eq: 'Nakamoto' } }, } }, testData: [/** data for the documents. **/] }); ","version":"Next","tagName":"h2"},{"title":"Important details","type":1,"pageTitle":"Query Optimizer","url":"/query-optimizer.html#important-details","content":" This is a build time tool. You should use it to find the best indexes for your queries during build time. Then you store these results and your application can use the best indexes during run time. It makes no sense to run the optimization with a different RxStorage (+settings) than what you use in production. The result of the query optimizer is heavily dependent on the RxStorage and JavaScript runtime. For example it makes no sense to run the optimization in Node.js and then use the optimized indexes in the browser. It is very important that you use production-like testData. Finding the best index heavily depends on data distribution and the amount of stored/queried documents. For example if you store and query users with an age field, it makes no sense to just use a random number for the age because in production the age of your users is not equally distributed. The higher you set runs, the more test cycles will be performed and the more significant the time measurements will be, which leads to a better index selection. ","version":"Next","tagName":"h2"},{"title":"Population","type":0,"sectionRef":"#","url":"/population.html","content":"","keywords":"","version":"Next"},{"title":"Schema with ref","type":1,"pageTitle":"Population","url":"/population.html#schema-with-ref","content":" The ref-keyword in properties describes which collection the field-value belongs to (has a relationship). 
export const refHuman = { title: 'human related to other human', version: 0, primaryKey: 'name', properties: { name: { type: 'string' }, bestFriend: { ref: 'human', // refers to collection human type: 'string' // ref-values must always be string or ['string','null'] (primary of foreign RxDocument) } } }; You can also have a one-to-many reference by using a string-array. export const schemaWithOneToManyReference = { version: 0, primaryKey: 'name', type: 'object', properties: { name: { type: 'string' }, friends: { type: 'array', ref: 'human', items: { type: 'string' } } } }; ","version":"Next","tagName":"h2"},{"title":"populate()","type":1,"pageTitle":"Population","url":"/population.html#populate","content":" ","version":"Next","tagName":"h2"},{"title":"via method","type":1,"pageTitle":"Population","url":"/population.html#via-method","content":" To get the referred RxDocument, you can use the populate()-method. It takes the field-path as attribute and returns a Promise which resolves to the foreign document or null if not found. await humansCollection.insert({ name: 'Alice', bestFriend: 'Carol' }); await humansCollection.insert({ name: 'Bob', bestFriend: 'Alice' }); const doc = await humansCollection.findOne('Bob').exec(); const bestFriend = await doc.populate('bestFriend'); console.dir(bestFriend); //> RxDocument[Alice] ","version":"Next","tagName":"h3"},{"title":"via getter","type":1,"pageTitle":"Population","url":"/population.html#via-getter","content":" You can also get the populated RxDocument with the direct getter. Therefore you have to add an underscore suffix _ to the fieldname. This works also on nested values. 
await humansCollection.insert({ name: 'Alice', bestFriend: 'Carol' }); await humansCollection.insert({ name: 'Bob', bestFriend: 'Alice' }); const doc = await humansCollection.findOne('Bob').exec(); const bestFriend = await doc.bestFriend_; // notice the underscore_ console.dir(bestFriend); //> RxDocument[Alice] ","version":"Next","tagName":"h3"},{"title":"Example with nested reference","type":1,"pageTitle":"Population","url":"/population.html#example-with-nested-reference","content":" const myCollection = await myDatabase.addCollections({ human: { schema: { version: 0, type: 'object', properties: { name: { type: 'string' }, family: { type: 'object', properties: { mother: { type: 'string', ref: 'human' } } } } } } }); const mother = await myDocument.family.mother_; console.dir(mother); //> RxDocument ","version":"Next","tagName":"h2"},{"title":"Example with array","type":1,"pageTitle":"Population","url":"/population.html#example-with-array","content":" const myCollection = await myDatabase.addCollections({ human: { schema: { version: 0, type: 'object', properties: { name: { type: 'string' }, friends: { type: 'array', ref: 'human', items: { type: 'string' } } } } } }); //[insert other humans here] await myCollection.insert({ name: 'Alice', friends: [ 'Bob', 'Carol', 'Dave' ] }); const doc = await humansCollection.findOne('Alice').exec(); const friends = await myDocument.friends_; console.dir(friends); //> Array.<RxDocument> ","version":"Next","tagName":"h2"},{"title":"RxDB Quickstart","type":0,"sectionRef":"#","url":"/quickstart.html","content":"","keywords":"","version":"Next"},{"title":"Installation","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#installation","content":" RxDB is distributed via npm and uses rxjs as a dependency. 
Install both with: npm install rxjs rxdb --save ","version":"Next","tagName":"h2"},{"title":"Enable dev-mode","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#enable-dev-mode","content":" When you use RxDB in development, you should enable the dev-mode plugin which adds helpful checks and validations and tells you if you do something wrong. import { addRxPlugin } from 'rxdb'; import { RxDBDevModePlugin } from 'rxdb/plugins/dev-mode'; addRxPlugin(RxDBDevModePlugin); ","version":"Next","tagName":"h2"},{"title":"Creating an RxDatabase","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#creating-an-rxdatabase","content":" ","version":"Next","tagName":"h2"},{"title":"Choose an RxStorage adapter","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#choose-an-rxstorage-adapter","content":" RxDB can be used in a range of JavaScript runtime environments, and depending on the runtime the appropriate RxStorage adapter must be used. For browser applications it is recommended to start with the Dexie.js RxStorage adapter which is bundled with RxDB. import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; ","version":"Next","tagName":"h3"},{"title":"Create the RxDatabase","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#create-the-rxdatabase","content":" You can now create the RxDatabase instance: import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const myDatabase = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageDexie() }); ","version":"Next","tagName":"h3"},{"title":"Create an RxCollection","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#create-an-rxcollection","content":" An RxDatabase contains RxCollections for storing and querying data. A collection is similar to a SQL table, and individual records are stored in the collection as JSON documents. An RxDatabase can have as many collections as you need. 
Creating a schema for a collection RxDB uses JSON Schema to describe the documents stored in each collection. For our example app we create a simple schema that describes a todo document: const todoSchema = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, name: { type: 'string' }, done: { type: 'boolean' }, timestamp: { type: 'string', format: 'date-time' } }, required: ['id', 'name', 'done', 'timestamp'] } Adding an RxCollection to the RxDatabase With this schema we can now add the todos collection to the database: await myDatabase.addCollections({ todos: { schema: todoSchema } }); ","version":"Next","tagName":"h3"},{"title":"Write Operations","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#write-operations","content":" Now that we have an RxCollection we can store some documents in it. ","version":"Next","tagName":"h2"},{"title":"Inserting a document","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#inserting-a-document","content":" const myDocument = await myDatabase.todos.insert({ id: 'todo1', name: 'Learn RxDB', done: false, timestamp: new Date().toISOString() }); ","version":"Next","tagName":"h3"},{"title":"Updating a document","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#updating-a-document","content":" There are multiple ways to update an RxDocument. The simplest is with patch: await myDocument.patch({ done: true }); You can also use modify which takes a plain JavaScript function that mutates the document state and returns the mutated version. await myDocument.modify(docData => { docData.done = true; return docData; }); ","version":"Next","tagName":"h3"},{"title":"Delete a document","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#delete-a-document","content":" You can soft delete an RxDocument by calling myDocument.remove(). 
This will set the document's state to DELETED which ensures that it will not be returned in query results. RxDB keeps deleted documents in the database so that it is able to sync the deleted state to other instances during database replication. Deleted documents can be purged in a later point with the cleanup plugin if needed. ","version":"Next","tagName":"h3"},{"title":"Query Operations","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#query-operations","content":" ","version":"Next","tagName":"h2"},{"title":"Simple Query","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#simple-query","content":" Like many NoSQL databases, RxDB uses the Mango syntax for query operations. To run a query, you first create an RxQuery object with myCollection.find() and then call .exec() on that object to fetch the query results. const foundDocuments = await myDatabase.todos.find({ selector: { done: { $eq: false } } }).exec(); More Mango query examples can be found at the RxQuery Examples. In addition to the .find() RxQuery, RxDB has additional query methods for fetching the documents you need: findOne()findByIds() ","version":"Next","tagName":"h3"},{"title":"Observing data","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#observing-data","content":" You might want to subscribe to data changes so that your UI is always up-to-date with the data stored on disc. RxDB allows you to subscribe to data changes even when the change happens in another part of your application, another browser tab, or during database replication/synchronization. ","version":"Next","tagName":"h2"},{"title":"Observing queries","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#observing-queries","content":" To observe changes to records returned from a query, instead of calling .exec() you get the observable of the RxQuery object via .$ and then subscribe to it. 
const observable = myDatabase.todos.find({ selector: { done: { $eq: false } } }).$; observable.subscribe(notDone => { console.log('Currently have ' + notDone.length + ' things to do'); }); ","version":"Next","tagName":"h3"},{"title":"Subscribe to a document value","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#subscribe-to-a-document-value","content":" You can also subscribe to the fields of a single RxDocument. Add the $ sign to the desired field and then subscribe to the returned observable. myDocument.done$.subscribe(isDone => { console.log('done: ' + isDone); }); ","version":"Next","tagName":"h3"},{"title":"Replication","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#replication","content":" RxDB has multiple replication plugins to replicate database state with a server. The easiest way to replicate data between your clients' devices is the WebRTC replication plugin that replicates data between devices without a centralized server. This makes it easy to try out replication without having to host anything. import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; replicateWebRTC({ collection: myDatabase.todos, connectionHandlerCreator: getConnectionHandlerSimplePeer({}), topic: '', // <- set any app-specific room id here. secret: 'mysecret', pull: {}, push: {}, }) ","version":"Next","tagName":"h3"},{"title":"Next steps","type":1,"pageTitle":"RxDB Quickstart","url":"/quickstart.html#next-steps","content":" You are now ready to dive deeper into RxDB. There is a full implementation of the quickstart guide here so you can clone that repository and play with the code. Also please continue reading the documentation, join the community on our Discord chat, and star the GitHub repo. If you are using RxDB in a production environment and are able to support its continued development, please take a look at the 👑 Premium package which includes additional plugins and utilities. 
","version":"Next","tagName":"h2"},{"title":"React Native Database","type":0,"sectionRef":"#","url":"/react-native-database.html","content":"","keywords":"","version":"Next"},{"title":"Database Solutions for React-Native","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#database-solutions-for-react-native","content":" There are multiple database solutions that can be used with React Native. While I would recommend to use RxDB for most use cases, it is still helpful to learn about other alternatives. ","version":"Next","tagName":"h2"},{"title":"AsyncStorage","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#asyncstorage","content":" AsyncStorage is a key->value storage solution that works similar to the browsers localstorage API. The big difference is that access to the AsyncStorage is not a blocking operation but instead everything is Promise based. This is a big benefit because long running writes and reads will not block your JavaScript process which would cause a laggy user interface. /** * Because it is Promise-based, * you have to 'await' the call to getItem() */ await setItem('myKey', 'myValue'); const value = await AsyncStorage.getItem('myKey'); AsyncStorage was originally included in React Native itself. But it was deprecated by the React Native Team which recommends to use a community based package instead. There is a community fork of AsyncStorage that is actively maintained and open source. AsyncStorage is fine when only a small amount of data needs to be stored and when no query capabilities besides the key-access are required. Complex queries or features are not supported which makes AsyncStorage not suitable for anything more then storing simple user settings data. 
","version":"Next","tagName":"h3"},{"title":"SQLite","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#sqlite","content":" SQLite is a SQL based relational database written in C that was crafted to be embed inside of applications. Operations are written in the SQL query language and SQLite generally follows the PostgreSQL syntax. To use SQLite in React Native, you first have to include the SQLite library itself as a plugin. There a different project out there that can be used, but I would recommend to use the react-native-quick-sqlite project. First you have to install the library into your React Native project via npm install react-native-quick-sqlite. In your code you can then import the library and create a database connection: import {open} from 'react-native-quick-sqlite'; const db = open('myDb.sqlite'); Notice that SQLite is a file based database where all data is stored directly in the filesystem of the OS. Therefore to create a connection, you have to provide a filename. With the open connection you can then run SQL queries: let { rows } = db.execute('SELECT somevalue FROM sometable'); If that does not work for you, you might want to try the react-native-sqlite-storage project instead which is also very popular. The downside of SQLite is that it is lacking many features that are handful when using a database together with an UI based application. For example it is not possible to observe queries or document fields. Also there is no replication method. This makes SQLite a good solution when you want to solely store data on the client, but not when you want to sync data with a server or other clients. ","version":"Next","tagName":"h3"},{"title":"PouchDB","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#pouchdb","content":" PouchDB is a JavaScript NoSQL database that follows the API of the Apache CouchDB server database. 
The core feature of PouchDB is the ability to do a two-way replication with any CouchDB compliant endpoint. While PouchDB is pretty mature, it has some drawbacks that block it from being used in a client-side React Native application. For example it has to store all document states over time which is required to replicate with CouchDB. Also it is not easily possible to fully purge documents and so it will fill up disc space over time. A big problem is also that PouchDB is not really maintained and major bugs like wrong query results are not fixed anymore. The performance of PouchDB is a general bottleneck which is caused by how it has to store and fetch documents while being compliant with CouchDB. The only real reason to use PouchDB in React Native is when you want to replicate with a CouchDB or Couchbase server. Because PouchDB is based on an adapter system for storage, there are two options to use it with React Native: Either use the pouchdb-adapter-react-native-sqlite adapter or the pouchdb-adapter-asyncstorage adapter. Because the asyncstorage adapter is no longer maintained, it is recommended to use the native-sqlite adapter: First you have to install the adapter and other dependencies via npm install pouchdb-adapter-react-native-sqlite react-native-quick-sqlite react-native-quick-websql. 
Then you have to craft a custom PouchDB class that combines these plugins: import 'react-native-get-random-values'; import PouchDB from 'pouchdb-core'; import HttpPouch from 'pouchdb-adapter-http'; import replication from 'pouchdb-replication'; import mapreduce from 'pouchdb-mapreduce'; import SQLiteAdapterFactory from 'pouchdb-adapter-react-native-sqlite'; import WebSQLite from 'react-native-quick-websql'; const SQLiteAdapter = SQLiteAdapterFactory(WebSQLite); export default PouchDB.plugin(HttpPouch) .plugin(replication) .plugin(mapreduce) .plugin(SQLiteAdapter); This can then be used to create a PouchDB database instance which can store and query documents: const db = new PouchDB('mydb.db', { adapter: 'react-native-sqlite' }); ","version":"Next","tagName":"h3"},{"title":"RxDB","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#rxdb","content":" RxDB is a local-first, NoSQL-database for JavaScript applications. It is reactive which means that you can not only query the current state, but also subscribe to all state changes like the result of a query or even a single field of a document. This makes it easy to develop the kind of realtime, UI-based applications that you need in React Native. There are multiple ways to use RxDB in React Native: Use the memory RxStorage that stores the data inside of the JavaScript memory without persistence. Use the LokiJS RxStorage with the react-native-lokijs plugin or the loki-async-reference-adapter. Use the SQLite RxStorage with the react-native-quick-sqlite plugin. It is recommended to use the SQLite RxStorage because it has the best performance and is the easiest to set up. However it is part of the 👑 Premium Plugins which must be purchased, so to try out RxDB with React Native, you might want to use one of the other options. First you have to install all dependencies via npm install rxdb rxjs rxdb-premium react-native-quick-sqlite. 
Then you can assemble the RxStorage and create a database with it: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsQuickSQLite } from 'rxdb-premium/plugins/storage-sqlite'; import { open } from 'react-native-quick-sqlite'; // create database const myRxDatabase = await createRxDatabase({ // Instead of a simple name, you can use a folder path to determine the database location name: 'exampledb', multiInstance: false, // <- Set this to false when using RxDB in React Native storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsQuickSQLite(open) }) }); // create collections const collections = await myRxDatabase.addCollections({ humans: { /* ... */ } }); // insert document await collections.humans.insert({id: 'foo', name: 'bar'}); // run a query const result = await collections.humans.find({ selector: { name: 'bar' } }).exec(); // observe a query await collections.humans.find({ selector: { name: 'bar' } }).$.subscribe(result => {/* ... */}); Using the SQLite RxStorage is pretty fast, which is shown in the performance comparison. To learn more about using RxDB with React Native, you might want to check out this example project. Also RxDB provides many other features like encryption or compression. You can even store binary data as attachments or use RxDB as an ORM in React Native. ","version":"Next","tagName":"h3"},{"title":"WatermelonDB","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#watermelondb","content":" WatermelonDB is a reactive and asynchronous database for React and React Native apps. It is based on SQLite in React Native and LokiJS when it is used in the browser. It supports schemas, observable queries, migrations and relations. 
The schema layout is handled by TypeScript decorators and looks like this: class Post extends Model { @field('name') name; @field('body') body; @children('comments') comments; // a relation to another table @relation('users', 'author_id') author; } WatermelonDB also supports replication but the sync protocol is pretty complex because of how it resolves conflicts. I recommend watching this video to learn how the replication works. According to the roadmap, despite being essentially feature-complete, WatermelonDB is still on the 0.xx version and intends to switch to a 1.x.x version once it reaches a long-term stable API. ","version":"Next","tagName":"h3"},{"title":"Firebase / Firestore","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#firebase--firestore","content":" Firestore is a cloud based database technology that stores data on client devices and replicates it with the Firebase cloud service that is run by Google. It has many features like observability and authentication. The main lacking feature is the incomplete offline-first support: clients cannot start the application while being offline because the authentication does not work then. After they are authenticated, being offline is no longer a problem. Also using Firestore creates a vendor lock-in because it is not possible to replicate with a custom self-hosted backend. To get started with Firestore in React Native, it is recommended to use the React Native Firebase open-source project. ","version":"Next","tagName":"h3"},{"title":"Follow up","type":1,"pageTitle":"React Native Database","url":"/react-native-database.html#follow-up","content":" A good way to learn to use the RxDB database with React Native is to check out the RxDB React Native example and use that as a tutorial. If you haven't done so yet, you should start learning about RxDB with the Quickstart Tutorial. There is a followup list of other client side database alternatives that might work with React Native. 
","version":"Next","tagName":"h2"},{"title":"Replication with Firestore from Firebase","type":0,"sectionRef":"#","url":"/replication-firestore.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#usage","content":" First initialize your Firestore database like you would do without RxDB. import * as firebase from 'firebase/app'; import { getFirestore, collection } from 'firebase/firestore'; const projectId = 'my-project-id'; const app = firebase.initializeApp({ projectId, databaseURL: 'http://localhost:8080?ns=' + projectId, /* ... */ }); const firestoreDatabase = getFirestore(app); const firestoreCollection = collection(firestoreDatabase, 'my-collection-name'); Then you can start the replication by calling replicateFirestore() on your RxCollection. const replicationState = replicateFirestore( { collection: myRxCollection, firestore: { projectId, database: firestoreDatabase, collection: firestoreCollection }, pull: {}, push: {}, /** * Either do a live or a one-time replication * [default=true] */ live: true, /** * (optional) likely you should just use the default. * * In firestore it is not possible to read out * the internally used write timestamp of a document. * Even if we could read it out, it is not indexed which * is required for fetch 'changes-since-x'. * So instead we have to rely on a custom user defined field * that contains the server time which is set by firestore via serverTimestamp() * Notice that the serverTimestampField MUST NOT be part of the collections RxJsonSchema! * [default='serverTimestamp'] */ serverTimestampField: 'serverTimestamp' } ); To observe and cancel the replication, you can use any other methods from the ReplicationState like error$, cancel() and awaitInitialReplication(). 
","version":"Next","tagName":"h2"},{"title":"Handling deletes","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#handling-deletes","content":" RxDB requires you to never fully delete documents. This is needed to be able to replicate the deletion state of a document to other instances. The firestore replication will set a boolean _deleted field to all documents to indicate the deletion state. You can change this by setting a different deletedField in the sync options. ","version":"Next","tagName":"h2"},{"title":"Do not set enableIndexedDbPersistence()","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#do-not-set-enableindexeddbpersistence","content":" Firestore has the enableIndexedDbPersistence() feature which caches document states locally to IndexedDB. This is not needed when you replicate your Firestore with RxDB because RxDB itself will store the data locally already. ","version":"Next","tagName":"h2"},{"title":"Using the replication with an already existing Firestore Database State","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#using-the-replication-with-an-already-existing-firestore-database-state","content":" If you have not used RxDB before and you already have documents inside of your Firestore database, you have to manually set the _deleted field to false and the serverTimestamp to all existing documents. import { getDocs, query, serverTimestamp } from 'firebase/firestore'; const allDocsResult = await getDocs(query(firestoreCollection)); allDocsResult.forEach(doc => { doc.update({ _deleted: false, serverTimestamp: serverTimestamp() }) }); Also notice that if you do writes from non-RxDB applications, you have to keep these fields in sync. It is recommended to use the Firestore triggers to ensure that. 
","version":"Next","tagName":"h2"},{"title":"Filtered Replication","type":1,"pageTitle":"Replication with Firestore from Firebase","url":"/replication-firestore.html#filtered-replication","content":" You might need to replicate only a subset of your collection, either to or from Firestore. You can achieve this using push.filter and pull.filter options. const replicationState = replicateFirestore( { collection: myRxCollection, firestore: { projectId, database: firestoreDatabase, collection: firestoreCollection }, pull: { filter: [ where('ownerId', '==', userId) ] }, push: { filter: (item) => item.syncEnabled === true } } ); Keep in mind that you can not use inequality operators (<, <=, !=, not-in, >, or >=) in pull.filter since that would cause a conflict with ordering by serverTimestamp. ","version":"Next","tagName":"h2"},{"title":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","type":0,"sectionRef":"#","url":"/reactivity.html","content":"","keywords":"","version":"Next"},{"title":"Adding a custom reactivity factory (in angular projects)","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#adding-a-custom-reactivity-factory-in-angular-projects","content":" If you have an angular project, to get custom reactivity objects out of RxDB, you have to pass a RxReactivityFactory during database creation. The RxReactivityFactory has the fromObservable() method that creates your custom reacitvity object based on an observable and an initial value. 
For example to use signals in angular, you can use the angular toSignal function: import { RxReactivityFactory } from 'rxdb/plugins/core'; import { Signal, untracked } from '@angular/core'; import { toSignal } from '@angular/core/rxjs-interop'; export function createReactivityFactory(injector: Injector): RxReactivityFactory<Signal<any>> { return { fromObservable(observable$, initialValue: any) { return untracked(() => toSignal(observable$, { initialValue, injector, rejectErrors: true }) ); } }; } Then you can pass this factory when you create the RxDatabase: import { createRxDatabase } from 'rxdb/plugins/core'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: createReactivityFactory(inject(Injector)) }); An example of how signals are used in angular with RxDB, can be found at the RxDB Angular Example ","version":"Next","tagName":"h2"},{"title":"Adding reactivity for other Frameworks","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#adding-reactivity-for-other-frameworks","content":" When adding custom reactivity for other JavaScript frameworks or libraries, make sure to correctly unsubscribe whenever you call observable.subscribe() in the fromObservable() method. There are also some 👑 Premium Plugins that can be used with other (non-angular frameworks): ","version":"Next","tagName":"h2"},{"title":"Vue Shallow Refs","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#vue-shallow-refs","content":" // npm install vue --save import { VueRxReactivityFactory } from 'rxdb-premium/plugins/reactivity-vue'; import { createRxDatabase } from 'rxdb/plugins/core'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: VueRxReactivityFactory }); ","version":"Next","tagName":"h3"},{"title":"Preact Signals","type":1,"pageTitle":"Signals & Co. 
- Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#preact-signals","content":" // npm install @preact/signals-core --save import { PreactSignalsRxReactivityFactory } from 'rxdb-premium/plugins/reactivity-preact-signals'; import { createRxDatabase } from 'rxdb/plugins/core'; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: PreactSignalsRxReactivityFactory }); ","version":"Next","tagName":"h3"},{"title":"Accessing custom reactivity objects","type":1,"pageTitle":"Signals & Co. - Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#accessing-custom-reactivity-objects","content":" All observable data in RxDB is marked by the single dollar sign $ like RxCollection.$ for events or RxDocument.myField$ to get the observable for a document field. To make custom reactivity objects distinguishable, they are marked with double-dollar signs $$ instead. Here are some examples of how to get custom reactivity objects from RxDB-specific instances: // RxDocument const signal = myRxDocument.get$$('foobar'); // get signal that represents the document field 'foobar' const signal = myRxDocument.foobar$$; // same as above const signal = myRxDocument.$$; // get signal that represents the whole document over time const signal = myRxDocument.deleted$$; // get signal that represents the deleted state of the document // RxQuery const signal = collection.find().$$; // get signal that represents the query result set over time const signal = collection.findOne().$$; // get signal that represents the single result document over time // RxLocalDocument const signal = myRxLocalDocument.$$; // get signal that represents the whole local document state const signal = myRxLocalDocument.get$$('foobar'); // get signal that represents the foobar field ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"Signals & Co. 
- Custom reactivity adapters instead of RxJS Observables","url":"/reactivity.html#limitations","content":" Custom reactivity is in beta mode; it might have breaking changes without a major RxDB release. TypeScript typings are not fully implemented; make a PR if something is missing or not working for you. Currently not all observable things in RxDB are implemented to work with custom reactivity. Please make a PR if you have the need for any missing one. ","version":"Next","tagName":"h2"},{"title":"Replication with CouchDB","type":0,"sectionRef":"#","url":"/replication-couchdb.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#pros","content":" Faster initial replication. Works with any RxStorage, not just PouchDB. Easier conflict handling because conflicts are handled during replication and not afterwards. Does not have to store all document revisions on the client, only stores the newest version. ","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#cons","content":" Does not support the replication of attachments. Like all CouchDB replication plugins, this one is also limited to replicating 6 collections in parallel. Read this for workarounds ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#usage","content":" Start the replication via replicateCouchDB(). import { replicateCouchDB } from 'rxdb/plugins/replication-couchdb'; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, // url to the CouchDB endpoint (required) url: 'http://example.com/db/humans', /** * true for live replication, * false for a one-time replication. * [default=true] */ live: true, /** * A custom fetch() method can be provided * to add authentication or credentials. 
* Can be swapped out dynamically * by running 'replicationState.fetch = newFetchMethod;'. * (optional) */ fetch: myCustomFetchMethod, pull: { /** * Number of documents to be fetched in one HTTP request * (optional) */ batchSize: 60, /** * Custom modifier to mutate pulled documents * before storing them in RxDB. * (optional) */ modifier: docData => {/* ... */}, /** * Heartbeat time in milliseconds * for the long polling of the changestream. * @link https://docs.couchdb.org/en/3.2.2-docs/api/database/changes.html * (optional, default=60000) */ heartbeat: 60000 }, push: { /** * How many local changes to process at once. * (optional) */ batchSize: 60, /** * Custom modifier to mutate documents * before sending them to the CouchDB endpoint. * (optional) */ modifier: docData => {/* ... */} } } ); When you call replicateCouchDB() it returns a RxCouchDBReplicationState which can be used to subscribe to events, for debugging or other functions. It extends the RxReplicationState so any other method that can be used there can also be used on the CouchDB replication state. ","version":"Next","tagName":"h2"},{"title":"Conflict handling","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#conflict-handling","content":" When conflicts appear during replication, the conflictHandler of the RxCollection is used, the same as with the other replication plugins. Read more about conflict handling here. ","version":"Next","tagName":"h2"},{"title":"Auth example","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#auth-example","content":" Let's say for authentication you need to add a bearer token as an HTTP header to each request. You can achieve that by crafting a custom fetch() method that adds the header field. 
const myCustomFetch = (url, options) => { // flat clone the given options to not mutate the input const optionsWithAuth = Object.assign({}, options); // ensure the headers property exists if(!optionsWithAuth.headers) { optionsWithAuth.headers = {}; } // add bearer token to headers optionsWithAuth.headers['Authorization'] = 'Bearer S0VLU0UhIExFQ...'; // call the original fetch function with our custom options. return fetch( url, optionsWithAuth ); }; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, url: 'http://example.com/db/humans', /** * Add the custom fetch function here. */ fetch: myCustomFetch, pull: {}, push: {} } ); Also when your bearer token changes over time, you can set a new custom fetch method while the replication is running: replicationState.fetch = newCustomFetchMethod; Also there is a helper method getFetchWithCouchDBAuthorization() to create a fetch handler with authorization: import { replicateCouchDB, getFetchWithCouchDBAuthorization } from 'rxdb/plugins/replication-couchdb'; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, url: 'http://example.com/db/humans', /** * Add the custom fetch function here. */ fetch: getFetchWithCouchDBAuthorization('myUsername', 'myPassword'), pull: {}, push: {} } ); ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#limitations","content":" Since CouchDB only allows synchronization through HTTP/1.1 long-polling requests, there is a limitation of 6 active synchronization connections before the browser prevents sending any further request. This limitation is at the browser level, per tab, per domain (some browsers, especially older ones, might have a different limit, see here). 
Since this limitation is at the browser level there are several solutions: Use only a single database for all entities and set a "type" field for each of the documents. Create multiple subdomains for CouchDB and use a max of 6 active synchronizations (or less) for each. Use a proxy (ex: HAProxy) between the browser and CouchDB and configure it to use HTTP/2.0, since HTTP/2.0 multiplexes requests over a single connection and does not have this limitation. If you use nginx in front of your CouchDB, you can use these settings to enable http2-proxying to prevent the connection limit problem: server { http2 on; location /db { rewrite /db/(.*) /$1 break; proxy_pass http://127.0.0.1:5984; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Connection "keep_alive"; } } ","version":"Next","tagName":"h2"},{"title":"Known problems","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#known-problems","content":" ","version":"Next","tagName":"h2"},{"title":"Database missing","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#database-missing","content":" In contrast to PouchDB, this plugin does NOT automatically create missing CouchDB databases. If your CouchDB server does not have a database yet, you have to create it yourself by sending a PUT request to the database name URL: // create a 'humans' CouchDB database on the server const remoteDatabaseName = 'humans'; await fetch( 'http://example.com/db/' + remoteDatabaseName, { method: 'PUT' } ); ","version":"Next","tagName":"h3"},{"title":"React Native","type":1,"pageTitle":"Replication with CouchDB","url":"/replication-couchdb.html#react-native","content":" React Native does not have a global fetch method. 
You have to import a fetch method from the cross-fetch package: import crossFetch from 'cross-fetch'; const replicationState = replicateCouchDB( { replicationIdentifier: 'my-couchdb-replication', collection: myRxCollection, url: 'http://example.com/db/humans', fetch: crossFetch, pull: {}, push: {} } ); ","version":"Next","tagName":"h2"},{"title":"Replication with NATS","type":0,"sectionRef":"#","url":"/replication-nats.html","content":"","keywords":"","version":"Next"},{"title":"Precondition","type":1,"pageTitle":"Replication with NATS","url":"/replication-nats.html#precondition","content":" For the replication endpoint the NATS cluster must have JetStream enabled and store all message data as structured JSON. The easiest way to start a compatible NATS server is to use the official docker image: docker run --rm --name rxdb-nats -p 4222:4222 nats:2.9.17 -js ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Replication with NATS","url":"/replication-nats.html#usage","content":" To start the replication, import the replicateNats() method from the RxDB plugin and call it with the collection that must be replicated. The replication runs per RxCollection; you can replicate multiple RxCollections by starting a new replication for each of them. import { replicateNats } from 'rxdb/plugins/replication-nats'; const replicationState = replicateNats({ collection: myRxCollection, replicationIdentifier: 'my-nats-replication-collection-A', // in NATS, each stream needs a name streamName: 'stream-for-replication-A', /** * The subject prefix determines how the documents are stored in NATS. 
* For example the document with id 'alice' will have the subject 'foobar.alice' */ subjectPrefix: 'foobar', connection: { servers: 'localhost:4222' }, live: true, pull: { batchSize: 30 }, push: { batchSize: 30 } }); ","version":"Next","tagName":"h2"},{"title":"Handling deletes","type":1,"pageTitle":"Replication with NATS","url":"/replication-nats.html#handling-deletes","content":" RxDB requires you to never fully delete documents. This is needed to be able to replicate the deletion state of a document to other instances. The NATS replication will set a boolean _deleted field to all documents to indicate the deletion state. You can change this by setting a different deletedField in the sync options. ","version":"Next","tagName":"h2"},{"title":"The RxDB Plugin replication-p2p has been renamed to replication-webrtc","type":0,"sectionRef":"#","url":"/replication-p2p.html","content":"The RxDB Plugin replication-p2p has been renamed to replication-webrtc The new documentation page has been moved to here","keywords":"","version":"Next"},{"title":"RxDB Server Replication","type":0,"sectionRef":"#","url":"/replication-server","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server#usage","content":" The replication server plugin is imported from the rxdb-server npm package. Then you start the replication with a given collection and endpoint url by calling replicateServer(). import { replicateServer } from 'rxdb-server/plugins/replication-server'; const replicationState = await replicateServer({ collection: usersCollection, replicationIdentifier: 'my-server-replication', url: 'http://localhost:80/users/0', // endpoint url with the servers collection schema version at the end headers: { Authorization: 'Bearer S0VLU0UhI...' 
}, push: {}, pull: {}, live: true }); ","version":"Next","tagName":"h2"},{"title":"outdatedClient$","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server#outdatedclient","content":" When you update your schema at the server and run a migration, you end up with a different replication URL that has a new schema version number at the end. Your clients might still be running an old version of your application that will no longer be compatible with the endpoint. Therefore when the client tries to call a server endpoint with an outdated schema version, the outdatedClient$ observable emits to tell your client that the application must be updated. With that event you can tell the client to update the application. In a browser application you might want to just reload the page on that event: replicationState.outdatedClient$.subscribe(() => { location.reload(); }); ","version":"Next","tagName":"h2"},{"title":"unauthorized$","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server#unauthorized","content":" When your client's auth data is not valid (or no longer valid), the server will no longer accept any requests from your client and informs the client that the auth headers must be updated. The unauthorized$ observable will emit and expects you to update the headers accordingly so that following requests will be accepted again. replicationState.unauthorized$.subscribe(() => { replicationState.setHeaders({ Authorization: 'Bearer S0VLU0UhI...' }); }); ","version":"Next","tagName":"h2"},{"title":"forbidden$","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server#forbidden","content":" When your client misbehaves in any way, like updating non-allowed values or changing documents that it is not allowed to change, the server will drop the connection and the replication state will emit on the forbidden$ observable. It will also automatically stop the replication so that your client does not accidentally DoS-attack the server. 
replicationState.forbidden$.subscribe(() => { console.log('Client is behaving wrong'); }); ","version":"Next","tagName":"h2"},{"title":"Custom EventSource implementation","type":1,"pageTitle":"RxDB Server Replication","url":"/replication-server#custom-eventsource-implementation","content":" For the server-sent events, the eventsource npm package is used instead of the native EventSource API. We need this because the native browser API does not support sending headers with the request, which is required by the server to parse the auth data. If the eventsource package does not work for you, you can set your own implementation when creating the replication. const replicationState = await replicateServer({ /* ... */ eventSource: MyEventSourceConstructor /* ... */ }); ","version":"Next","tagName":"h2"},{"title":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","type":0,"sectionRef":"#","url":"/replication-webrtc.html","content":"","keywords":"","version":"Next"},{"title":"Understanding P2P Replication","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#understanding-p2p-replication","content":" P2P replication is a paradigm shift in data synchronization. Instead of relying on a central server to manage data transfers between clients, it leverages the power of direct peer-to-peer connections. This approach offers several advantages: Reduced Latency: With no intermediary server, data can move directly between clients, significantly reducing latency and improving real-time interactions. Improved Scalability: P2P networks can easily scale as more clients join, without putting additional load on a central server. Enhanced Privacy: Data remains within the client devices, reducing privacy concerns associated with centralized data storage. 
","version":"Next","tagName":"h2"},{"title":"Using the RxDB WebRTC Replication Plugin","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#using-the-rxdb-webrtc-replication-plugin","content":" Before you use this plugin, make sure that you understand how WebRTC works. First you have to add the plugin, then you can call RxCollection.syncWebRTC() to start the replication. As options you have to provide a topic and a connection handler function that implements the P2PConnectionHandlerCreator interface. As default you should start with the getConnectionHandlerSimplePeer method which uses the simple-peer library. In difference to the other replication plugins, the WebRTC replication returns a replicationPool instead of a single RxReplicationState. The replicationPool contains all replication states of the connected peers in the P2P network. import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const replicationPool = await replicateWebRTC( { collection: myRxCollection, // The topic is like a 'room-name'. All clients with the same topic // will replicate with each other. In most cases you want to use // a different topic string per user. topic: 'my-users-pool', /** * You need a collection handler to be able to create WebRTC connections. * Here we use the simple peer handler which uses the 'simple-peer' npm library. * To learn how to create a custom connection handler, read the source code, * it is pretty simple. */ connectionHandlerCreator: getConnectionHandlerSimplePeer({ // Set the signaling server url. // You can use the server provided by RxDB for tryouts, // but in production you should use your own server instead. signalingServerUrl: 'wss://signaling.rxdb.info/', // only in Node.js, we need the wrtc library // because Node.js does not contain the WebRTC API. 
wrtc: require('node-datachannel/polyfill'), // only in Node.js, we need the WebSocket library // because Node.js does not contain the WebSocket API. webSocketConstructor: require('ws').WebSocket }), pull: {}, push: {} } ); replicationPool.error$.subscribe(err => { /* ... */ }); replicationPool.cancel(); ","version":"Next","tagName":"h2"},{"title":"Polyfill the WebSocket and WebRTC API in Node.js","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#polyfill-the-websocket-and-webrtc-api-in-nodejs","content":" While all modern browsers support the WebRTC and WebSocket APIs, they are missing in Node.js, which will throw the error No WebRTC support: Specify opts.wrtc option in this environment. Therefore you have to polyfill them with a compatible WebRTC and WebSocket polyfill. It is recommended to use the node-datachannel package for WebRTC, which does not come with RxDB but has to be installed first via npm install node-datachannel --save. For the WebSocket API use the ws package that is included with RxDB. import nodeDatachannelPolyfill from 'node-datachannel/polyfill'; import { WebSocket } from 'ws'; const replicationPool = await replicateWebRTC( { /* ... */ connectionHandlerCreator: getConnectionHandlerSimplePeer({ signalingServerUrl: 'wss://example.com:8080', wrtc: nodeDatachannelPolyfill, webSocketConstructor: WebSocket }), pull: {}, push: {} /* ... */ } ); ","version":"Next","tagName":"h3"},{"title":"Live replications","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#live-replications","content":" The WebRTC replication is always live because there cannot be a one-time sync when it is always possible to have new peers that join the connection pool. Therefore you cannot set the live: false option like in the other replication plugins. 
","version":"Next","tagName":"h2"},{"title":"Signaling Server","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#signaling-server","content":" For P2P replication to work with the RxDB WebRTC Replication Plugin, a signaling server is required. The signaling server helps peers discover each other and establish connections. RxDB ships with a default signaling server that can be used with the simple-peer connection handler. This server is made for demonstration purposes and tryouts. It is not reliable and might be offline at any time. In production you must always use your own signaling server instead! Creating a basic signaling server is straightforward. The provided example uses 'socket.io' for WebSocket communication. However, in production, you'd want to create a more robust signaling server with authentication and additional logic to suit your application's needs. Here is a quick example implementation of a signaling server that can be used with the connection handler from getConnectionHandlerSimplePeer(): import { startSignalingServerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const serverState = await startSignalingServerSimplePeer({ port: 8080 // <- port }); For custom signaling servers with more complex logic, you can check the source code of the default one. ","version":"Next","tagName":"h2"},{"title":"Peer Validation","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#peer-validation","content":" By default the replication will replicate with every peer the signaling server tells them about. You can prevent invalid peers from replication by passing a custom isPeerValid() function that either returns true on valid peers and false on invalid peers. const replicationPool = await replicateWebRTC( { /* ... */ isPeerValid: async (peer) => { return true; } pull: {}, push: {} /* ... 
*/ } ); ","version":"Next","tagName":"h2"},{"title":"Conflict detection in WebRTC replication","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#conflict-detection-in-webrtc-replication","content":" RxDB's conflict handling works by detecting and resolving conflicts that may arise when multiple clients in a decentralized database system attempt to modify the same data concurrently. A custom conflict handler can be set up, which is a plain JavaScript function. The conflict handler is run on each replicated document write and resolves the conflict if required. Find out more about RxDB conflict handling here ","version":"Next","tagName":"h2"},{"title":"SimplePeer requires to have process.nextTick()","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#simplepeer-requires-to-have-processnexttick","content":" In the browser you might not have a process variable or process.nextTick() method. But the simple peer uses that so you have to polyfill it. In webpack you can use the process/browser package to polyfill it: const plugins = [ /* ... */ new webpack.ProvidePlugin({ process: 'process/browser', }) /* ... */ ]; In angular or other libraries you can add the polyfill manually: window.process = { nextTick: (fn, ...args) => setTimeout(() => fn(...args)), }; ","version":"Next","tagName":"h2"},{"title":"Storing replicated data encrypted on client device","type":1,"pageTitle":"Peer-to-Peer (P2P) WebRTC Replication with the RxDB JavaScript Database","url":"/replication-webrtc.html#storing-replicated-data-encrypted-on-client-device","content":" Storing replicated data encrypted on client devices using the RxDB Encryption Plugin is a pivotal step towards bolstering data security and user privacy. 
The WebRTC replication plugin seamlessly integrates with the RxDB encryption plugins, providing a robust solution for encrypting sensitive information before it's stored locally. By doing so, it ensures that even if unauthorized access to the device occurs, the data remains protected and unintelligible without the encryption key (or password). This approach is particularly vital in scenarios where user-generated content or confidential data is replicated across devices, as it empowers users with control over their own data while adhering to stringent security standards. Read more about the encryption plugins here. ","version":"Next","tagName":"h2"},{"title":"Websocket Replication","type":0,"sectionRef":"#","url":"/replication-websocket.html","content":"","keywords":"","version":"Next"},{"title":"Starting the Websocket Server","type":1,"pageTitle":"Websocket Replication","url":"/replication-websocket.html#starting-the-websocket-server","content":" import { createRxDatabase } from 'rxdb'; import { startWebsocketServer } from 'rxdb/plugins/replication-websocket'; // create a RxDatabase like normal const myDatabase = await createRxDatabase({/* ... */}); // start a websocket server const serverState = await startWebsocketServer({ database: myDatabase, port: 1337, path: '/socket' }); // stop the server await serverState.close(); ","version":"Next","tagName":"h2"},{"title":"Connect to the Websocket Server","type":1,"pageTitle":"Websocket Replication","url":"/replication-websocket.html#connect-to-the-websocket-server","content":" The replication has to be started once for each collection that you want to replicate. import { replicateWithWebsocketServer } from 'rxdb/plugins/replication-websocket'; // start the replication const replicationState = await replicateWithWebsocketServer({ /** * To make the replication work, * the client collection name must be equal * to the server collection name. 
*/ collection: myRxCollection, url: 'ws://localhost:1337/socket' }); // stop the replication await replicationState.cancel(); ","version":"Next","tagName":"h2"},{"title":"Customize","type":1,"pageTitle":"Websocket Replication","url":"/replication-websocket.html#customize","content":" We use the ws npm library, so you can use all optional configuration provided by it. This is especially important to improve performance by opting in of some optional settings. ","version":"Next","tagName":"h2"},{"title":"HTTP Replication from a custom server to RxDB clients","type":0,"sectionRef":"#","url":"/replication-http.html","content":"","keywords":"","version":"Next"},{"title":"Setup","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#setup","content":" RxDB does not have a specific HTTP-replication plugin because the replication primitives plugin is simple enough to start a HTTP replication on top of it. We import the replicateRxCollection function and start the replication from there for a single RxCollection. // > client.ts import { replicateRxCollection } from 'rxdb/plugins/replication'; const replicationState = await replicateRxCollection({ collection: myRxCollection, replicationIdentifier: 'my-http-replication', push: { /* add settings from below */ }, pull: { /* add settings from below */ } }); On the server side, we start an express server that has a MongoDB connection and serves the HTTP requests of the client. // > server.ts import { MongoClient } from 'mongodb'; import express from 'express'; const mongoClient = new MongoClient('mongodb://localhost:27017/'); const mongoConnection = await mongoClient.connect(); const mongoDatabase = mongoConnection.db('myDatabase'); const mongoCollection = await mongoDatabase.collection('myDocs'); const app = express(); app.use(express.json()); /* ... 
add routes from below */ app.listen(80, () => { console.log(`Example app listening on port 80`) }); ","version":"Next","tagName":"h2"},{"title":"Pull from the server to the client","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#pull-from-the-server-to-the-client","content":" First we need to implement the pull handler. This is used by the RxDB replication to fetch all document writes that happened after a given checkpoint. The checkpoint format is not determined by RxDB; instead, the server can use any type of checkpoint that can be used to iterate across document writes. Here we will just use a unix timestamp updatedAt and a string id. On the client we add the pull.handler to the replication setting. The handler requests the correct server URL and fetches the documents. // > client.ts const replicationState = await replicateRxCollection({ /* ... */ pull: { async handler(checkpointOrNull, batchSize){ const updatedAt = checkpointOrNull ? checkpointOrNull.updatedAt : 0; const id = checkpointOrNull ? checkpointOrNull.id : ''; const response = await fetch(`https://localhost/pull?updatedAt=${updatedAt}&id=${id}&batchSize=${batchSize}`); const data = await response.json(); return { documents: data.documents, checkpoint: data.checkpoint }; } } /* ... */ }); The server responds with an array of document data based on the given checkpoint and a new checkpoint. Also the server has to respect the batchSize so that RxDB knows when there are no more new documents and the server returns a non-full array. // > server.ts import { lastOfArray } from 'rxdb/plugins/core'; app.get('/pull', async (req, res) => { const id = req.query.id; const updatedAt = parseFloat(req.query.updatedAt); const documents = await mongoCollection.find({ $or: [ /** * Notice that we have to compare the updatedAt AND the id field * because the updatedAt field is not unique and when two documents have * the same updatedAt, we can still "sort" them by their id. 
*/ { updatedAt: { $gt: updatedAt } }, { updatedAt: { $eq: updatedAt }, id: { $gt: id } } ] }).limit(parseInt(req.query.batchSize, 10)).toArray(); const newCheckpoint = documents.length === 0 ? { id, updatedAt } : { id: lastOfArray(documents).id, updatedAt: lastOfArray(documents).updatedAt }; res.setHeader('Content-Type', 'application/json'); res.end(JSON.stringify({ documents, checkpoint: newCheckpoint })); }); ","version":"Next","tagName":"h2"},{"title":"Push from the Client to the Server","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#push-from-the-client-to-the-server","content":" To send client-side writes to the server, we have to implement the push.handler. It gets an array of change rows as input and has to return only the conflicting documents that could not be written to the server. Each change row contains a newDocumentState and an optional assumedMasterState. // > client.ts const replicationState = await replicateRxCollection({ /* ... */ push: { async handler(changeRows){ const rawResponse = await fetch('https://localhost/push', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify(changeRows) }); const conflictsArray = await rawResponse.json(); return conflictsArray; } } /* ... */ }); On the server we first have to detect if the assumedMasterState is correct for each row. If yes, we have to write the new document state to the database, otherwise we have to return the "real" master state in the conflict array. note For simplicity in this tutorial, we do not use transactions. In reality you should run the full push function inside of a MongoDB transaction to ensure that no other process can mix up the document state while the writes are processed. Also you should call batch operations on MongoDB instead of running the operations for each change row. 
The server also creates an event that is emitted to the pullStream$ which is later used in the pull.stream$. // > server.ts import { lastOfArray } from 'rxdb/plugins/core'; import { Subject } from 'rxjs'; // used in the pull.stream$ below let lastEventId = 0; const pullStream$ = new Subject(); app.post('/push', async (req, res) => { const changeRows = req.body; const conflicts = []; const event = { id: lastEventId++, documents: [], checkpoint: null }; for(const changeRow of changeRows){ const realMasterState = await mongoCollection.findOne({id: changeRow.newDocumentState.id}); if( realMasterState && !changeRow.assumedMasterState || ( realMasterState && changeRow.assumedMasterState && /* * For simplicity we detect conflicts on the server by only comparing the updatedAt value. * In reality you might want to do a more complex check or do a deep-equal comparison. */ realMasterState.updatedAt !== changeRow.assumedMasterState.updatedAt ) ) { // we have a conflict conflicts.push(realMasterState); } else { // no conflict -> write the document await mongoCollection.replaceOne( {id: changeRow.newDocumentState.id}, changeRow.newDocumentState, {upsert: true} ); event.documents.push(changeRow.newDocumentState); event.checkpoint = { id: changeRow.newDocumentState.id, updatedAt: changeRow.newDocumentState.updatedAt }; } } if(event.documents.length > 0){ pullStream$.next(event); } res.setHeader('Content-Type', 'application/json'); res.end(JSON.stringify(conflicts)); }); ","version":"Next","tagName":"h2"},{"title":"pullStream$ for ongoing changes","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#pullstream-for-ongoing-changes","content":" While the normal pull handler is used when the replication is in iteration mode, we also need a stream of ongoing changes when the replication is in event observation mode. The pull.stream$ is implemented with server-sent events that are sent from the server to the client. 
The client connects to a URL and receives server-sent events that contain all ongoing writes. // > client.ts import { Subject } from 'rxjs'; const myPullStream$ = new Subject(); const eventSource = new EventSource('http://localhost/pullStream', { withCredentials: true }); eventSource.onmessage = event => { const eventData = JSON.parse(event.data); myPullStream$.next({ documents: eventData.documents, checkpoint: eventData.checkpoint }); }; const replicationState = await replicateRxCollection({ /* ... */ pull: { /* ... */ stream$: myPullStream$.asObservable() } /* ... */ }); On the server we have to implement the pullStream route and emit the events. We use the pullStream$ observable from above to fetch all ongoing events and send them to the client. // > server.ts app.get('/pullStream', (req, res) => { res.writeHead(200, { 'Content-Type': 'text/event-stream', 'Connection': 'keep-alive', 'Cache-Control': 'no-cache' }); const subscription = pullStream$.subscribe(event => res.write('data: ' + JSON.stringify(event) + '\\n\\n')); req.on('close', () => subscription.unsubscribe()); }); ","version":"Next","tagName":"h2"},{"title":"pullStream$ RESYNC flag","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#pullstream-resync-flag","content":" In case the client loses the connection, the EventSource will automatically reconnect but there might have been some changes that were missed in the meantime. The replication has to be informed that it might have missed events by emitting a RESYNC flag from the pull.stream$. The replication will then catch up by switching to the iteration mode until it is in sync with the server again. // > client.ts eventSource.onerror = () => myPullStream$.next('RESYNC'); The purpose of the RESYNC flag is to tell the client that "something might have changed" and then the client can react to that information without having to run operations in an interval. 
If your backend is not capable of emitting the actual documents and checkpoint in the pull stream, you could just map all events to the RESYNC flag. This would make the replication work with a slight performance drawback: // > client.ts import { Subject } from 'rxjs'; const myPullStream$ = new Subject(); const eventSource = new EventSource('http://localhost/pullStream', { withCredentials: true }); eventSource.onmessage = () => myPullStream$.next('RESYNC'); const replicationState = await replicateRxCollection({ pull: { stream$: myPullStream$.asObservable() } }); ","version":"Next","tagName":"h3"},{"title":"Missing implementation details","type":1,"pageTitle":"HTTP Replication from a custom server to RxDB clients","url":"/replication-http.html#missing-implementation-details","content":" Here we only covered the basics of an HTTP replication between RxDB clients and a server. We did not cover the following aspects of the implementation: Authentication: To authenticate the client on the server, you might want to send authentication headers with the HTTP requests. Skip events on the pull.stream$ for the client that caused the changes, to improve performance. ","version":"Next","tagName":"h2"},{"title":"Replication with GraphQL","type":0,"sectionRef":"#","url":"/replication-graphql.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#usage","content":" Before you use the GraphQL replication, make sure you've learned how the RxDB replication works. ","version":"Next","tagName":"h2"},{"title":"Creating a compatible GraphQL Server","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#creating-a-compatible-graphql-server","content":" At the server-side, there must exist an endpoint which returns newer rows when the last checkpoint is used as input. 
For example, let's say you create a Query pullHuman which returns a list of document writes that happened after the given checkpoint. For the push-replication, you also need a Mutation pushHuman which lets RxDB update data of documents by sending the previous document state and the new client document state. Also for being able to stream all ongoing events, we need a Subscription called streamHuman. input HumanInput { id: ID!, name: String!, lastName: String!, updatedAt: Float!, deleted: Boolean! } type Human { id: ID!, name: String!, lastName: String!, updatedAt: Float!, deleted: Boolean! } input Checkpoint { id: String!, updatedAt: Float! } type HumanPullBulk { documents: [Human]! checkpoint: Checkpoint } type Query { pullHuman(checkpoint: Checkpoint, limit: Int!): HumanPullBulk! } input HumanInputPushRow { assumedMasterState: HumanInput newDocumentState: HumanInput! } type Mutation { # Returns a list of all conflicts # If no document write caused a conflict, return an empty list. pushHuman(rows: [HumanInputPushRow!]): [Human] } # headers are used to authenticate the subscriptions # over websockets. input Headers { AUTH_TOKEN: String! } type Subscription { streamHuman(headers: Headers): HumanPullBulk! } The GraphQL resolver for the pullHuman would then look like: const rootValue = { pullHuman: args => { const minId = args.checkpoint ? args.checkpoint.id : ''; const minUpdatedAt = args.checkpoint ? 
args.checkpoint.updatedAt : 0; // sorted by updatedAt first and the id as second const sortedDocuments = documents.sort((a, b) => { if (a.updatedAt > b.updatedAt) return 1; if (a.updatedAt < b.updatedAt) return -1; if (a.updatedAt === b.updatedAt) { if (a.id > b.id) return 1; if (a.id < b.id) return -1; else return 0; } }); // only return documents newer than the input document const filterForMinUpdatedAtAndId = sortedDocuments.filter(doc => { if (doc.updatedAt < minUpdatedAt) return false; if (doc.updatedAt > minUpdatedAt) return true; if (doc.updatedAt === minUpdatedAt) { // if updatedAt is equal, compare by id if (doc.id > minId) return true; else return false; } }); // only return some documents in one batch const limitedDocs = filterForMinUpdatedAtAndId.slice(0, args.limit); // use the last document for the checkpoint; keep the previous checkpoint if no documents were returned const lastDoc = limitedDocs[limitedDocs.length - 1]; const retCheckpoint = lastDoc ? { id: lastDoc.id, updatedAt: lastDoc.updatedAt } : args.checkpoint; return { documents: limitedDocs, checkpoint: retCheckpoint } } } For examples for the other resolvers, consult the GraphQL Example Project. ","version":"Next","tagName":"h3"},{"title":"RxDB Client","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#rxdb-client","content":" Pull replication For the pull-replication, you first need a pullQueryBuilder. This is a function that gets the last replication checkpoint and a limit as input and returns an object with a GraphQL-query and its variables (or a promise that resolves to the same object). RxDB will use the query builder to construct what is later sent to the GraphQL endpoint. const pullQueryBuilder = (checkpoint, limit) => { /** * The first pull does not have a checkpoint * so we fill it up with defaults */ if (!checkpoint) { checkpoint = { id: '', updatedAt: 0 }; } const query = `query PullHuman($checkpoint: Checkpoint, $limit: Int!) 
{ pullHuman(checkpoint: $checkpoint, limit: $limit) { documents { id name lastName updatedAt deleted } checkpoint { id updatedAt } } }`; return { query, operationName: 'PullHuman', variables: { checkpoint, limit } }; }; With the queryBuilder, you can then set up the pull-replication. import { replicateGraphQL } from 'rxdb/plugins/replication-graphql'; const replicationState = replicateGraphQL( { collection: myRxCollection, // urls to the GraphQL endpoints url: { http: 'http://example.com/graphql' }, pull: { queryBuilder: pullQueryBuilder, // the queryBuilder from above modifier: doc => doc, // (optional) modifies all pulled documents before they are handled by RxDB dataPath: undefined, // (optional) specifies the object path to access the document(s). Otherwise, the first result of the response data is used. /** * Amount of documents that the remote will send in one request. * If the response contains less than [batchSize] documents, * RxDB will assume there are no more changes on the backend * that are not replicated. * This value is the same as the limit in the pullHuman() schema. * [default=100] */ batchSize: 50 }, // headers which will be used in http requests against the server. headers: { Authorization: 'Bearer abcde...' }, /** * Options that have been inherited from the RxReplication */ deletedField: 'deleted', live: true, retryTime: 1000 * 5, waitForLeadership: true, autoStart: true, } ); Push replication For the push-replication, you also need a queryBuilder. Here, the builder receives the changed rows as input which have to be sent to the server. It also returns a GraphQL-Query and its data. const pushQueryBuilder = rows => { const query = ` mutation PushHuman($rows: [HumanInputPushRow!]) { pushHuman(rows: $rows) { id name lastName updatedAt deleted } } `; const variables = { rows }; return { query, operationName: 'PushHuman', variables }; }; With the queryBuilder, you can then set up the push-replication. 
const replicationState = replicateGraphQL( { collection: myRxCollection, // urls to the GraphQL endpoints url: { http: 'http://example.com/graphql' }, push: { queryBuilder: pushQueryBuilder, // the queryBuilder from above /** * batchSize (optional) * Amount of documents that will be pushed to the server in a single request. */ batchSize: 5, /** * modifier (optional) * Modifies all pushed documents before they are sent to the GraphQL endpoint. * Returning null will skip the document. */ modifier: doc => doc }, headers: { Authorization: 'Bearer abcde...' }, pull: { /* ... */ }, /* ... */ } ); Pull Stream To create a realtime replication, you need to create a pull stream that pulls ongoing writes from the server. The pull stream gets the headers of the RxReplicationState as input, so that it can be authenticated on the backend. const pullStreamQueryBuilder = (headers) => { const query = `subscription onStream($headers: Headers) { streamHuman(headers: $headers) { documents { id, name, lastName, updatedAt, deleted }, checkpoint { id updatedAt } } }`; return { query, variables: { headers } }; }; With the pullStreamQueryBuilder you can then start a realtime replication. const replicationState = replicateGraphQL( { collection: myRxCollection, // urls to the GraphQL endpoints url: { http: 'http://example.com/graphql', ws: 'ws://example.com/subscriptions' // <- The websocket has to use a different url. }, push: { batchSize: 100, queryBuilder: pushQueryBuilder }, headers: { Authorization: 'Bearer abcde...' }, pull: { batchSize: 100, queryBuilder: pullQueryBuilder, streamQueryBuilder: pullStreamQueryBuilder, includeWsHeaders: false, // Includes headers as connection parameter to Websocket. }, deletedField: 'deleted' } ); note If it is not possible to create a websocket server on your backend, you can use any other method to pull the ongoing events out of the backend and then send them into RxReplicationState.emitEvent(). 
","version":"Next","tagName":"h3"},{"title":"Transforming null to undefined in optional fields","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#transforming-null-to-undefined-in-optional-fields","content":" GraphQL fills up non-existent optional values with null while RxDB requires them to be undefined. Therefore, if your schema contains optional properties, you have to transform the pulled data to replace null with undefined: const replicationState: RxGraphQLReplicationState<RxDocType> = replicateGraphQL( { collection: myRxCollection, url: {/* ... */}, headers: {/* ... */}, push: {/* ... */}, pull: { queryBuilder: pullQueryBuilder, modifier: (doc => { // We have to remove optional non-existent field values // they are set as null by GraphQL but should be undefined Object.entries(doc).forEach(([k, v]) => { if (v === null) { delete doc[k]; } }); return doc; }) }, /* ... */ } ); ","version":"Next","tagName":"h3"},{"title":"pull.responseModifier","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#pullresponsemodifier","content":" With the pull.responseModifier you can modify the whole response from the GraphQL endpoint before it is processed by RxDB. For example if your endpoint is not capable of returning a valid checkpoint, but instead only returns the plain document array, you can use the responseModifier to aggregate the checkpoint from the returned documents. import { lastOfArray } from 'rxdb'; const replicationState: RxGraphQLReplicationState<RxDocType> = replicateGraphQL( { collection: myRxCollection, url: {/* ... */}, headers: {/* ... */}, push: {/* ... 
*/}, pull: { responseModifier: async function( plainResponse, // the exact response that was returned from the server origin, // either 'handler' if plainResponse came from the pull.handler, or 'stream' if it came from the pull.stream requestCheckpoint // if origin==='handler', the requestCheckpoint contains the checkpoint that was sent to the backend ) { /** * In this example we aggregate the checkpoint from the documents array * that was returned from the graphql endpoint. */ const docs = plainResponse; return { documents: docs, checkpoint: docs.length === 0 ? requestCheckpoint : { name: lastOfArray(docs).name, updatedAt: lastOfArray(docs).updatedAt } }; } }, /* ... */ } ); ","version":"Next","tagName":"h3"},{"title":"push.responseModifier","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#pushresponsemodifier","content":" It's also possible to modify the response of a push mutation. For example if your server returns more than just the conflicting docs: type PushResponse { conflicts: [Human] conflictMessages: [ReplicationConflictMessage] } type Mutation { # Returns a PushResponse type that contains the conflicts along with other information pushHuman(rows: [HumanInputPushRow!]): PushResponse! } const replicationState: RxGraphQLReplicationState<RxDocType> = replicateGraphQL( { collection: myRxCollection, url: {/* ... */}, headers: {/* ... */}, push: { responseModifier: async function (plainResponse) { /** * In this example we aggregate the conflicting documents from a response object */ return plainResponse.conflicts; }, }, pull: {/* ... */}, /* ... */ } ); Helper Functions RxDB provides the helper functions graphQLSchemaFromRxSchema(), pullQueryBuilderFromRxSchema(), pullStreamBuilderFromRxSchema() and pushQueryBuilderFromRxSchema() that can be used to generate handlers and schemas from the RxJsonSchema. To learn how to use them, please inspect the GraphQL Example. 
","version":"Next","tagName":"h3"},{"title":"RxGraphQLReplicationState","type":1,"pageTitle":"Replication with GraphQL","url":"/replication-graphql.html#rxgraphqlreplicationstate","content":" When you call myCollection.syncGraphQL() it returns a RxGraphQLReplicationState which can be used to subscribe to events, for debugging or other functions. It extends the RxReplicationState with some GraphQL specific methods. .setHeaders() Changes the headers for the replication after it has been set up. replicationState.setHeaders({ Authorization: `...` }); Sending Cookies The underlying fetch framework uses a same-origin policy for credentials by default. That means cookies and session data are only shared if your backend and frontend run on the same domain and port. Pass the credentials parameter to include cookies in requests to servers from different origins via: replicationState.setCredentials('include'); or directly pass it in the syncGraphQL: replicateGraphQL( { collection: myRxCollection, /* ... */ credentials: 'include', /* ... */ } ); See the fetch spec for more information about available options. note To play around, check out the full example of the RxDB GraphQL replication with server and client ","version":"Next","tagName":"h2"},{"title":"Attachments","type":0,"sectionRef":"#","url":"/rx-attachment.html","content":"","keywords":"","version":"Next"},{"title":"Add the attachments plugin","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#add-the-attachments-plugin","content":" To enable the attachments, you have to add the attachments plugin. 
import { addRxPlugin } from 'rxdb'; import { RxDBAttachmentsPlugin } from 'rxdb/plugins/attachments'; addRxPlugin(RxDBAttachmentsPlugin); ","version":"Next","tagName":"h2"},{"title":"Enable attachments in the schema","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#enable-attachments-in-the-schema","content":" Before you can use attachments, you have to ensure that the attachments-object is set in the schema of your RxCollection. const mySchema = { version: 0, type: 'object', properties: { // . // . // . }, attachments: { encrypted: true // if true, the attachment-data will be encrypted with the db-password } }; const myCollection = await myDatabase.addCollections({ humans: { schema: mySchema } }); ","version":"Next","tagName":"h2"},{"title":"putAttachment()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#putattachment","content":" Adds an attachment to a RxDocument. Returns a Promise with the new attachment. import { createBlob } from 'rxdb'; const attachment = await myDocument.putAttachment( { id: 'cat.txt', // (string) name of the attachment data: createBlob('meowmeow', 'text/plain'), // (string|Blob) data of the attachment type: 'text/plain' // (string) type of the attachment-data like 'image/jpeg' } ); ","version":"Next","tagName":"h2"},{"title":"getAttachment()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#getattachment","content":" Returns an RxAttachment by its id. Returns null when the attachment does not exist. const attachment = myDocument.getAttachment('cat.jpg'); ","version":"Next","tagName":"h2"},{"title":"allAttachments()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#allattachments","content":" Returns an array of all attachments of the RxDocument. 
const attachments = myDocument.allAttachments(); ","version":"Next","tagName":"h2"},{"title":"allAttachments$","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#allattachments-1","content":" Gets an Observable which emits a stream of all attachments from the document. Re-emits each time an attachment gets added or removed from the RxDocument. const all = []; myDocument.allAttachments$.subscribe( attachments => all = attachments ); ","version":"Next","tagName":"h2"},{"title":"RxAttachment","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#rxattachment","content":" The attachments of RxDB are represented by the type RxAttachment which has the following attributes/methods. ","version":"Next","tagName":"h2"},{"title":"doc","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#doc","content":" The RxDocument which the attachment is assigned to. ","version":"Next","tagName":"h3"},{"title":"id","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#id","content":" The id as string of the attachment. ","version":"Next","tagName":"h3"},{"title":"type","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#type","content":" The type as string of the attachment. ","version":"Next","tagName":"h3"},{"title":"length","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#length","content":" The length of the data of the attachment as number. ","version":"Next","tagName":"h3"},{"title":"digest","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#digest","content":" The hash of the attachments data as string. note The digest is NOT calculated by RxDB, instead it is calculated by the RxStorage. The only guarantee is that the digest will change when the attachments data changes. ","version":"Next","tagName":"h3"},{"title":"rev","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#rev","content":" The revision-number of the attachment as number. 
","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#remove","content":" Removes the attachment. Returns a Promise that resolves when done. const attachment = myDocument.getAttachment('cat.jpg'); await attachment.remove(); ","version":"Next","tagName":"h3"},{"title":"getData()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#getdata","content":" Returns a Promise which resolves the attachment's data as Blob. const attachment = myDocument.getAttachment('cat.jpg'); const blob = await attachment.getData(); ","version":"Next","tagName":"h2"},{"title":"getStringData()","type":1,"pageTitle":"Attachments","url":"/rx-attachment.html#getstringdata","content":" Returns a Promise which resolves the attachment's data as string. const attachment = myDocument.getAttachment('cat.jpg'); const data = await attachment.getStringData(); Attachment compression Storing many attachments can be a problem when the disk space of the device is exceeded. Therefore it can make sense to compress the attachments before storing them in the RxStorage. With the attachments-compression plugin you can compress the attachments data on write and decompress it on reads. This happens internally and does not change how you use the API. The compression is run with the Compression Streams API which is only supported on newer browsers. import { wrappedAttachmentsCompressionStorage } from 'rxdb/plugins/attachments-compression'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; // create a wrapped storage with attachment-compression. const storageWithAttachmentsCompression = wrappedAttachmentsCompressionStorage({ storage: getRxStorageIndexedDB() }); const db = await createRxDatabase({ name: 'mydatabase', storage: storageWithAttachmentsCompression }); // set the compression mode at the schema level const mySchema = { version: 0, type: 'object', properties: { // . // . // . 
}, attachments: { compression: 'deflate' // <- Specify the compression mode here. OneOf ['deflate', 'gzip'] } }; /* ... create your collections as usual and store attachments in them. */ ","version":"Next","tagName":"h2"},{"title":"RxCollection","type":0,"sectionRef":"#","url":"/rx-collection.html","content":"","keywords":"","version":"Next"},{"title":"Creating a Collection","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#creating-a-collection","content":" To create one or more collections you need a RxDatabase object which has the .addCollections()-method. Every collection needs a collection name and a valid RxJsonSchema. Other attributes are optional. const myCollections = await myDatabase.addCollections({ // key = collectionName humans: { schema: mySchema, statics: {}, // (optional) ORM-functions for this collection methods: {}, // (optional) ORM-functions for documents attachments: {}, // (optional) ORM-functions for attachments options: {}, // (optional) Custom parameters that might be used in plugins migrationStrategies: {}, // (optional) autoMigrate: true, // (optional) [default=true] cacheReplacementPolicy: function(){}, // (optional) custom cache replacement policy conflictHandler: function(){} // (optional) a custom conflict handler can be used }, // you can create multiple collections at once animals: { // ... } }); ","version":"Next","tagName":"h2"},{"title":"name","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#name","content":" The name uniquely identifies the collection and should be used to refine the collection in the database. Two different collections in the same database can never have the same name. Collection names must match the following regex: ^[a-z][a-z0-9]*$. ","version":"Next","tagName":"h3"},{"title":"schema","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#schema","content":" The schema defines how the documents of the collection are structured. 
RxDB uses a schema format, similar to JSON schema. Read more about the RxDB schema format here. ","version":"Next","tagName":"h3"},{"title":"ORM-functions","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#orm-functions","content":" With the parameters statics, methods and attachments, you can define ORM-functions that are applied to each of these objects that belong to this collection. See ORM/DRM. ","version":"Next","tagName":"h3"},{"title":"Migration","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#migration","content":" With the parameters migrationStrategies and autoMigrate you can specify how migration between different schema-versions should be done. See Migration. ","version":"Next","tagName":"h3"},{"title":"Get a collection from the database","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#get-a-collection-from-the-database","content":" To get an existing collection from the database, call the collection name directly on the database: // newly created collection const collections = await db.addCollections({ heroes: { schema: mySchema } }); const collection2 = db.heroes; console.log(collections.heroes === collection2); //> true ","version":"Next","tagName":"h2"},{"title":"Functions","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#functions","content":" ","version":"Next","tagName":"h2"},{"title":"Observe $","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#observe-","content":" Calling this will return an rxjs-Observable which streams every change to data of this collection. 
myCollection.$.subscribe(changeEvent => console.dir(changeEvent)); // you can also observe single event-types with insert$ update$ remove$ myCollection.insert$.subscribe(changeEvent => console.dir(changeEvent)); myCollection.update$.subscribe(changeEvent => console.dir(changeEvent)); myCollection.remove$.subscribe(changeEvent => console.dir(changeEvent)); ","version":"Next","tagName":"h3"},{"title":"insert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#insert","content":" Use this to insert new documents into the database. The collection will validate the schema and automatically encrypt any encrypted fields. Returns the new RxDocument. const doc = await myCollection.insert({ name: 'foo', lastname: 'bar' }); ","version":"Next","tagName":"h3"},{"title":"bulkInsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#bulkinsert","content":" When you have to insert many documents at once, use bulk insert. This is much faster than calling .insert() multiple times. Returns an object with a success- and error-array. const result = await myCollection.bulkInsert([{ name: 'foo1', lastname: 'bar1' }, { name: 'foo2', lastname: 'bar2' }]); // > { // success: [RxDocument, RxDocument], // error: [] // } note bulkInsert will not fail on update conflicts and you cannot expect that on failure the other documents are not inserted. Also, the call to bulkInsert() will not throw if a single document fails validation. Instead it will return the error in the .error property of the returned object. ","version":"Next","tagName":"h3"},{"title":"bulkRemove()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#bulkremove","content":" When you want to remove many documents at once, use bulk remove. Returns an object with a success- and error-array. 
const result = await myCollection.bulkRemove([ 'primary1', 'primary2' ]); // > { // success: [RxDocument, RxDocument], // error: [] // } ","version":"Next","tagName":"h3"},{"title":"upsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#upsert","content":" Inserts the document if it does not exist within the collection, otherwise it will overwrite it. Returns the new or overwritten RxDocument. const doc = await myCollection.upsert({ name: 'foo', lastname: 'bar2' }); ","version":"Next","tagName":"h3"},{"title":"bulkUpsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#bulkupsert","content":" Same as upsert() but runs over multiple documents. Improves performance compared to running many upsert() calls. Returns an error and a success array. const docs = await myCollection.bulkUpsert([ { name: 'foo', lastname: 'bar2' }, { name: 'bar', lastname: 'foo2' } ]); /** * { * success: [RxDocument, RxDocument] * error: [], * } */ ","version":"Next","tagName":"h3"},{"title":"incrementalUpsert()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#incrementalupsert","content":" When you run many upsert operations on the same RxDocument in a very short timespan, you might get a 409 Conflict error. This means that you tried to run a .upsert() on the document, while the previous upsert operation was still running. To prevent these types of errors, you can run incremental upsert operations. The behavior is similar to RxDocument.incrementalModify. 
const docData = { name: 'Bob', // primary lastName: 'Kelso' }; myCollection.upsert(docData); myCollection.upsert(docData); // -> throws because of parallel update to the same document myCollection.incrementalUpsert(docData); myCollection.incrementalUpsert(docData); myCollection.incrementalUpsert(docData); // wait until last upsert finished await myCollection.incrementalUpsert(docData); // -> works ","version":"Next","tagName":"h3"},{"title":"find()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#find","content":" To find documents in your collection, use this method. See RxQuery.find(). // find all that are older than 18 const olderDocuments = await myCollection .find() .where('age') .gt(18) .exec(); // execute ","version":"Next","tagName":"h3"},{"title":"findOne()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#findone","content":" This does basically what find() does, but it returns only a single document. You can pass a primary value to find a single document more easily. To find documents in your collection, use this method. See RxQuery.find(). // get document with name:foobar myCollection.findOne({ selector: { name: 'foo' } }).exec().then(doc => console.dir(doc)); // get document by primary, functionally identical to above query myCollection.findOne('foo') .exec().then(doc => console.dir(doc)); ","version":"Next","tagName":"h3"},{"title":"findByIds()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#findbyids","content":" Find many documents by their id (primary value). This has a way better performance than running multiple findOne() or a find() with a big $or selector. Returns a Map where the primary key of the document is mapped to the document. Documents that do not exist or are deleted, will not be inside of the returned Map. const ids = [ 'alice', 'bob', /* ... 
*/ ]; const docsMap = await myCollection.findByIds(ids); console.dir(docsMap); // Map(2) note The Map returned by findByIds is not guaranteed to return elements in the same order as the list of ids passed to it. ","version":"Next","tagName":"h3"},{"title":"exportJSON()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#exportjson","content":" Use this function to create a json export from every document in the collection. Before exportJSON() and importJSON() can be used, you have to add the json-dump plugin. import { addRxPlugin } from 'rxdb'; import { RxDBJsonDumpPlugin } from 'rxdb/plugins/json-dump'; addRxPlugin(RxDBJsonDumpPlugin); myCollection.exportJSON() .then(json => console.dir(json)); ","version":"Next","tagName":"h3"},{"title":"importJSON()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#importjson","content":" To import the json dump into your collection, use this function. // import the dump to the database myCollection.importJSON(json) .then(() => console.log('done')); Note that importing will fire events for each inserted document. ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#remove","content":" Removes all known data of the collection and its previous versions. This removes the documents, the schemas, and older schemaVersions. await myCollection.remove(); // collection is now removed and can be re-created ","version":"Next","tagName":"h3"},{"title":"destroy()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#destroy","content":" Destroys the collection's object instance. This is to free up memory and stop all observers and replications. await myCollection.destroy(); ","version":"Next","tagName":"h3"},{"title":"onDestroy / onRemove()","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#ondestroy--onremove","content":" With these you can add a function that is run when the collection was destroyed or removed. 
This works even across multiple browser tabs, so you can detect when another tab removes the collection and your application can behave accordingly. await myCollection.onDestroy(() => console.log('I am destroyed')); await myCollection.onRemove(() => console.log('I am removed')); ","version":"Next","tagName":"h3"},{"title":"isRxCollection","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#isrxcollection","content":" Returns true if the given object is an instance of RxCollection. Returns false if not. const is = isRxCollection(myObj); ","version":"Next","tagName":"h3"},{"title":"FAQ","type":1,"pageTitle":"RxCollection","url":"/rx-collection.html#faq","content":" When I reload the browser window, will my collections still be in the database? No, the JavaScript instances of the collections will not automatically be recreated on page reloads. You have to call the addCollections() method each time you create your database. This will create the JavaScript object instance of the RxCollection so that you can use it in the RxDatabase. The persisted data will automatically be available in your RxCollection each time you create it. ","version":"Next","tagName":"h2"},{"title":"RxDocument","type":0,"sectionRef":"#","url":"/rx-document.html","content":"","keywords":"","version":"Next"},{"title":"insert","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#insert","content":" To insert a document into a collection, you have to call the collection's .insert()-function. myCollection.insert({ name: 'foo', lastname: 'bar' }); ","version":"Next","tagName":"h2"},{"title":"find","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#find","content":" To find documents in a collection, you have to call the collection's .find()-function. See RxQuery. 
myCollection.find().exec() // <- find all documents .then(documents => console.dir(documents)); ","version":"Next","tagName":"h2"},{"title":"Functions","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#functions","content":" ","version":"Next","tagName":"h2"},{"title":"get()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#get","content":" This will get a single field of the document. If the field is encrypted, it will be automatically decrypted before returning. var name = myDocument.get('name'); // returns the name ","version":"Next","tagName":"h3"},{"title":"get$()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#get-1","content":" This function returns an observable of the given paths-value. The current value of this path will be emitted each time the document changes. // get the live-updating value of 'name' var isName; myDocument.get$('name') .subscribe(newName => { isName = newName; }); await myDocument.incrementalPatch({name: 'foobar2'}); console.dir(isName); // isName is now 'foobar2' ","version":"Next","tagName":"h3"},{"title":"proxy-get","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#proxy-get","content":" All properties of a RxDocument are assigned as getters so you can also directly access values instead of using the get()-function. // Identical to myDocument.get('name'); var name = myDocument.name; // Can also get nested values. var nestedValue = myDocument.whatever.nestedfield; // Also usable with observables: myDocument.firstName$.subscribe(newName => console.log('name is: ' + newName)); // > 'name is: Stefe' await myDocument.incrementalPatch({firstName: 'Steve'}); // > 'name is: Steve' ","version":"Next","tagName":"h3"},{"title":"update()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#update","content":" Updates the document based on the mongo-update-syntax, based on the mingo library. /** * If not done before, you have to add the update plugin. 
*/ import { addRxPlugin } from 'rxdb'; import { RxDBUpdatePlugin } from 'rxdb/plugins/update'; addRxPlugin(RxDBUpdatePlugin); await myDocument.update({ $inc: { age: 1 // increases age by 1 }, $set: { firstName: 'foobar' // sets firstName to foobar } }); ","version":"Next","tagName":"h3"},{"title":"modify()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#modify","content":" Updates a document's data based on a function that mutates the current data and returns the new value. const changeFunction = (oldData) => { oldData.age = oldData.age + 1; oldData.name = 'foooobarNew'; return oldData; } await myDocument.modify(changeFunction); console.log(myDocument.name); // 'foooobarNew' ","version":"Next","tagName":"h3"},{"title":"patch()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#patch","content":" Overwrites the given attributes in the document's data. await myDocument.patch({ name: 'Steve', age: undefined // setting an attribute to undefined will remove it }); console.log(myDocument.name); // 'Steve' ","version":"Next","tagName":"h3"},{"title":"Prevent conflicts with the incremental methods","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#prevent-conflicts-with-the-incremental-methods","content":" Making a normal change to the non-latest version of an RxDocument will lead to a 409 CONFLICT error because RxDB uses revision checks instead of transactions. 
To make a change to a document, no matter what the current state is, you can use the incremental methods: // update await myDocument.incrementalUpdate({ $inc: { age: 1 // increases age by 1 } }); // modify await myDocument.incrementalModify(docData => { docData.age = docData.age + 1; return docData; }); // patch await myDocument.incrementalPatch({ age: 100 }); // remove await myDocument.incrementalRemove(); ","version":"Next","tagName":"h3"},{"title":"getLatest()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#getlatest","content":" Returns the latest known state of the RxDocument. const myDocument = await myCollection.findOne('foobar').exec(); const docAfterEdit = await myDocument.incrementalPatch({ age: 10 }); const latestDoc = myDocument.getLatest(); console.log(docAfterEdit === latestDoc); // > true ","version":"Next","tagName":"h3"},{"title":"Observe $","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#observe-","content":" Calling this will return an rxjs-Observable which emits the current newest state of the RxDocument. // get all changeEvents myDocument.$ .subscribe(currentRxDocument => console.dir(currentRxDocument)); ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#remove","content":" This removes the document from the collection. Notice that this will not purge the document from the store but set _deleted:true so that it will no longer be returned by queries. To fully purge a document, use the cleanup plugin. myDocument.remove(); ","version":"Next","tagName":"h3"},{"title":"deleted$","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#deleted","content":" Emits a boolean value, depending on whether the RxDocument is deleted or not. 
let lastState = null; myDocument.deleted$.subscribe(state => lastState = state); console.log(lastState); // false await myDocument.remove(); console.log(lastState); // true ","version":"Next","tagName":"h3"},{"title":"get deleted","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#get-deleted","content":" A getter to get the current value of deleted$. console.log(myDocument.deleted); // false await myDocument.remove(); console.log(myDocument.deleted); // true ","version":"Next","tagName":"h3"},{"title":"toJSON()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#tojson","content":" Returns the document's data as a plain JSON object. This will return an immutable object. To get something that can be modified, use toMutableJSON() instead. const json = myDocument.toJSON(); console.dir(json); /* { passportId: 'h1rg9ugdd30o', firstName: 'Carolina', lastName: 'Gibson', age: 33 ... } */ You can also set withMetaFields: true to get additional meta fields like the revision, attachments or the deleted flag. const json = myDocument.toJSON(true); console.dir(json); /* { passportId: 'h1rg9ugdd30o', firstName: 'Carolina', lastName: 'Gibson', _deleted: false, _attachments: { ... }, _rev: '1-aklsdjfhaklsdjhf...' } */ ","version":"Next","tagName":"h3"},{"title":"toMutableJSON()","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#tomutablejson","content":" Same as toJSON() but returns a deep cloned object that can be mutated afterwards. Remember that deep cloning is expensive and should only be done when necessary. const json = myDocument.toMutableJSON(); json.firstName = 'Alice'; // The returned document can be mutated All methods of RxDocument are bound to the instance When you get a method from an RxDocument, the method is automatically bound to the document's instance. This means you do not have to use things like myMethod.bind(myDocument) like you would do in jsx. 
","version":"Next","tagName":"h3"},{"title":"isRxDocument","type":1,"pageTitle":"RxDocument","url":"/rx-document.html#isrxdocument","content":" Returns true if the given object is an instance of RxDocument. Returns false if not. const is = isRxDocument(myObj); ","version":"Next","tagName":"h3"},{"title":"RxDB Database Replication Protocol","type":0,"sectionRef":"#","url":"/replication.html","content":"","keywords":"","version":"Next"},{"title":"Replication protocol on the document level","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#replication-protocol-on-the-document-level","content":" On the RxDocument level, the replication works like git, where the fork/client contains all new writes and must be merged with the master/server before it can push its new state to the master/server. A---B-----------D master/server state \ / B---C---D fork/client state The client pulls the latest state B from the master.The client does some changes C+D.The client pushes these changes to the master by sending the latest known master state B and the new client state D of the document.If the master state is equal to the client's latest known master state B, the new client state D is set as the latest master state.If the master also had changes and so the latest master change is different from the one that the client assumes, we have a conflict that has to be resolved on the client. ","version":"Next","tagName":"h2"},{"title":"Replication protocol on the transfer level","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#replication-protocol-on-the-transfer-level","content":" When document states are transferred, all handlers use batches of documents for better performance. The server must implement the following methods to be compatible with the replication: pullHandler Get the last checkpoint (or null) as input. Returns all documents that have been written after the given checkpoint. 
Also returns the checkpoint of the latest written returned document.pushHandler a method that can be called by the client to send client-side writes to the master. It gets an array with the assumedMasterState and the newForkState of each document write as input. It must return an array that contains the master document states of all conflicts. If there are no conflicts, it must return an empty array.pullStream an observable that emits batches of all master writes and the latest checkpoint of the write batches. +--------+ +--------+ | | pullHandler() | | | |---------------------> | | | | | | | | | | | Client | pushHandler() | Server | | |---------------------> | | | | | | | | pullStream$ | | | | <-------------------------| | +--------+ +--------+ The replication runs in two different modes: ","version":"Next","tagName":"h2"},{"title":"Checkpoint iteration","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#checkpoint-iteration","content":" On first initial replication, or when the client comes online again, a checkpoint-based iteration is used to catch up with the server state. A checkpoint is a subset of the fields of the last pulled document. When the checkpoint is sent to the backend via pullHandler(), the backend must be able to respond with all documents that have been written after the given checkpoint. For example, if your documents contain an id and an updatedAt field, these two can be used as a checkpoint. When the checkpoint iteration reaches the last checkpoint, where the backend returns an empty array because there are no newer documents, the replication will automatically switch to the event observation mode. ","version":"Next","tagName":"h3"},{"title":"Event observation","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#event-observation","content":" While the client is connected to the backend, the events from the backend are observed via pullStream$ and persisted to the client. 
If your backend for any reason is not able to provide a full pullStream$ that contains all events and the checkpoint, you can instead only emit RESYNC events that tell RxDB that anything unknown has changed on the server and it should run the pull replication via checkpoint iteration. When the client goes offline and online again, it might happen that the pullStream$ has missed some events. Therefore the pullStream$ should also emit a RESYNC event each time the client reconnects, so that the client can get in sync with the backend via the checkpoint iteration mode. ","version":"Next","tagName":"h3"},{"title":"Data layout on the server","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#data-layout-on-the-server","content":" To use the replication you first have to ensure that: documents are deterministically sortable by their last write time deterministic means that even if two documents have the same last write time, they have a predictable sort order. This is most often ensured by using the primaryKey as a second sort parameter as part of the checkpoint. documents are never deleted; instead, the _deleted field is set to true. This is needed so that the deletion state of a document exists in the database and can be replicated with other instances. If your backend uses a different field to mark deleted documents, you have to transform the data in the push/pull handlers or with the modifiers. For example, if your documents look like this: const docData = { "id": "foobar", "name": "Alice", "lastName": "Wilson", /** * Contains the last write timestamp * so all document writes can be sorted by that value * when they are fetched from the remote instance. */ "updatedAt": 1564483474, /** * Instead of physically deleting documents, * a deleted document gets replicated. */ "_deleted": false } Then your data is always sortable by updatedAt. 
This ensures that when RxDB fetches 'new' changes via pullHandler(), it can send the latest updatedAt+id checkpoint to the remote endpoint and then receive all newer documents. By default, the field is _deleted. If your remote endpoint uses a different field to mark deleted documents, you can set the deletedField in the replication options which will automatically map the field on all pull and push requests. ","version":"Next","tagName":"h2"},{"title":"Conflict handling","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#conflict-handling","content":" When multiple clients (or the server) modify the same document at the same time (or when they are offline), it can happen that a conflict arises during the replication. A---B1---C1---X master/server state \ / B1---C2 fork/client state In the case above, the client would tell the master to move the document state from B1 to C2 by calling pushHandler(). But because the actual master state is C1 and not B1, the master would reject the write by sending back the actual master state C1.RxDB resolves all conflicts on the client so it would call the conflict handler of the RxCollection and create a new document state D that can then be written to the master. A---B1---C1---X---D master/server state \ / \ / B1---C2---D fork/client state The default conflict handler will always drop the fork state and use the master state. This ensures that clients that are offline for a very long time do not accidentally overwrite other people's changes when they go online again. You can specify a custom conflict handler by setting the property conflictHandler when calling addCollection(). Learn how to create a custom conflict handler. 
","version":"Next","tagName":"h2"},{"title":"replicateRxCollection()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#replicaterxcollection","content":" You can start the replication of a single RxCollection by calling replicateRxCollection() like in the following: import { replicateRxCollection } from 'rxdb/plugins/replication'; import { lastOfArray } from 'rxdb'; const replicationState = await replicateRxCollection({ collection: myRxCollection, /** * An id for the replication to identify it * and so that RxDB is able to resume the replication on app reload. * If you replicate with a remote server, it is recommended to put the * server url into the replicationIdentifier. */ replicationIdentifier: 'my-rest-replication-to-https://example.com/api/sync', /** * By default it will do an ongoing realtime replication. * By setting live: false the replication will run once until the local state * is in sync with the remote state, then it will cancel itself. * (optional), default is true. */ live: true, /** * Time in milliseconds after which a failed backend request * will be retried. * This time will be skipped if an offline->online switch is detected * via navigator.onLine * (optional), default is 5 seconds. */ retryTime: 5 * 1000, /** * When multiInstance is true, like when you use RxDB in multiple browser tabs, * the replication should always run in only one of the open browser tabs. * If waitForLeadership is true, it will wait until the current instance is leader. * If waitForLeadership is false, it will start replicating, even if it is not leader. * [default=true] */ waitForLeadership: true, /** * If this is set to false, * the replication will not start automatically * but will wait until replicationState.start() is called. * (optional), default is true */ autoStart: true, /** * Custom deleted field, the boolean property of the document data that * marks a document as being deleted. 
* If your backend uses a different fieldname than '_deleted', set the fieldname here. * RxDB will still store the documents internally with '_deleted', setting this field * only maps the data on the data layer. * * If a custom deleted field contains a non-boolean value, the deleted state * of the documents depends on whether the value is truthy or not. So instead of providing a boolean * * deleted value, you could also work with a 'deletedAt' timestamp instead. * * [default='_deleted'] */ deletedField: 'deleted', /** * Optional, * only needed when you want to replicate local changes to the remote instance. */ push: { /** * Push handler */ async handler(docs) { /** * Push the local documents to a remote REST server. */ const rawResponse = await fetch('https://example.com/api/sync/push', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify({ docs }) }); /** * Contains an array with all conflicts that appeared during this push. * If there were no conflicts, return an empty array. */ const response = await rawResponse.json(); return response; }, /** * Batch size, optional * Defines how many documents will be given to the push handler at once. */ batchSize: 5, /** * Modifies all documents before they are given to the push handler. * Can be used to swap out a custom deleted flag instead of the '_deleted' field. * If the push modifier returns null, the document will be skipped and not sent to the remote. * Notice that the modifier can be called multiple times and should not contain any side effects. * (optional) */ modifier: d => d }, /** * Optional, * only needed when you want to replicate remote changes to the local state. */ pull: { /** * Pull handler */ async handler(lastCheckpoint, batchSize) { const minTimestamp = lastCheckpoint ? 
lastCheckpoint.updatedAt : 0; /** * In this example we replicate with a remote REST server */ const response = await fetch( `https://example.com/api/sync/?minUpdatedAt=${minTimestamp}&limit=${batchSize}` ); const documentsFromRemote = await response.json(); return { /** * Contains the pulled documents from the remote. * Note that if documentsFromRemote.length < batchSize, * then RxDB assumes that there are no more un-replicated documents * on the backend, so the replication will switch to 'Event observation' mode. */ documents: documentsFromRemote, /** * The last checkpoint of the returned documents. * On the next call to the pull handler, * this checkpoint will be passed as 'lastCheckpoint' */ checkpoint: documentsFromRemote.length === 0 ? lastCheckpoint : { id: lastOfArray(documentsFromRemote).id, updatedAt: lastOfArray(documentsFromRemote).updatedAt } }; }, batchSize: 10, /** * Modifies all documents after they have been pulled * but before they are used by RxDB. * Notice that the modifier can be called multiple times and should not contain any side effects. * (optional) */ modifier: d => d, /** * Stream of the backend document writes. * See below. * You only need a stream$ when you have set live=true */ stream$: pullStream$.asObservable() }, }); /** * Creating the pull stream for realtime replication. * Here we use a websocket but any other way of sending data to the client can be used, * like long polling or server-sent events. */ const pullStream$ = new Subject<RxReplicationPullStreamItem<any, any>>(); let firstOpen = true; function connectSocket() { const socket = new WebSocket('wss://example.com/api/sync/stream'); /** * When the backend sends a new batch of documents+checkpoint, * emit it into the stream$. 
* * event.data must look like this * { * documents: [ * { * id: 'foobar', * _deleted: false, * updatedAt: 1234 * } * ], * checkpoint: { * id: 'foobar', * updatedAt: 1234 * } * } */ socket.onmessage = event => pullStream$.next(JSON.parse(event.data)); /** * Automatically reconnect the socket on close and error. */ socket.onclose = () => connectSocket(); socket.onerror = () => socket.close(); socket.onopen = () => { if(firstOpen) { firstOpen = false; } else { /** * When the client is offline and goes online again, * it might have missed events that happened on the server. * So we have to emit a RESYNC so that the replication goes * into 'Checkpoint iteration' mode until the client is in sync * and then it will go back into 'Event observation' mode again. */ pullStream$.next('RESYNC'); } } } ","version":"Next","tagName":"h2"},{"title":"Multi Tab support","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#multi-tab-support","content":" For better performance, the replication runs only in one instance when RxDB is used in multiple browser tabs or Node.js processes. By setting waitForLeadership: false you can enforce that each tab runs its own replication cycles. If used in a multi-instance setting (when multiInstance: false was not set at database creation), you need to import the leader election plugin so that RxDB knows how many instances exist and which browser tab should run the replication. ","version":"Next","tagName":"h2"},{"title":"Error handling","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#error-handling","content":" When sending a document to the remote fails for any reason, RxDB will send it again at a later point in time. This happens for all errors. The document write could have already reached the remote instance and be processed, while only the response fails. The remote instance must be designed to handle this properly and to not crash on duplicate data transmissions. 
Depending on your use case, it might be ok to just write the duplicate document data again. But for more resilient error handling you could compare the last write timestamps or add a unique write id field to the document. This field can then be used to detect duplicates and ignore re-sent data. The replication also has an .error$ stream that emits all RxError objects that arise during replication. Notice that these errors contain an inner .parameters.errors field that contains the original error. They also contain a .parameters.direction field that indicates if the error was thrown during pull or push. You can use these to properly handle errors. For example, when the client is outdated, the server might respond with a 426 Upgrade Required error code that can then be used to force a page reload. replicationState.error$.subscribe((error) => { if( error.parameters.errors && error.parameters.errors[0] && error.parameters.errors[0].code === 426 ) { // client is outdated -> enforce a page reload location.reload(); } }); ","version":"Next","tagName":"h2"},{"title":"Security","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#security","content":" Be aware that client side clocks can never be trusted. When you have a client-backend replication, the backend should overwrite the updatedAt timestamp or use another field, when it receives the change from the client. ","version":"Next","tagName":"h2"},{"title":"RxReplicationState","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#rxreplicationstate","content":" The function replicateRxCollection() returns a RxReplicationState that can be used to manage and observe the replication. 
","version":"Next","tagName":"h2"},{"title":"Observable","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#observable","content":" To observe the replication, the RxReplicationState has some Observable properties: // emits each document that was received from the remote myRxReplicationState.received$.subscribe(doc => console.dir(doc)); // emits each document that was sent to the remote myRxReplicationState.sent$.subscribe(doc => console.dir(doc)); // emits all errors that happen when running the push- & pull-handlers. myRxReplicationState.error$.subscribe(error => console.dir(error)); // emits true when the replication was canceled, false when not. myRxReplicationState.canceled$.subscribe(bool => console.dir(bool)); // emits true when a replication cycle is running, false when not. myRxReplicationState.active$.subscribe(bool => console.dir(bool)); ","version":"Next","tagName":"h3"},{"title":"awaitInitialReplication()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#awaitinitialreplication","content":" With awaitInitialReplication() you can await the initial replication that is done when a full replication cycle was successfully finished for the first time. The returned promise will never resolve if you cancel the replication before the initial replication can be done. await myRxReplicationState.awaitInitialReplication(); ","version":"Next","tagName":"h3"},{"title":"awaitInSync()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#awaitinsync","content":" Returns a Promise that resolves when: awaitInitialReplication() has emitted.All local data is replicated with the remote.No replication cycle is running or in retry-state. warning When multiInstance: true and waitForLeadership: true and another tab is already running the replication, awaitInSync() will not resolve until the other tab is closed and the replication starts in this tab. 
await myRxReplicationState.awaitInSync(); warning awaitInitialReplication() and awaitInSync() should not be used to block the application A common mistake in RxDB usage is when developers want to block the app usage until the application is in sync. Often they just await the promise of awaitInitialReplication() or awaitInSync() and show a loading spinner until they resolve. This is dangerous and should not be done because: When multiInstance: true and waitForLeadership: true (default) and another tab is already running the replication, awaitInitialReplication() will not resolve until the other tab is closed and the replication starts in this tab.Your app can no longer be started when the device is offline, because awaitInitialReplication() will never resolve and the app cannot be used. Instead you should store the last in-sync time in a local document and observe its value on all instances. For example, if you want to block clients from using the app if they have not been in sync for the last 24 hours, you could use this code: // update last-in-sync-flag each time replication is in sync await myCollection.insertLocal('last-in-sync', { time: 0 }).catch(); // ensure flag exists myReplicationState.active$.pipe( mergeMap(async() => { await myReplicationState.awaitInSync(); await myCollection.upsertLocal('last-in-sync', { time: Date.now() }) }) ).subscribe(); // observe the flag and toggle loading spinner await showLoadingSpinner(); const oneDay = 1000 * 60 * 60 * 24; await firstValueFrom( myCollection.getLocal$('last-in-sync').pipe( filter(d => d.get('time') > (Date.now() - oneDay)) ) ); await hideLoadingSpinner(); 
Used in unit tests or when no proper pull.stream$ can be implemented so that the client only knows that something has been changed but not what. myRxReplicationState.reSync(); If your backend is not capable of sending events to the client at all, you could run reSync() in an interval so that the client will automatically fetch server changes after some time at least. // trigger RESYNC every 10 seconds. setInterval(() => myRxReplicationState.reSync(), 10 * 1000); ","version":"Next","tagName":"h3"},{"title":"cancel()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#cancel","content":" Cancels the replication. Returns a promise that resolves when everything has been cleaned up. await myRxReplicationState.cancel() ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#remove","content":" Cancels the replication and deletes the metadata of the replication state. This can be used to restart the replication "from scratch". Calling .remove() will only delete the replication metadata, it will NOT delete the documents from the collection of the replication. await myRxReplicationState.remove() ","version":"Next","tagName":"h3"},{"title":"isStopped()","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#isstopped","content":" Returns true if the replication is stopped. This is the case when a non-live replication has finished or a replication was canceled. replicationState.isStopped(); // true/false ","version":"Next","tagName":"h3"},{"title":"Setting a custom initialCheckpoint (beta)","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#setting-a-custom-initialcheckpoint-beta","content":" By default, the push replication will start from the beginning of time and push all documents from there to the remote. 
By setting a custom push.initialCheckpoint, you can tell the replication to only push writes that are newer than the given checkpoint. // store the latest checkpoint of a collection let lastLocalCheckpoint: any; myCollection.checkpoint$.subscribe(checkpoint => lastLocalCheckpoint = checkpoint); // start the replication but only push documents that are newer than the lastLocalCheckpoint const replicationState = replicateRxCollection({ collection: myCollection, replicationIdentifier: 'my-custom-replication-with-init-checkpoint', /* ... */ push: { handler: /* ... */, initialCheckpoint: lastLocalCheckpoint } }); The same can be done for the other direction by setting a pull.initialCheckpoint. Notice that here we need the remote checkpoint from the backend instead of the one from the RxDB storage. // get the last pull checkpoint from the server const lastRemoteCheckpoint = await (await fetch('http://example.com/pull-checkpoint')).json(); // start the replication but only pull documents that are newer than the lastRemoteCheckpoint const replicationState = replicateRxCollection({ collection: myCollection, replicationIdentifier: 'my-custom-replication-with-init-checkpoint', /* ... */ pull: { handler: /* ... */, initialCheckpoint: lastRemoteCheckpoint } }); ","version":"Next","tagName":"h3"},{"title":"Attachment replication (beta)","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#attachment-replication-beta","content":" Attachment replication is supported in the RxDB replication protocol itself. However not all replication plugins support it. If you start the replication with a collection which has enabled RxAttachments attachments data will be added to all push- and write data. 
The pushed documents will contain an _attachments object which contains: the attachment meta data (id, length, digest) of all unchanged attachments, and the full attachment data of all attachments that have been updated/added from the client. Deleted attachments are left out of the pushed document. With this data, the backend can decide which attachments must be deleted, added or overwritten. Accordingly, the pulled document must contain the same data if the backend has a new document state with updated attachments. ","version":"Next","tagName":"h3"},{"title":"FAQ","type":1,"pageTitle":"RxDB Database Replication Protocol","url":"/replication.html#faq","content":" I have infinite loops in my replication, how to debug? When you have infinite loops in your replication or random re-runs of http requests after some time, the reason is likely that your pull-handler is crashing. To debug this, add a log to the error$ observable. myRxReplicationState.error$.subscribe(err => console.log('error$', err)). ","version":"Next","tagName":"h2"},{"title":"RxDatabase","type":0,"sectionRef":"#","url":"/rx-database.html","content":"","keywords":"","version":"Next"},{"title":"Creation","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#creation","content":" The database is created by the asynchronous .createRxDatabase() function of the core RxDB module.
It has the following parameters: import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'heroesdb', // <- name storage: getRxStorageDexie(), // <- RxStorage /* Optional parameters: */ password: 'myPassword', // <- password (optional) multiInstance: true, // <- multiInstance (optional, default: true) eventReduce: true, // <- eventReduce (optional, default: false) cleanupPolicy: {} // <- custom cleanup policy (optional) }); ","version":"Next","tagName":"h2"},{"title":"name","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#name","content":" The database-name is a string which uniquely identifies the database. When two RxDatabases have the same name and use the same RxStorage, their data can be assumed to be equal and they will share events between each other. Depending on the storage or adapter this can also be used to define the filesystem folder of your data. ","version":"Next","tagName":"h3"},{"title":"storage","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#storage","content":" RxDB works on top of an implementation of the RxStorage interface. This interface is an abstraction that allows you to use different underlying databases that actually handle the documents. Depending on your use case you might use a different storage with different tradeoffs in performance, bundle size or supported runtimes. There are many RxStorage implementations that can be used depending on the JavaScript environment and performance requirements. For example you can use the Dexie RxStorage in the browser or use the LokiJS storage with the filesystem adapter in Node.js. List of RxStorage implementations // use the Dexie.js RxStorage that stores data in IndexedDB. import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const dbDexie = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageDexie() }); // ...or use the LokiJS RxStorage with the indexeddb adapter.
import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; const LokiIncrementalIndexedDBAdapter = require('lokijs/src/incremental-indexeddb-adapter'); const dbLoki = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageLoki({ adapter: new LokiIncrementalIndexedDBAdapter() }) }); ","version":"Next","tagName":"h3"},{"title":"password","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#password","content":" (optional)If you want to use encrypted fields in the collections of a database, you have to set a password for it. The password must be a string with at least 12 characters. Read more about encryption here. ","version":"Next","tagName":"h3"},{"title":"multiInstance","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#multiinstance","content":" (optional=true)When you create more than one instance of the same database in a single javascript-runtime, you should set multiInstance to true. This will enable the event sharing between the two instances. For example when the user has opened multiple browser windows, events will be shared between them so that both windows react to the same changes.multiInstance should be set to false when you have single-instances like a single Node.js-process, a react-native-app, a cordova-app or a single-window electron app. This can decrease the startup time because no instance coordination has to be done. ","version":"Next","tagName":"h3"},{"title":"eventReduce","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#eventreduce","content":" (optional=false) One big benefit of having a realtime database is that big performance optimizations can be done when the database knows a query is observed and the updated results are needed continuously. RxDB uses the EventReduce Algorithm to optimize observer or recurring queries. For better performance, you should always set eventReduce: true. This will also be the default in the next major RxDB version.
","version":"Next","tagName":"h3"},{"title":"ignoreDuplicate","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#ignoreduplicate","content":" (optional=false)If you create multiple RxDatabase-instances with the same name and same adapter, it's very likely that you have done something wrong. To prevent this common mistake, RxDB will throw an error when you do this. In some rare cases like unit-tests, you want to do this intentionally by setting ignoreDuplicate to true. const db1 = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), ignoreDuplicate: true }); const db2 = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), ignoreDuplicate: true // this create-call will not throw because you explicitly allow it }); ","version":"Next","tagName":"h3"},{"title":"Methods","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#methods","content":" ","version":"Next","tagName":"h2"},{"title":"Observe with $","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#observe-with-","content":" Calling this will return an rxjs-Observable which streams all write events of the RxDatabase. myDb.$.subscribe(changeEvent => console.dir(changeEvent)); ","version":"Next","tagName":"h3"},{"title":"exportJSON()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#exportjson","content":" Use this function to create a json-export from every piece of data in every collection of this database. You can pass true as a parameter to decrypt the encrypted data-fields of your document. Before exportJSON() and importJSON() can be used, you have to add the json-dump plugin.
import { addRxPlugin } from 'rxdb'; import { RxDBJsonDumpPlugin } from 'rxdb/plugins/json-dump'; addRxPlugin(RxDBJsonDumpPlugin); myDatabase.exportJSON() .then(json => console.dir(json)); ","version":"Next","tagName":"h3"},{"title":"importJSON()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#importjson","content":" To import the json-dumps into your database, use this function. // import the dump to the database emptyDatabase.importJSON(json) .then(() => console.log('done')); ","version":"Next","tagName":"h3"},{"title":"backup()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#backup","content":" Writes the current (or ongoing) database state to the filesystem. Read more ","version":"Next","tagName":"h3"},{"title":"waitForLeadership()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#waitforleadership","content":" Returns a Promise which resolves when the RxDatabase becomes the elected leader. ","version":"Next","tagName":"h3"},{"title":"requestIdlePromise()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#requestidlepromise","content":" Returns a promise which resolves when the database is idle. This works similarly to requestIdleCallback but tracks the idle-ness of the database instead of the CPU. Use this for semi-important tasks like cleanups which should not affect the speed of important tasks. myDatabase.requestIdlePromise().then(() => { // this will run at the moment the database has nothing else to do myCollection.customCleanupFunction(); }); // with timeout myDatabase.requestIdlePromise(1000 /* time in ms */).then(() => { // this will run at the moment the database has nothing else to do // or the timeout has passed myCollection.customCleanupFunction(); }); ","version":"Next","tagName":"h3"},{"title":"destroy()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#destroy","content":" Destroys the database's object instance. This is to free up memory and stop all observers and replications.
Returns a Promise that resolves when the database is destroyed. await myDatabase.destroy(); ","version":"Next","tagName":"h3"},{"title":"remove()","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#remove","content":" Wipes all documents from the storage. Use this to free up disk space. await myDatabase.remove(); // database instance is now gone // You can also clear a database without removing its instance import { removeRxDatabase } from 'rxdb'; removeRxDatabase('mydatabasename', 'localstorage'); ","version":"Next","tagName":"h3"},{"title":"isRxDatabase","type":1,"pageTitle":"RxDatabase","url":"/rx-database.html#isrxdatabase","content":" Returns true if the given object is an instance of RxDatabase. Returns false if not. import { isRxDatabase } from 'rxdb'; const is = isRxDatabase(myObj); ","version":"Next","tagName":"h3"},{"title":"Local Documents","type":0,"sectionRef":"#","url":"/rx-local-document.html","content":"","keywords":"","version":"Next"},{"title":"Add the local documents plugin","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#add-the-local-documents-plugin","content":" To enable the local documents, you have to add the local-documents plugin. import { addRxPlugin } from 'rxdb'; import { RxDBLocalDocumentsPlugin } from 'rxdb/plugins/local-documents'; addRxPlugin(RxDBLocalDocumentsPlugin); ","version":"Next","tagName":"h2"},{"title":"Activate the plugin for a RxDatabase or RxCollection","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#activate-the-plugin-for-a-rxdatabase-or-rxcollection","content":" For better performance, the local document plugin does not create a storage for every database or collection that is created. Instead, you have to set localDocuments: true when you want to store local documents in the instance.
// activate local documents on a RxDatabase const myDatabase = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageDexie(), localDocuments: true // <- activate this to store local documents in the database }); await myDatabase.addCollections({ messages: { schema: messageSchema, localDocuments: true // <- activate this to store local documents in the collection } }); note If you want to store local documents in a RxCollection but NOT in the RxDatabase, you MUST NOT set localDocuments: true in the RxDatabase because it will only slow down the initial database creation. ","version":"Next","tagName":"h2"},{"title":"insertLocal()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#insertlocal","content":" Creates a local document for the database or collection. Throws if a local document with the same id already exists. Returns a Promise which resolves the new RxLocalDocument. const localDoc = await myCollection.insertLocal( 'foobar', // id { // data foo: 'bar' } ); // you can also use local-documents on a database const localDoc = await myDatabase.insertLocal( 'foobar', // id { // data foo: 'bar' } ); ","version":"Next","tagName":"h2"},{"title":"upsertLocal()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#upsertlocal","content":" Creates a local document for the database or collection if it does not exist, and overwrites the existing document if it exists. Returns a Promise which resolves the RxLocalDocument. const localDoc = await myCollection.upsertLocal( 'foobar', // id { // data foo: 'bar' } ); ","version":"Next","tagName":"h2"},{"title":"getLocal()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#getlocal","content":" Find a RxLocalDocument by its id. Returns a Promise which resolves the RxLocalDocument or null if it does not exist.
const localDoc = await myCollection.getLocal('foobar'); ","version":"Next","tagName":"h2"},{"title":"getLocal$()","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#getlocal-1","content":" Like getLocal() but returns an Observable that emits the document or null if it does not exist. const subscription = myCollection.getLocal$('foobar').subscribe(documentOrNull => { console.dir(documentOrNull); // > RxLocalDocument or null }); ","version":"Next","tagName":"h2"},{"title":"RxLocalDocument","type":1,"pageTitle":"Local Documents","url":"/rx-local-document.html#rxlocaldocument","content":" A RxLocalDocument behaves like a normal RxDocument. const localDoc = await myCollection.getLocal('foobar'); // access data const foo = localDoc.get('foo'); // change data localDoc.set('foo', 'bar2'); await localDoc.save(); // observe data localDoc.get$('foo').subscribe(value => { /* .. */ }); // remove it await localDoc.remove(); note Because the local document does not have a schema, accessing the document's data-fields via pseudo-proxy will not work. const foo = localDoc.foo; // undefined const foo = localDoc.get('foo'); // works! localDoc.foo = 'bar'; // does not work! localDoc.set('foo', 'bar'); // works When using TypeScript, you can access the typed data of the document via toJSON() declare type MyLocalDocumentType = { foo: string } const localDoc = await myCollection.upsertLocal<MyLocalDocumentType>( 'foobar', // id { // data foo: 'bar' } ); // typescript will know that foo is a string const foo: string = localDoc.toJSON().foo; ","version":"Next","tagName":"h2"},{"title":"RxSchema","type":0,"sectionRef":"#","url":"/rx-schema.html","content":"","keywords":"","version":"Next"},{"title":"Example","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#example","content":" In this example-schema we define a hero-collection with the following settings: the version-number of the schema is 0. The name-property is the primaryKey.
This means it is a unique, indexed, required string which can be used to definitely find a single document. The color-field is required for every document. The healthpoints-field must be a number between 0 and 100. The secret-field stores an encrypted value. The birthyear-field is final, which means it is required and cannot be changed. The skills-attribute must be an array with objects which contain the name- and the damage-attribute; there is a maximum of 5 skills per hero. The schema also allows adding attachments and stores them encrypted. { "title": "hero schema", "version": 0, "description": "describes a simple hero", "primaryKey": "name", "type": "object", "properties": { "name": { "type": "string", "maxLength": 100 // <- the primary key must have set maxLength }, "color": { "type": "string" }, "healthpoints": { "type": "number", "minimum": 0, "maximum": 100 }, "secret": { "type": "string" }, "birthyear": { "type": "number", "final": true, "minimum": 1900, "maximum": 2050 }, "skills": { "type": "array", "maxItems": 5, "uniqueItems": true, "items": { "type": "object", "properties": { "name": { "type": "string" }, "damage": { "type": "number" } } } } }, "required": [ "name", "color" ], "encrypted": ["secret"], "attachments": { "encrypted": true } } ","version":"Next","tagName":"h2"},{"title":"Create a collection with the schema","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#create-a-collection-with-the-schema","content":" await myDatabase.addCollections({ heroes: { schema: myHeroSchema } }); console.dir(myDatabase.heroes.name); // heroes ","version":"Next","tagName":"h2"},{"title":"version","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#version","content":" The version field is a number, starting with 0. When the version is greater than 0, you have to provide the migrationStrategies to create a collection with this schema.
","version":"Next","tagName":"h2"},{"title":"primaryKey","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#primarykey","content":" The primaryKey field contains the fieldname of the property that will be used as primary key for the whole collection. The value of the primary key of the document must be a string, unique, final and required. ","version":"Next","tagName":"h2"},{"title":"composite primary key","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#composite-primary-key","content":" You can define a composite primary key which gets composed from multiple properties of the document data. const mySchema = { keyCompression: true, // set this to true, to enable the keyCompression version: 0, title: 'human schema with composite primary', primaryKey: { // where should the composed string be stored key: 'id', // fields that will be used to create the composed key fields: [ 'firstName', 'lastName' ], // separator which is used to concat the fields values. separator: '|' }, type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' } }, required: [ 'id', 'firstName', 'lastName' ] }; You can then find a document by using the relevant parts to create the composite primaryKey: // inserting with composite primary await myRxCollection.insert({ // id, <- do not set the id, it will be filled by RxDB firstName: 'foo', lastName: 'bar' }); // find by composite primary const id = myRxCollection.schema.getPrimaryOfDocumentData({ firstName: 'foo', lastName: 'bar' }); const myRxDocument = await myRxCollection.findOne(id).exec(); ","version":"Next","tagName":"h3"},{"title":"Indexes","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#indexes","content":" RxDB supports secondary indexes which are defined at the schema-level of the collection. Indexes are only allowed on fields of the types string, integer and number. Some RxStorages allow boolean fields to be used as an index.
Depending on the field type, you must have set some meta attributes like maxLength or minimum. This is required so that RxDB is able to know the maximum string representation length of a field, which is needed to craft custom indexes on several RxStorage implementations. note RxDB will always append the primaryKey to all indexes to ensure a deterministic sort order of query results. You do not have to add the primaryKey to any index. ","version":"Next","tagName":"h2"},{"title":"Index-example","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#index-example","content":" const schemaWithIndexes = { version: 0, title: 'human schema with indexes', keyCompression: true, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string', maxLength: 100 // <- string-fields that are used as an index, must have set maxLength. }, lastName: { type: 'string' }, active: { type: 'boolean' }, familyName: { type: 'string' }, balance: { type: 'number', // number fields that are used in an index, must have set minimum, maximum and multipleOf minimum: 0, maximum: 100000, multipleOf: 0.01 }, creditCards: { type: 'array', items: { type: 'object', properties: { cvc: { type: 'number' } } } } }, required: [ 'id', 'active' // <- boolean fields that are used in an index, must be required. ], indexes: [ 'firstName', // <- this will create a simple index for the `firstName` field ['active', 'firstName'], // <- this will create a compound-index for these two fields 'active' ] }; internalIndexes When you use RxDB on the server-side, you might want to use internalIndexes to speed up internal queries. Read more ","version":"Next","tagName":"h3"},{"title":"attachments","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#attachments","content":" To use attachments in the collection, you have to add the attachments-attribute to the schema. See RxAttachment. 
","version":"Next","tagName":"h2"},{"title":"default","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#default","content":" Default values can only be defined for first-level fields. Whenever you insert a document, unset fields will be filled with default values. const schemaWithDefaultAge = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer', default: 20 // <- default will be used } }, required: ['id'] }; ","version":"Next","tagName":"h2"},{"title":"final","type":1,"pageTitle":"RxSchema","url":"/rx-schema.html#final","content":" By setting a field to final, you make sure it cannot be modified later. Final fields are always required. Final fields cannot be observed because they will not change. Advantages: With final fields you can ensure that no-one accidentally modifies the data. When you enable the eventReduce algorithm, some performance improvements are done. const schemaWithFinalAge = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer', final: true } }, required: ['id'] }; Not everything within the jsonschema-spec is allowed The schema is not only used to validate objects before they are written into the database, but also used to map getters to observe and populate single fieldnames, keycompression and other things. Therefore you cannot use every schema which would be valid for the spec of json-schema.org. For example, fieldnames must match the regex ^[a-zA-Z][[a-zA-Z0-9_]*]?[a-zA-Z0-9]$ and additionalProperties is always set to false. But don't worry, RxDB will instantly throw an error when you pass an invalid schema into it.
","version":"Next","tagName":"h2"},{"title":"Scaling the RxServer","type":0,"sectionRef":"#","url":"/rx-server-scaling.html","content":"","keywords":"","version":"Next"},{"title":"Vertical Scaling","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#vertical-scaling","content":" Vertical Scaling aka "scaling up" has the goal of getting more power out of a single server by utilizing more of the server's compute. Vertical scaling should be the first step when you decide it is time to scale. ","version":"Next","tagName":"h2"},{"title":"Run multiple JavaScript processes","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#run-multiple-javascript-processes","content":" To utilize more compute power of your server, the first step is to scale vertically by running the RxDB server on multiple processes in parallel. RxDB itself is already built to support multiInstance-usage on the client, like when the user has opened multiple browser tabs at once. The same method also works on the server side in Node.js. You can spawn multiple JavaScript processes that use the same RxDatabase and the instances will automatically communicate with each other and distribute their data and events with the BroadcastChannel. By default the multiInstance param is set to true when calling createRxDatabase(), so you do not have to change anything. To make all processes accessible through the same endpoint, you can put a load-balancer like nginx in front of them. ","version":"Next","tagName":"h3"},{"title":"Using workers to split up the load","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#using-workers-to-split-up-the-load","content":" Another way to increase the server capacity is to put the storage into a Worker thread so that the "main" thread with the webserver can handle more requests. This might be easier to set up compared to using multiple JavaScript processes and a load balancer.
","version":"Next","tagName":"h3"},{"title":"Use an in-memory storage at the user facing level","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#use-an-in-memory-storage-at-the-user-facing-level","content":" Another way to serve more requests to your end users is to use an in-memory storage, which has the best read and write performance. It outperforms persistent storages by a factor of 10. So instead of directly serving requests from the persistence layer, you add an in-memory layer on top of that. You could either do a replication from your memory database to the persistent one, or you use the memory synced storage which has this built in. import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; import { replicateRxCollection } from 'rxdb/plugins/replication'; import { getRxStorageFilesystemNode } from 'rxdb-premium/plugins/storage-filesystem-node'; import { getMemorySyncedRxStorage } from 'rxdb-premium/plugins/storage-memory-synced'; const myRxDatabase = await createRxDatabase({ name: 'mydb', storage: getMemorySyncedRxStorage({ storage: getRxStorageFilesystemNode({ basePath: path.join(__dirname, 'my-database-folder') }) }) }); await myRxDatabase.addCollections({/* ... */}); const myServer = await startRxServer({ database: myRxDatabase, port: 443 }); But notice that you have to check your persistence requirements. When a write happens to the memory layer and the server crashes before the write has been persisted, in rare cases the write operation might get lost. You can remove that risk by setting awaitWritePersistence: true on the memory synced storage settings. ","version":"Next","tagName":"h3"},{"title":"Horizontal Scaling","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#horizontal-scaling","content":" To scale the RxDB Server beyond a single physical hardware unit, there are different solutions; the decision depends on the exact use case.
","version":"Next","tagName":"h2"},{"title":"Single Datastore with multiple branches","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#single-datastore-with-multiple-branches","content":" The most common way to use multiple servers with RxDB is to split up the server into a tree with a root "datastore" and multiple "branches". The datastore contains the persisted data and only serves as a replication endpoint for the branches. The branches themselves will replicate data to and from the datastore and serve requests to the end users. This is mostly useful on read-heavy applications because reads will directly run on the branches without ever reaching the main datastore and you can always add more branches to scale up. Even adding additional layers of "datastores" is possible so the tree can grow (or shrink) with the demand. ","version":"Next","tagName":"h3"},{"title":"Moving the branches to \"the edge\"","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#moving-the-branches-to-the-edge","content":" Instead of running the "branches" of the tree in the same physical location as the datastore, it often makes sense to move the branches into a datacenter near the end users. Because the RxDB replication algorithm is made to work with slow and even partially offline users, using it for physically separated servers will work the same way. Latency is not that important because writes and reads will not decrease performance by blocking each other and the replication can run in the background without blocking other servers during transactions.
","version":"Next","tagName":"h3"},{"title":"Replicate Databases for Microservices","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#replicate-databases-for-microservices","content":" If your application is built with a microservice architecture and your microservices are also built in Node.js, you can scale the database horizontally by moving the database into the microservices and using the RxDB replication to do a realtime sync between the microservices and a main "datastore" server. The "datastore" server would then only handle the replication requests or do some additional things like logging or backups. The compute for reads and writes will then mainly be done on the microservices themselves. This simplifies setting up more and more microservices without decreasing the performance of the whole system. ","version":"Next","tagName":"h3"},{"title":"Use a self-scaling RxStorage","type":1,"pageTitle":"Scaling the RxServer","url":"/rx-server-scaling.html#use-a-self-scaling-rxstorage","content":" As an alternative to scaling up the RxDB servers themselves, you can switch to a RxStorage which scales up internally. For example the FoundationDB storage or MongoDB can work on top of a cluster that can handle more load by adding more servers to itself. With that you can always add more Node.js RxDB processes that connect to the same cluster and serve requests from it. ","version":"Next","tagName":"h3"},{"title":"RxDB Server","type":0,"sectionRef":"#","url":"/rx-server.html","content":"","keywords":"","version":"Next"},{"title":"Starting a RxServer","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#starting-a-rxserver","content":" To create an RxServer, you have to install the rxdb-server package with npm install rxdb-server --save and then you can import the createRxServer() function and create a server on a given RxDatabase and adapter.
After adding the endpoints to the server, do not forget to call myServer.start() to start the actual HTTP server. import { createRxServer } from 'rxdb-server/plugins/server'; /** * We use the express adapter which is the one that comes with RxDB core */ import { RxServerAdapterExpress } from 'rxdb-server/plugins/adapter-express'; const myServer = await createRxServer({ database: myRxDatabase, adapter: RxServerAdapterExpress, port: 443 }); // add endpoints here (see below) // after adding the endpoints, start the server await myServer.start(); ","version":"Next","tagName":"h2"},{"title":"Using RxServer with Fastify","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#using-rxserver-with-fastify","content":" There is also a RxDB Premium 👑 adapter to use the RxServer with Fastify instead of express. Fastify has shown to have better performance and in general is more modern. import { createRxServer } from 'rxdb-server/plugins/server'; import { RxServerAdapterFastify } from 'rxdb-premium/plugins/server-adapter-fastify'; const myServer = await createRxServer({ database: myRxDatabase, adapter: RxServerAdapterFastify, port: 443 }); await myServer.start(); ","version":"Next","tagName":"h3"},{"title":"Using RxServer with Koa","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#using-rxserver-with-koa","content":" There is also a RxDB Premium 👑 adapter to use the RxServer with Koa instead of express. Koa has shown to have better performance compared to express. import { createRxServer } from 'rxdb-server/plugins/server'; import { RxServerAdapterKoa } from 'rxdb-premium/plugins/server-adapter-koa'; const myServer = await createRxServer({ database: myRxDatabase, adapter: RxServerAdapterKoa, port: 443 }); await myServer.start(); ","version":"Next","tagName":"h3"},{"title":"RxServer Endpoints","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#rxserver-endpoints","content":" On top of the RxServer you can add different types of endpoints.
An endpoint is always connected to exactly one RxCollection and it only serves data from that single collection. For now there are only two endpoints implemented, the replication endpoint and the REST endpoint. Others will be added in the future. An endpoint is added to the server by calling the add endpoint method like myRxServer.addReplicationEndpoint(). Each needs a different name string as input which will define the resulting endpoint url. The endpoint url is a combination of the given name and the schema version of the collection, like /my-endpoint/0. const myEndpoint = server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection }); console.log(myEndpoint.urlPath) // > 'my-endpoint/0' Notice that it is not required that the server side schema version is equal to the client side schema version. You might want to change server schemas more often and then only do a migration on the server, not on the clients. ","version":"Next","tagName":"h2"},{"title":"Replication Endpoint","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#replication-endpoint","content":" The replication endpoint allows clients that connect to it to replicate data with the server via the RxDB replication protocol. There is also the Replication Server plugin that is used on the client side to connect to the endpoint. The endpoint is added to the server with the addReplicationEndpoint() method. It requires a specific collection and the endpoint will only provide replication for documents inside of that collection.
// > server.ts const endpoint = server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection }); Then you can start the Server Replication on the client: // > client.ts const replicationState = await replicateServer({ collection: usersCollection, replicationIdentifier: 'my-server-replication', url: 'http://localhost:80/my-endpoint/0', push: {}, pull: {} }); ","version":"Next","tagName":"h2"},{"title":"REST endpoint","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#rest-endpoint","content":" The REST endpoint exposes various methods to access the data from the RxServer with non-RxDB tools via plain HTTP operations. You can use it to connect apps that are programmed in different programming languages than JavaScript or to access data from other third party tools. Creating a REST endpoint on a RxServer: const endpoint = await server.addRestEndpoint({ name: 'my-endpoint', collection: myServerCollection }); // plain http request with fetch const request = await fetch('http://localhost:80/' + endpoint.urlPath + '/query', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify({ selector: {} }) }); const response = await request.json(); There is also the client-rest plugin that provides typesafe interactions with the REST endpoint: // using the client (optional) import { createRestClient } from 'rxdb-server/plugins/client-rest'; const client = createRestClient('http://localhost:80/' + endpoint.urlPath, {/* headers */}); const response = await client.query({ selector: {} }); The REST endpoint exposes the following paths: query [POST]: Fetch the results of a NoSQL query. query/observe [GET]: Observe a query's results via Server-Sent Events. get [POST]: Fetch multiple documents by their primary key. set [POST]: Write multiple documents at once. delete [POST]: Delete multiple documents by their primary key. 
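The REST operation paths above are all rooted at the endpoint's urlPath, which is the endpoint name plus the collection's schema version (e.g. my-endpoint/0). As a small illustration, the full URL for one of the listed operations can be assembled like this (buildRestUrl is a hypothetical helper for this example, not part of RxDB):

```javascript
// Hypothetical helper that assembles the full URL for one of the
// REST operations listed above ('query', 'get', 'set', 'delete', ...).
// The urlPath of an endpoint is its name plus the collection's schema
// version, e.g. 'my-endpoint/0'.
function buildRestUrl(serverOrigin, endpointName, schemaVersion, operation) {
    const urlPath = endpointName + '/' + schemaVersion;
    return serverOrigin + '/' + urlPath + '/' + operation;
}

buildRestUrl('http://localhost:80', 'my-endpoint', 0, 'query');
// 'http://localhost:80/my-endpoint/0/query'
```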
","version":"Next","tagName":"h2"},{"title":"CORS","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#cors","content":" When creating a server or adding endpoints, you can specify a CORS string. Endpoint cors always overwrites server cors. The default is the wildcard * which allows all requests. const myServer = await createRxServer({ database: myRxDatabase, cors: 'http://example.com', port: 443 }); const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection, cors: 'http://example.com' }); ","version":"Next","tagName":"h2"},{"title":"Auth handler","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#auth-handler","content":" To authenticate users and to make user-specific data available on server requests, an authHandler must be provided that parses the headers and returns the actual auth data that is used to authenticate the client and in the queryModifier and changeValidator. An auth handler gets the given headers object as input and returns the auth data in the format { data: {}, validUntil: 1706579817126}. The data field can contain any data that can be used afterwards in the queryModifier and changeValidator. The validUntil field contains the unix timestamp in milliseconds at which the authentication is no longer valid and the client will get disconnected. For example your authHandler could get the Authorization header and parse the JSON web token to identify the user and store the user id in the data field for later use. ","version":"Next","tagName":"h2"},{"title":"Query modifier","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#query-modifier","content":" The query modifier is a JavaScript function that is used to restrict which documents a client can fetch or replicate from the server. It gets the auth data and the actual NoSQL query as input parameters and returns a modified NoSQL query that is then used internally by the server. 
You can pass a different query modifier to each endpoint so that you can have different endpoints for different use cases on the same server. For example you could use a query modifier that gets the userId from the auth data and then restricts the query to only return documents that have the same userId set. function myQueryModifier(authData, query) { query.selector.userId = { $eq: authData.data.userid }; return query; } const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection, queryModifier: myQueryModifier }); The RxServer will use the queryModifier at many places internally to determine which queries to run or if a document is allowed to be seen/edited by a client. note For performance reasons the queryModifier and changeValidator MUST NOT be async or return a promise. If you need async data to run them, you should gather that data in the RxServerAuthHandler and store it in the auth data to access it later. ","version":"Next","tagName":"h2"},{"title":"Change validator","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#change-validator","content":" The change validator is a JavaScript function that is used to restrict which document writes are allowed to be done by a client. For example you could restrict clients to only change specific document fields or to not do any document writes at all. It can also be used to validate changed document data before storing it at the server. In this example we restrict clients from doing inserts and only allow updates. For that we check if the change contains an assumedMasterState property and return false to block the write. 
function myChangeValidator(authData, change) { if(change.assumedMasterState) { return false; } else { return true; } } const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: myServerCollection, changeValidator: myChangeValidator }); Server-only indexes Normal RxDB schema indexes get the _deleted field prepended because all RxQueries automatically only search for documents with _deleted=false. When you use RxDB on a server, this might not be optimal because there can be the need to query for documents where the value of _deleted does not matter. Mostly this is required in the pull.stream$ of a replication when a queryModifier is used to add an additional field to the query. To set indexes without _deleted, you can use the internalIndexes field of the schema like the following: { "version": 0, "primaryKey": "id", "type": "object", "properties": { "id": { "type": "string", "maxLength": 100 }, "name": { "type": "string", "maxLength": 100 } }, "internalIndexes": [ ["name", "id"] ] } note Indexes come with a performance burden. You should only use the indexes you need and make sure you do not accidentally set the internalIndexes in your client side RxCollections. ","version":"Next","tagName":"h2"},{"title":"Server-only fields","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#server-only-fields","content":" All endpoints can be created with the serverOnlyFields set which defines some fields to only exist on the server, not on the clients. Clients will not see those fields and cannot do writes where one of the serverOnlyFields is set. Notice that when you use serverOnlyFields you likely need to have a different schema on the server than the schema that is used on the clients. 
const endpoint = await server.addReplicationEndpoint({ name: 'my-endpoint', collection: col, // here the field 'my-secrets' is defined to be server-only serverOnlyFields: ['my-secrets'] }); note For performance reasons, only top-level fields can be used as serverOnlyFields. Otherwise the server would have to deep-clone all document data which is too expensive. ","version":"Next","tagName":"h2"},{"title":"Readonly fields","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#readonly-fields","content":" When you have fields that should only be modified by the server, but not by the client, you can ensure that by comparing the field's value in the changeValidator. const myChangeValidator = function(authData, change){ if(change.newDocumentState.myReadonlyField !== change.assumedMasterState.myReadonlyField){ throw new Error('myReadonlyField is readonly'); } } ","version":"Next","tagName":"h2"},{"title":"$regex queries not allowed","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#regex-queries-not-allowed","content":" $regex queries are not allowed to run at the server to prevent ReDoS attacks. ","version":"Next","tagName":"h2"},{"title":"Conflict handling","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#conflict-handling","content":" To detect and handle conflicts, the conflict handler from the endpoint's RxCollection is used. ","version":"Next","tagName":"h2"},{"title":"Missing features","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#missing-features","content":" The server plugin is in beta mode and some features are still missing. Make a Pull Request when you need them. ","version":"Next","tagName":"h2"},{"title":"FAQ","type":1,"pageTitle":"RxDB Server","url":"/rx-server.html#faq","content":" Why are the server plugins in a different github repo and npm package? 
The RxServer and its other plugins are in a different github repository because: It has too many dependencies that you do not want to install if you only use RxDB at the client side It has a different license (SSPL) to prevent large cloud vendors from "stealing" the revenue, similar to MongoDB's license. Why can't endpoints be added dynamically? After RxServer.start() is called, you can no longer add endpoints. This is because many of the supported server libraries do not allow dynamic routing for performance and security reasons. ","version":"Next","tagName":"h2"},{"title":"RxQuery","type":0,"sectionRef":"#","url":"/rx-query.html","content":"","keywords":"","version":"Next"},{"title":"find()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#find","content":" To create a basic RxQuery, call .find() on a collection and insert selectors. The result-set of normal queries is an array with documents. // find all that are older than 18 const query = myCollection .find({ selector: { age: { $gt: 18 } } }); ","version":"Next","tagName":"h2"},{"title":"findOne()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#findone","content":" A findOne-query has only a single RxDocument or null as result-set. // find alice const query = myCollection .findOne({ selector: { name: 'alice' } }); // find the youngest one const query = myCollection .findOne({ selector: {}, sort: [ {age: 'asc'} ] }); // find one document by the primary key const query = myCollection.findOne('foobar'); ","version":"Next","tagName":"h2"},{"title":"exec()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#exec","content":" Returns a Promise that resolves with the result-set of the query. const query = myCollection.find(); const results = await query.exec(); console.dir(results); // > [RxDocument,RxDocument,RxDocument..] 
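To illustrate how a mango-style selector such as { age: { $gt: 18 } } decides whether a document matches, here is a minimal matcher sketch (for illustration only; this is not RxDB's query engine, which supports many more operators):

```javascript
// Minimal sketch of mango-style selector matching.
// Only $eq and $gt are implemented here.
function matchesSelector(doc, selector) {
    return Object.entries(selector).every(([field, condition]) => {
        if (condition !== null && typeof condition === 'object') {
            return Object.entries(condition).every(([op, value]) => {
                if (op === '$eq') { return doc[field] === value; }
                if (op === '$gt') { return doc[field] > value; }
                throw new Error('operator not supported in this sketch: ' + op);
            });
        }
        // a plain value is shorthand for $eq
        return doc[field] === condition;
    });
}

matchesSelector({ name: 'alice', age: 21 }, { age: { $gt: 18 } }); // true
matchesSelector({ name: 'bob', age: 17 }, { age: { $gt: 18 } }); // false
```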
","version":"Next","tagName":"h2"},{"title":"Query Builder","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#query-builder","content":" To use chained query methods, you can use the query-builder plugin. // add the query builder plugin import { addRxPlugin } from 'rxdb'; import { RxDBQueryBuilderPlugin } from 'rxdb/plugins/query-builder'; addRxPlugin(RxDBQueryBuilderPlugin); // now you can use chained query methods const query = myCollection.find().where('age').gt(18); ","version":"Next","tagName":"h2"},{"title":"Observe $","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#observe-","content":" A BehaviorSubject that always has the current result-set as value. This is extremely helpful when used together with UIs that should always show the same state as what is written in the database. const query = myCollection.find(); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); // > 'got results: 5' // BehaviorSubjects emit on subscription await myCollection.insert({/* ... */}); // insert one // > 'got results: 6' // $.subscribe() was called again with the new results // stop watching this query querySub.unsubscribe() ","version":"Next","tagName":"h2"},{"title":"update()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#update","content":" Runs an update on every RxDocument of the query-result. // to use the update() method, you need to add the update plugin. import { RxDBUpdatePlugin } from 'rxdb/plugins/update'; addRxPlugin(RxDBUpdatePlugin); const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.update({ $inc: { age: 1 // increases age of every found document by 1 } }); ","version":"Next","tagName":"h2"},{"title":"patch() / incrementalPatch()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#patch--incrementalpatch","content":" Runs the RxDocument.patch() function on every RxDocument of the query result. 
const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.patch({ age: 12 // set the age of every found document to 12 }); ","version":"Next","tagName":"h2"},{"title":"modify() / incrementalModify()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#modify--incrementalmodify","content":" Runs the RxDocument.modify() function on every RxDocument of the query result. const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.modify((docData) => { docData.age = docData.age + 1; // increases age of every found document by 1 return docData; }); ","version":"Next","tagName":"h2"},{"title":"remove() / incrementalRemove()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#remove--incrementalremove","content":" Deletes all found documents. Returns a promise which resolves to the deleted documents. // All documents where the age is less than 18 const query = myCollection.find({ selector: { age: { $lt: 18 } } }); // Remove the documents from the collection const removedDocs = await query.remove(); ","version":"Next","tagName":"h2"},{"title":"doesDocumentDataMatch()","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#doesdocumentdatamatch","content":" Returns true if the given document data matches the query. const documentData = { id: 'foobar', age: 19 }; myCollection.find({ selector: { age: { $gt: 18 } } }).doesDocumentDataMatch(documentData); // > true myCollection.find({ selector: { age: { $gt: 20 } } }).doesDocumentDataMatch(documentData); // > false ","version":"Next","tagName":"h2"},{"title":"Query Examples","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#query-examples","content":" Here are some examples to quickly learn how to write queries without reading the docs. 
Pouch-find-docs - learn how to use mango-queries; mquery-docs - learn how to use chained-queries // directly pass search-object myCollection.find({ selector: { name: { $eq: 'foo' } } }) .exec().then(documents => console.dir(documents)); /* * find by using sql equivalent '%like%' syntax * This example will e.g. match 'foo' but also 'fifoo' or 'foofa' or 'fifoofa' * Notice that in RxDB queries, a regex is represented as a $regex string with the $options parameter for flags. * Using a RegExp instance is not allowed because they are not JSON.stringify()-able and also * RegExp instances are mutable which could cause undefined behavior when the RegExp is mutated * after the query was parsed. */ myCollection.find({ selector: { name: { $regex: '.*foo.*' } } }) .exec().then(documents => console.dir(documents)); // find using a composite statement e.g. $or // This example checks where name is either foo or if name is not existent on the document myCollection.find({ selector: { $or: [ { name: { $eq: 'foo' } }, { name: { $exists: false } }] } }) .exec().then(documents => console.dir(documents)); // do a case insensitive search // This example will match 'foo' or 'FOO' or 'FoO' etc... myCollection.find({ selector: { name: { $regex: '^foo$', $options: 'i' } } }) .exec().then(documents => console.dir(documents)); // chained queries myCollection.find().where('name').eq('foo') .exec().then(documents => console.dir(documents)); ","version":"Next","tagName":"h2"},{"title":"Setting a specific index","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#setting-a-specific-index","content":" By default, the query will be sent to the RxStorage, where a query planner will determine which one of the available indexes must be used. But the query planner cannot know everything and sometimes will not pick the most optimal index. To improve query performance, you can specify which index must be used when running the query. 
const query = myCollection .findOne({ selector: { age: { $gt: 18 }, gender: { $eq: 'm' } }, /** * Because the developer knows that 50% of the documents are 'male', * but only 20% are below age 18, * it makes sense to enforce using the ['gender', 'age'] index to improve performance. * This could not be known by the query planner which might have chosen ['age', 'gender'] instead. */ index: ['gender', 'age'] }); ","version":"Next","tagName":"h2"},{"title":"Count","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#count","content":" When you only need the amount of documents that match a query, but you do not need the document data itself, you can use a count query for better performance. The performance difference compared to a normal query differs depending on which RxStorage implementation is used. const query = myCollection.count({ selector: { age: { $gt: 18 } } // 'limit' and 'skip' MUST NOT be set for count queries. }); // get the count result once const matchingAmount = await query.exec(); // > number // observe the result query.$.subscribe(amount => { console.log('Currently has ' + amount + ' documents'); }); note Count queries have a better performance than normal queries because they do not have to fetch the full document data out of the storage. Therefore it is not possible to run a count() query with a selector that requires fetching and comparing the document data. So if your query selector does not fully match an index of the schema, it is not allowed to run it. These queries would have no performance benefit compared to normal queries but have the tradeoff of not using the fetched document data for caching. /** * The following will throw an error because * the count operation cannot run on any specific index range * because the $regex operator is used. 
*/ const query = myCollection.count({ selector: { age: { $regex: 'foobar' } } }); /** * The following will throw an error because * the count operation cannot run on any specific index range * because there is no ['age' ,'otherNumber'] index * defined in the schema. */ const query = myCollection.count({ selector: { age: { $gt: 20 }, otherNumber: { $gt: 10 } } }); If you want to count these kinds of queries, you should do a normal query instead and use the length of the result set as a counter. This has the same performance as running a non-fully-indexed count which has to fetch all document data from the database and run a query matcher. // get count manually once const resultSet = await myCollection.find({ selector: { age: { $regex: 'foobar' } } }).exec(); const count = resultSet.length; // observe count manually const count$ = myCollection.find({ selector: { age: { $regex: 'foobar' } } }).$.pipe( map(result => result.length) ); /** * To allow non-fully-indexed count queries, * you can also specify that by setting allowSlowCount=true * when creating the database. */ const database = await createRxDatabase({ name: 'mydatabase', allowSlowCount: true, // set this to true [default=false] /* ... */ }); ","version":"Next","tagName":"h2"},{"title":"allowSlowCount","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#allowslowcount","content":" To allow non-fully-indexed count queries, you can also specify that by setting allowSlowCount: true when creating the database. Doing this is mostly not wanted, because it would run the counting on the storage without having the document stored in the RxDB document cache. This is only recommended if the RxStorage is running remotely like in a WebWorker and you do not always want to send the document-data between the worker and the main thread. In this case you might only need the count-result instead to save performance. 
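The cost of a non-fully-indexed count described above can be illustrated in plain JavaScript: every document has to be materialized and run through the query matcher before the length can be taken (the documents and the matcher below are made up for this illustration):

```javascript
// Illustration: a non-fully-indexed count must load every document,
// run the query matcher over each one, and then take the length.
const docs = [
    { id: 'a', name: 'foobar1' },
    { id: 'b', name: 'barfoo' },
    { id: 'c', name: 'foobar2' }
];
// stand-in for a $regex 'foobar' selector matcher
const matcher = doc => /foobar/.test(doc.name);
// the "count" is just the length of the fully materialized result set
const count = docs.filter(matcher).length; // 2
```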
","version":"Next","tagName":"h3"},{"title":"RxDB will always append the primary key to the sort parameters","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#rxdb-will-always-append-the-primary-key-to-the-sort-parameters","content":" For several performance optimizations, like the EventReduce algorithm, RxDB expects all queries to return a deterministic sort order that does not depend on the insert order of the documents. To ensure a deterministic ordering, RxDB will always append the primary key as the last sort parameter to all queries and to all indexes. This works in contrast to most other databases where a query without sorting would return the documents in the order in which they had been inserted into the database. ","version":"Next","tagName":"h2"},{"title":"RxQuery's are immutable","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#rxquerys-are-immutable","content":" Because RxDB is a reactive database, we can do heavy performance-optimisation on query-results which change over time. To be able to do this, RxQuery's have to be immutable. This means, when you have a RxQuery and run a .where() on it, the original RxQuery-Object is not changed. Instead the where-function returns a new RxQuery-Object with the changed where-field. Keep this in mind if you create RxQuery's and change them afterwards. Example: const queryObject = myCollection.find().where('age').gt(18); // Creates a new RxQuery object, does not modify previous one queryObject.sort('name'); const results = await queryObject.exec(); console.dir(results); // result-documents are not sorted by name const queryObjectSort = queryObject.sort('name'); const results = await queryObjectSort.exec(); console.dir(results); // result-documents are now sorted ","version":"Next","tagName":"h2"},{"title":"isRxQuery","type":1,"pageTitle":"RxQuery","url":"/rx-query.html#isrxquery","content":" Returns true if the given object is an instance of RxQuery. Returns false if not. 
const is = isRxQuery(myObj); ","version":"Next","tagName":"h3"},{"title":"RxState","type":0,"sectionRef":"#","url":"/rx-state.html","content":"","keywords":"","version":"Next"},{"title":"Creating a RxState","type":1,"pageTitle":"RxState","url":"/rx-state.html#creating-a-rxstate","content":" A RxState instance is created on top of a RxDatabase. The state will automatically be persisted with the storage that was used when setting up the RxDatabase. To use it you first have to import the RxDBStatePlugin and add it to RxDB with addRxPlugin(). To create a state call the addState() method on the database instance. Calling addState multiple times will automatically be de-duplicated and only create a single RxState object. import { createRxDatabase, addRxPlugin } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // first add the RxState plugin to RxDB import { RxDBStatePlugin } from 'rxdb/plugins/state'; addRxPlugin(RxDBStatePlugin); const database = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), }); // create a state instance const myState = await database.addState(); // you can also create states with a given namespace const myChildState = await database.addState('myNamespace'); ","version":"Next","tagName":"h2"},{"title":"Writing data and Persistence","type":1,"pageTitle":"RxState","url":"/rx-state.html#writing-data-and-persistense","content":" Writing data to the state happens via a so-called modifier. It is a simple JavaScript function that gets the current value as input and returns the new, modified value. 
For example to increase the value of myField by one, you would use a modifier that increases the current value: // initially set value to zero await myState.set('myField', v => 0); // increase value by one await myState.set('myField', v => v + 1); // update value to be 42 await myState.set('myField', v => 42); The modifier is used instead of a direct assignment to ensure correct behavior when other JavaScript realms write to the state at the same time, like other browser tabs or webworkers. On conflicts, the modifier will just be run again to ensure deterministic and correct behavior. Mutation is therefore async; you have to await the call to the set function when you care about the moment when the change actually happened. ","version":"Next","tagName":"h2"},{"title":"Get State Data","type":1,"pageTitle":"RxState","url":"/rx-state.html#get-state-data","content":" The state stored inside of a RxState instance can be seen as a big single JSON object that contains all data. You can fetch the whole object or partially get a single property or nested ones. Fetching data can either happen with the .get() method or by accessing the field directly like myRxState.myField. // get root state data const val = myState.get(); // get single property const val = myState.get('myField'); const val = myState.myField; // get nested property const val = myState.get('myField.childfield'); const val = myState.myField.childfield; // get nested array property const val = myState.get('myArrayField[0].foobar'); const val = myState.myArrayField[0].foobar; ","version":"Next","tagName":"h2"},{"title":"Observability","type":1,"pageTitle":"RxState","url":"/rx-state.html#observability","content":" Instead of fetching the state once, you can also observe the state with either rxjs observables or custom reactivity handlers like signals or hooks. 
Rxjs observables can be created by either using the .get$() method or by accessing the top level property suffixed with a dollar sign like myState.myField$. const observable = myState.get$('myField'); const observable = myState.myField$; // then you can subscribe to that observable observable.subscribe(newValue => { // update the UI }); Subscription works across multiple JavaScript realms like browser tabs or Webworkers. ","version":"Next","tagName":"h2"},{"title":"RxState with signals and hooks","type":1,"pageTitle":"RxState","url":"/rx-state.html#rxstate-with-signals-and-hooks","content":" With the double-dollar sign you can also access custom reactivity instances like signals or hooks. These are easier to use compared to rxjs, depending on which JavaScript framework you are using. For example in Angular, to use signals you would first add a reactivity factory to your database and then access the signals of the RxState: import { RxReactivityFactory, createRxDatabase } from 'rxdb/plugins/core'; import { toSignal } from '@angular/core/rxjs-interop'; const reactivityFactory: RxReactivityFactory<ReactivityType> = { fromObservable(obs, initialValue) { return toSignal(obs, { initialValue }); } }; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageDexie(), reactivity: reactivityFactory }); const myState = await database.addState(); const mySignal = myState.get$$('myField'); const mySignal = myState.myField$$; ","version":"Next","tagName":"h2"},{"title":"Cleanup RxState operations","type":1,"pageTitle":"RxState","url":"/rx-state.html#cleanup-rxstate-operations","content":" For faster writes, changes to the state are only written as a list of operations to disk. After some time you might have too many operations written which would delay the initial state creation. 
To automatically merge the state operations into a single operation and clear the old operations, you should add the Cleanup Plugin before creating the RxDatabase: import { addRxPlugin } from 'rxdb'; import { RxDBCleanupPlugin } from 'rxdb/plugins/cleanup'; addRxPlugin(RxDBCleanupPlugin); ","version":"Next","tagName":"h2"},{"title":"Correctness over Performance","type":1,"pageTitle":"RxState","url":"/rx-state.html#correctness-over-performance","content":" RxState is optimized for correctness, not for performance. Compared to other state libraries, RxState directly persists data to storage and ensures write conflicts are handled properly. Other state libraries operate mainly in-memory and lazily persist to disk without caring about conflicts or multiple browser tabs, which can cause problems and hard-to-reproduce bugs. RxState still uses RxDB which has a range of great performing storages so the write speed is more than sufficient. Also to further improve write performance you can use multiple RxState instances (with different namespaces) to split writes across multiple storage instances. Reads happen directly in-memory which makes RxState read performance comparable to other state libraries. ","version":"Next","tagName":"h2"},{"title":"RxState Replication","type":1,"pageTitle":"RxState","url":"/rx-state.html#rxstate-replication","content":" Because the state data is stored inside of an internal RxCollection you can easily use the RxDB Replication to sync data between users or devices of the same user. 
For example with the P2P WebRTC replication you can start the replication on the collection and automatically sync the RxState operations between users directly: import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const database = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageDexie(), }); const myState = await database.addState(); const replicationPool = await replicateWebRTC( { collection: myState.collection, topic: 'my-state-replication-pool', connectionHandlerCreator: getConnectionHandlerSimplePeer({}), pull: {}, push: {} } ); ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"RxState","url":"/rx-state.html#limitations","content":" RxState is in beta mode; it might get breaking changes without a major RxDB version release. ","version":"Next","tagName":"h2"},{"title":"RxDB Database on top of Deno Key Value Store (beta)","type":0,"sectionRef":"#","url":"/rx-storage-denokv.html","content":"","keywords":"","version":"Next"},{"title":"What is DenoKV","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#what-is-denokv","content":" DenoKV is a strongly consistent key-value storage, globally replicated for low-latency reads across 35 worldwide regions via Deno Deploy. When you release your Deno application on Deno Deploy, it will start an instance in each of the 35 worldwide regions. This edge deployment guarantees minimal latency when serving requests to end users' devices around the world. DenoKV is a shared storage which shares its state across all instances. But, because DenoKV is "only" a Key-Value storage, it only supports basic CRUD operations on datasets and indexes. Complex features like queries, encryption, compression or client-server replication are missing. Using RxDB on top of DenoKV fills this gap and makes it easy to build realtime offline-first applications on top of a Deno backend. 
","version":"Next","tagName":"h2"},{"title":"Use cases","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#use-cases","content":" Using RxDB-DenoKV instead of plain DenoKV can have a wide range of benefits depending on your use case. Reduce vendor lock-in: RxDB has a swappable storage layer which allows you to swap out the underlying storage of your database. If you ever decide to move away from DenoDeploy or Deno at all, you do not have to refactor your whole application and instead just swap the storage plugin. For example if you decide to migrate to Node.js, you can use the FoundationDB RxStorage and store your data there. DenoKV is also implemented on top of FoundationDB so you can get similar performance. Alternatively RxDB supports a wide range of storage plugins you can choose from. Add reactiveness: DenoKV is a plain request-response datastore. While it supports observation of single rows by id, it does not allow observing row-ranges or events. This makes it hard or even impossible to build realtime applications with it because polling would be the only way to watch ranges of key-value pairs. With RxDB on top of DenoKV, changes to the database are shared between DenoDeploy instances so when you observe a query you can be sure that it is always up to date, no matter which instance has changed the document. Internally RxDB uses the Deno BroadcastChannel API to share events between instances. Reuse Client and Server Code: When you use RxDB on the server and on the client side, many parts of your code can be reused on both sides which decreases development time significantly. Replicate from DenoKV to a local RxDB state: Instead of running all operations against the global DenoKV, you can run a realtime-replication between a DenoKV-RxDatabase and a locally stored dataset or maybe even an in-memory stored one. 
This improves query performance and can reduce your Deno Deploy cloud costs because fewer operations run against the DenoKV; they run locally instead. Replicate with other backends: The RxDB replication protocol is pretty simple and allows you to easily build a replication with any backend architecture. For example if you already have your data stored in a self-hosted MySQL server, you can use RxDB to do a realtime replication of that data into a DenoKV RxDatabase instance. RxDB also has many plugins for replication with backends/protocols like GraphQL, Websocket, CouchDB, WebRTC, Firestore and NATS. ","version":"Next","tagName":"h2"},{"title":"Using the DenoKV RxStorage","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#using-the-denokv-rxstorage","content":" To use the DenoKV RxStorage with RxDB, you import the getRxStorageDenoKV function from the plugin and set it as storage when calling createRxDatabase: import { createRxDatabase } from 'rxdb'; import { getRxStorageDenoKV } from 'rxdb/plugins/storage-denokv'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDenoKV({ /** * Consistency level, either 'strong' or 'eventual' * (Optional) default='strong' */ consistencyLevel: 'strong', /** * Path which is used in the first argument of Deno.openKv(settings.openKvPath) * (Optional) default='' */ openKvPath: './foobar', /** * Some operations have to run in batches, * you can test different batch sizes to improve performance. * (Optional) default=100 */ batchSize: 100 }) }); On top of that RxDatabase you can then create your collections and run operations. Follow the quickstart to learn more about how to use RxDB. 
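Because DenoKV itself only offers sorted key-value access, RxDB answers indexed queries by encoding index fields into lexicographically sortable keys and running range scans. The following is a conceptual sketch of that idea, not RxDB's actual code; buildIndexKey and the sample documents are hypothetical:

```javascript
// Conceptual sketch (not RxDB's actual implementation): how a document
// database can layer a sorted secondary index over a plain key-value
// store like DenoKV. Keys are built so that lexicographic string order
// matches the index order, which makes range scans answer queries.
function buildIndexKey(doc, fields) {
  // Pad numbers to a fixed width so string sorting equals numeric sorting.
  return fields
    .map(f => {
      const v = doc[f];
      return typeof v === 'number' ? String(v).padStart(10, '0') : String(v);
    })
    .join('|');
}

const docs = [
  { id: 'a', age: 9, name: 'alice' },
  { id: 'b', age: 30, name: 'bob' },
  { id: 'c', age: 9, name: 'carol' }
];

// Store document ids under index keys; a range scan over the sorted keys
// now answers queries like "age = 9, ordered by name" without loading
// and filtering every document.
const index = new Map(
  docs.map(d => [buildIndexKey(d, ['age', 'name']), d.id])
);
const sortedKeys = [...index.keys()].sort();
```

A real implementation also has to handle key escaping and deletes, but the sortable-key layout is the core trick that turns a key-value store into a queryable index.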
","version":"Next","tagName":"h2"},{"title":"Using non-DenoKV storages in Deno","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#using-non-denokv-storages-in-deno","content":" When you use storages other than the DenoKV storage inside of a Deno app, make sure you set multiInstance: false when creating the database. Also you should only run one process per Deno-Deploy instance. This ensures your events are not mixed up by the BroadcastChannel across instances which would lead to wrong behavior. // DenoKV based database const db = await createRxDatabase({ name: 'denokvdatabase', storage: getRxStorageDenoKV(), /** * Use multiInstance: true so that the Deno Broadcast Channel * emits events across DenoDeploy instances * (true is also the default, so you can skip this setting) */ multiInstance: true }); // Non-DenoKV based database const db = await createRxDatabase({ name: 'denokvdatabase', storage: getRxStorageFilesystemNode(), /** * Use multiInstance: false so that it does not share events * across instances because the stored data is anyway not shared * between them. */ multiInstance: false }); ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"RxDB Database on top of Deno Key Value Store (beta)","url":"/rx-storage-denokv.html#limitations","content":" The DenoKV RxStorage is currently in beta mode. There might be breaking changes without a major RxDB version release. ","version":"Next","tagName":"h2"},{"title":"RxStorage Dexie.js","type":0,"sectionRef":"#","url":"/rx-storage-dexie.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#pros","content":" Can use Dexie.js addons. 
","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#cons","content":" Does not use a Batched Cursor or custom indexes, which makes queries slower compared to the IndexedDB RxStorage. ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie() }); ","version":"Next","tagName":"h2"},{"title":"Overwrite/Polyfill the native IndexedDB","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#overwritepolyfill-the-native-indexeddb","content":" Node.js has no IndexedDB API. To still run the Dexie RxStorage in Node.js, for example to run unit tests, you have to polyfill it. You can do that by using the fake-indexeddb module and passing it to the getRxStorageDexie() function. import { createRxDatabase } from 'rxdb'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; //> npm install fake-indexeddb --save const fakeIndexedDB = require('fake-indexeddb'); const fakeIDBKeyRange = require('fake-indexeddb/lib/FDBKeyRange'); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie({ indexedDB: fakeIndexedDB, IDBKeyRange: fakeIDBKeyRange }) }); ","version":"Next","tagName":"h2"},{"title":"Using addons","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#using-addons","content":" Dexie.js has its own plugin system with many plugins for encryption, replication or other use cases. With the Dexie.js RxStorage you can use the same plugins by passing them to the getRxStorageDexie() function. 
const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageDexie({ addons: [ /* Your Dexie.js plugins */ ] }) }); ","version":"Next","tagName":"h2"},{"title":"Disabling the non-premium console log","type":1,"pageTitle":"RxStorage Dexie.js","url":"/rx-storage-dexie.html#disabling-the-non-premium-console-log","content":" We want to be transparent with our community, and you'll notice a console message when using the free Dexie.js based RxStorage implementation. This message serves to inform you about the availability of faster storage solutions within our 👑 Premium Plugins. We understand that this might be a minor inconvenience, and we sincerely apologize for that. However, maintaining and improving RxDB requires substantial resources, and our premium users help us ensure its sustainability. If you find value in RxDB and wish to remove this message, we encourage you to explore our premium storage options, which are optimized for professional use and production environments. Thank you for your understanding and support. If you already have premium access and want to use the Dexie.js RxStorage without the log, you can call the setPremiumFlag() function to disable the log. import { setPremiumFlag } from 'rxdb-premium/plugins/shared'; setPremiumFlag(); ","version":"Next","tagName":"h2"},{"title":"Filesystem Node RxStorage (beta)","type":0,"sectionRef":"#","url":"/rx-storage-filesystem-node.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Filesystem Node RxStorage (beta)","url":"/rx-storage-filesystem-node.html#pros","content":" Easier setup compared to SQLite. Fast. ","version":"Next","tagName":"h3"},{"title":"Cons","type":1,"pageTitle":"Filesystem Node RxStorage (beta)","url":"/rx-storage-filesystem-node.html#cons","content":" It is part of the RxDB Premium 👑 plugin that must be purchased. It is in beta mode at the moment, which means it can include breaking changes without a RxDB major version increment. 
","version":"Next","tagName":"h3"},{"title":"Usage","type":1,"pageTitle":"Filesystem Node RxStorage (beta)","url":"/rx-storage-filesystem-node.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageFilesystemNode } from 'rxdb-premium/plugins/storage-filesystem-node'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageFilesystemNode({ basePath: path.join(__dirname, 'my-database-folder'), /** * Set inWorker=true if you use this RxStorage * together with the WebWorker plugin. */ inWorker: false }) }); /* ... */ ","version":"Next","tagName":"h2"},{"title":"RxDB Database on top of FoundationDB","type":0,"sectionRef":"#","url":"/rx-storage-foundationdb.html","content":"","keywords":"","version":"Next"},{"title":"Features of RxDB+FoundationDB","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#features-of-rxdbfoundationdb","content":" Using RxDB on top of FoundationDB gives you many benefits compared to using the plain FoundationDB API: Indexes: In RxDB with a FoundationDB storage layer, indexes are used to optimize query performance, allowing for fast and efficient data retrieval even in large datasets. You can define single and compound indexes with the RxDB schema. Schema Based Data Model: Utilizing a jsonschema based data model, the system offers a highly structured and versatile approach to organizing and validating data, ensuring consistency and clarity in database interactions. Complex Queries: The system supports complex NoSQL queries, allowing for advanced data manipulation and retrieval, tailored to specific needs and intricate data relationships. 
For example you can do $regex or $or queries which is hardly possible with the plain key-value access of FoundationDB. Observable Queries & Documents: RxDB's observable queries and documents feature ensures real-time updates and synchronization, providing dynamic and responsive data interactions in applications. Compression: RxDB employs data compression techniques to reduce storage requirements and enhance transmission efficiency, making it more cost-effective and faster, especially for large volumes of data. You can compress the NoSQL document data, but also the binary attachments data. Attachments: RxDB supports the storage and management of attachments, allowing for the seamless inclusion of binary data like images or documents alongside structured data within the database. ","version":"Next","tagName":"h2"},{"title":"Installation","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#installation","content":" Install the FoundationDB client cli which is used to communicate with the FoundationDB cluster. Install the FoundationDB node bindings npm module via npm install foundationdb --save. If the latest version does not work for you, you should use the same version as stated in the storage-foundationdb job of the RxDB CI main.yml. ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageFoundationDB } from 'rxdb/plugins/storage-foundationdb'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageFoundationDB({ /** * Version of the API of the FoundationDB cluster. * FoundationDB is backwards compatible across a wide range of versions, * so you have to specify the api version. * If in doubt, set it to 620. */ apiVersion: 620, /** * Path to the FoundationDB cluster file. 
* (optional) * If in doubt, leave this empty to use the default location. */ clusterFile: '/path/to/fdb.cluster', /** * Amount of documents to be fetched in batch requests. * You can change this to improve performance depending on * your database access patterns. * (optional) * [default=50] */ batchSize: 50 }) }); ","version":"Next","tagName":"h2"},{"title":"Multi Instance","type":1,"pageTitle":"RxDB Database on top of FoundationDB","url":"/rx-storage-foundationdb.html#multi-instance","content":" Because FoundationDB does not offer a changestream, it is not possible to use the same cluster from more than one Node.js process at the same time. For example you cannot spin up multiple servers with RxDB databases that all use the same cluster. There might be workarounds to create something like a FoundationDB changestream and you can make a Pull Request if you need that feature. ","version":"Next","tagName":"h2"},{"title":"IndexedDB RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-indexeddb.html","content":"","keywords":"","version":"Next"},{"title":"IndexedDB performance comparison","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#indexeddb-performance-comparison","content":" Here is a performance comparison with other storages. Compared to the non-memory storages like OPFS and Dexie.js, it has the smallest build size and fastest write speed. Only OPFS is faster on queries over big datasets. See the performance comparison page for a comparison with all storages. ","version":"Next","tagName":"h2"},{"title":"Using the IndexedDB RxStorage","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#using-the-indexeddb-rxstorage","content":" To use the IndexedDB storage you import it from the RxDB Premium 👑 npm module and use getRxStorageIndexedDB() when creating the RxDatabase. 
import { createRxDatabase } from 'rxdb'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageIndexedDB({ /** * For better performance, queries run with a batched cursor. * You can change the batchSize to optimize the query time * for specific queries. * You should only change this value when you are also doing performance measurements. * [default=300] */ batchSize: 300 }) }); ","version":"Next","tagName":"h2"},{"title":"Overwrite/Polyfill the native IndexedDB","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#overwritepolyfill-the-native-indexeddb","content":" Node.js has no IndexedDB API. To still run the IndexedDB RxStorage in Node.js, for example to run unit tests, you have to polyfill it. You can do that by using the fake-indexeddb module and passing it to the getRxStorageIndexedDB() function. import { createRxDatabase } from 'rxdb'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; //> npm install fake-indexeddb --save const fakeIndexedDB = require('fake-indexeddb'); const fakeIDBKeyRange = require('fake-indexeddb/lib/FDBKeyRange'); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageIndexedDB({ indexedDB: fakeIndexedDB, IDBKeyRange: fakeIDBKeyRange }) }); ","version":"Next","tagName":"h2"},{"title":"Limitations of the IndexedDB RxStorage","type":1,"pageTitle":"IndexedDB RxStorage","url":"/rx-storage-indexeddb.html#limitations-of-the-indexeddb-rxstorage","content":" It is part of the RxDB Premium 👑 plugin that must be purchased. If you just need a storage that works in the browser and you do not have to care about performance, you can use the Dexie.js storage instead. The IndexedDB storage requires support for IndexedDB v2; it does not work on Internet Explorer. 
","version":"Next","tagName":"h2"},{"title":"RxStorage Localstorage Meta Optimizer","type":0,"sectionRef":"#","url":"/rx-storage-localstorage-meta-optimizer.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"RxStorage Localstorage Meta Optimizer","url":"/rx-storage-localstorage-meta-optimizer.html#usage","content":" The meta optimizer gets wrapped around any other RxStorage. It will then automatically detect if an RxDB internal storage instance is created, and replace that with a localstorage based instance. import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; /** * First wrap the original RxStorage with the optimizer. */ const optimizedRxStorage = getLocalstorageMetaOptimizerRxStorage({ /** * Here we use the dexie.js RxStorage, * it is also possible to use any other RxStorage instead. */ storage: getRxStorageDexie() }); /** * Create the RxDatabase with the wrapped RxStorage. */ const database = await createRxDatabase({ name: 'mydatabase', storage: optimizedRxStorage }); ","version":"Next","tagName":"h2"},{"title":"RxStorage LokiJS","type":0,"sectionRef":"#","url":"/rx-storage-lokijs.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#pros","content":" Queries can run faster because all data is processed in memory. It has a much faster initial load time because it loads all data from IndexedDB in a single request. But this is only true for small datasets. If much data is stored, the initial load time can be higher than on other RxStorage implementations. 
","version":"Next","tagName":"h3"},{"title":"Cons","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#cons","content":" It does not support attachments. Data can be lost when the JavaScript process is killed ungracefully, like when the browser crashes or the power of the PC is terminated. All data must fit into the memory. Slow initialisation time when used with multiInstance: true because it has to await the leader election process. Slow initialisation time when a lot of data is stored in the database because it has to parse a big JSON string. ","version":"Next","tagName":"h3"},{"title":"Usage","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; // in the browser, we want to persist data in IndexedDB, so we use the indexeddb adapter. const LokiIncrementalIndexedDBAdapter = require('lokijs/src/incremental-indexeddb-adapter'); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageLoki({ adapter: new LokiIncrementalIndexedDBAdapter(), /* * Do not set lokiJS persistence options like autoload and autosave, * RxDB will pick proper defaults based on the given adapter */ }) }); ","version":"Next","tagName":"h2"},{"title":"Adapters","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#adapters","content":" LokiJS is based on adapters that determine where to store persistent data. For LokiJS there are adapters for IndexedDB, AWS S3, the NodeJS filesystem or NativeScript. Find more about the possible adapters at the LokiJS docs. For React Native there is also the loki-async-reference-adapter. ","version":"Next","tagName":"h2"},{"title":"Multi-Tab support","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#multi-tab-support","content":" When you use plain LokiJS, you cannot build an app that can be used in multiple browser tabs. 
The reason is that LokiJS loads data in bulk and then only regularly persists the in-memory state to disc. When opened in multiple tabs, the LokiJS instances would overwrite each other and data would be lost. With the RxDB LokiJS-plugin, this problem is fixed with the LeaderElection module. Between all open tabs, a leading tab is elected and only in this tab a database is created. All other tabs do not run queries against their own database, but instead call the leading tab to send and retrieve data. When the leading tab is closed, a new leader is elected that reopens the database and processes queries. You can disable this by setting multiInstance: false when creating the RxDatabase. ","version":"Next","tagName":"h2"},{"title":"Autosave and autoload","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#autosave-and-autoload","content":" When using plain LokiJS, you could set the autosave option to true to make sure that LokiJS persists the database state after each write into the persistence adapter. Same goes for autoload, which loads the persisted state on database creation. But RxDB knows better when to persist the database state and when to load it, so it has its own autosave logic. This will ensure that running the persistence handler does not affect the performance of more important tasks. Instead RxDB will always wait until the database is idle and then run the persistence handler. A load of the persisted state is done on database or collection creation and it is ensured that multiple load calls do not run in parallel and interfere with each other or with saveDatabase() calls. ","version":"Next","tagName":"h2"},{"title":"Known problems","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#known-problems","content":" When you bundle the LokiJS Plugin with webpack, you might get the error Cannot find module "fs". This is because LokiJS uses a require('fs') statement that cannot work in the browser. 
You can fix that by telling webpack to not resolve the fs module with the following block in your webpack config: // in your webpack.config.js { /* ... */ resolve: { fallback: { fs: false } } /* ... */ } // Or if you do not have a webpack.config.js like you do with Angular, // you might fix it by setting the browser field in the package.json { /* ... */ "browser": { "fs": false } /* ... */ } ","version":"Next","tagName":"h2"},{"title":"Using the internal LokiJS database","type":1,"pageTitle":"RxStorage LokiJS","url":"/rx-storage-lokijs.html#using-the-internal-lokijs-database","content":" For custom operations, you can access the internal LokiJS database. This is dangerous because you might make changes that are not compatible with RxDB. Only use this when there is no way to achieve your goals via the RxDB API. const storageInstance = myRxCollection.storageInstance; const localState = await storageInstance.internals.localState; localState.collection.insert({ key: 'foo', value: 'bar', _deleted: false, _attachments: {}, _rev: '1-62080c42d471e3d2625e49dcca3b8e3e', _meta: { lwt: new Date().getTime() } }); // manually trigger the save queue because we did a write to the internal loki db. await localState.databaseState.saveQueue.addWrite(); ","version":"Next","tagName":"h2"},{"title":"Memory Mapped RxStorage (beta)","type":0,"sectionRef":"#","url":"/rx-storage-memory-mapped.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Memory Mapped RxStorage (beta)","url":"/rx-storage-memory-mapped.html#pros","content":" Improves read/write performance because these operations run against the in-memory storage. Decreases initial page load because it loads all data in a single bulk request. It even detects if the database is used for the first time and then it does not have to await the creation of the persistent storage. Can store encrypted data on disc while still being able to run queries on the non-encrypted in-memory state. 
","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"Memory Mapped RxStorage (beta)","url":"/rx-storage-memory-mapped.html#cons","content":" It does not support attachments because storing big attachments data in-memory should not be done. When the JavaScript process is killed ungracefully, like when the browser crashes or the power of the PC is terminated, it might happen that some memory writes are not persisted to the parent storage. This can be prevented with the awaitWritePersistence flag. The memory-mapped storage can only be used if all data fits into the memory of the JavaScript process. This is normally not a problem because a browser has plenty of memory these days and plain JSON document data is not that big. Because it has to await an initial data loading from the parent storage into the memory, initial page load time can increase when much data is already stored. This is likely not a problem when you store less than 10k documents. The memory-mapped storage is part of RxDB Premium 👑. It is not part of the default RxDB core module. beta The Memory-Mapped RxStorage is in beta mode and it might get breaking changes without a major RxDB release. ","version":"Next","tagName":"h2"},{"title":"Using the Memory-Mapped RxStorage","type":1,"pageTitle":"Memory Mapped RxStorage (beta)","url":"/rx-storage-memory-mapped.html#using-the-memory-mapped-rxstorage","content":" import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; /** * Here we use the IndexedDB RxStorage as persistence storage. * Any other RxStorage can also be used. */ const parentStorage = getRxStorageIndexedDB(); // wrap the persistent storage with the memory-mapped storage. 
const storage = getMemoryMappedRxStorage({ storage: parentStorage }); // create the RxDatabase like you would do with any other RxStorage const db = await createRxDatabase({ name: 'myDatabase', storage, }); /** ... **/ ","version":"Next","tagName":"h2"},{"title":"Multi-Tab Support","type":1,"pageTitle":"Memory Mapped RxStorage (beta)","url":"/rx-storage-memory-mapped.html#multi-tab-support","content":" Because of how the memory-mapped storage works, it is not possible to have the same storage open in multiple JavaScript processes. So when you use this in a browser application, you cannot open multiple databases when the app is used in multiple browser tabs. To solve this, use the SharedWorker Plugin so that the memory-mapped storage runs inside of a SharedWorker exactly once and is then reused for all browser tabs. If you have a single JavaScript process, like in a React Native app, you do not have to care about this and can just use the memory-mapped storage in the main process. ","version":"Next","tagName":"h2"},{"title":"Encryption of the persistent data","type":1,"pageTitle":"Memory Mapped RxStorage (beta)","url":"/rx-storage-memory-mapped.html#encryption-of-the-persistend-data","content":" Normally RxDB is not capable of running queries on encrypted fields. But when you use the memory-mapped RxStorage, you can store the document data encrypted on disc, while being able to run queries on the non-encrypted in-memory state. Make sure you use the encryption storage wrapper around the persistent storage, NOT around the memory-mapped storage as a whole. 
import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; import { wrappedKeyEncryptionWebCryptoStorage } from 'rxdb-premium/plugins/encryption-web-crypto'; const storage = getMemoryMappedRxStorage({ storage: wrappedKeyEncryptionWebCryptoStorage({ storage: getRxStorageIndexedDB() }) }); const db = await createRxDatabase({ name: 'myDatabase', storage, }); /** ... **/ ","version":"Next","tagName":"h2"},{"title":"Await Write Persistence","type":1,"pageTitle":"Memory Mapped RxStorage (beta)","url":"/rx-storage-memory-mapped.html#await-write-persistence","content":" Operations on the memory-mapped storage by default return as soon as they have run on the in-memory state and then persist changes in the background. Sometimes you might want to ensure that write operations are persisted; you can do this by setting awaitWritePersistence: true. const storage = getMemoryMappedRxStorage({ awaitWritePersistence: true, storage: getRxStorageIndexedDB() }); ","version":"Next","tagName":"h2"},{"title":"Memory RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-memory.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Memory RxStorage","url":"/rx-storage-memory.html#pros","content":" Really fast. 
Uses binary search on all operations. Small build size ","version":"Next","tagName":"h3"},{"title":"Cons","type":1,"pageTitle":"Memory RxStorage","url":"/rx-storage-memory.html#cons","content":" No persistence import { createRxDatabase } from 'rxdb'; import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; const db = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMemory() }); ","version":"Next","tagName":"h3"},{"title":"MongoDB RxStorage (beta)","type":0,"sectionRef":"#","url":"/rx-storage-mongodb.html","content":"","keywords":"","version":"Next"},{"title":"Limitations of the MongoDB RxStorage","type":1,"pageTitle":"MongoDB RxStorage (beta)","url":"/rx-storage-mongodb.html#limitations-of-the-mongodb-rxstorage","content":" Multiple Node.js servers using the same MongoDB database is currently not supported. RxAttachments are currently not supported. Doing non-RxDB writes on the MongoDB database is not supported. RxDB expects all writes to come from RxDB which updates the required metadata. Doing non-RxDB writes can confuse the RxDatabase and lead to undefined behavior. But you can perform read-queries on the MongoDB storage from the outside at any time. 
import { createRxDatabase } from 'rxdb'; import { getRxStorageMongoDB } from 'rxdb/plugins/storage-mongodb'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageMongoDB({ /** * MongoDB connection string * @link https://www.mongodb.com/docs/manual/reference/connection-string/ */ connection: 'mongodb://localhost:27017,localhost:27018,localhost:27019' }) }); ","version":"Next","tagName":"h2"},{"title":"📈 RxStorage Performance","type":0,"sectionRef":"#","url":"/rx-storage-performance.html","content":"","keywords":"","version":"Next"},{"title":"RxStorage Performance comparison","type":1,"pageTitle":"📈 RxStorage Performance","url":"/rx-storage-performance.html#rxstorage-performance-comparison","content":" A big difference in the RxStorage implementations is the performance. In contrast to a server-side database, RxDB is bound to the limits of the JavaScript runtime and depending on the runtime, there are different possibilities to store and fetch data. For example in the browser it is only possible to store data in a slow IndexedDB or OPFS instead of a filesystem while on React-Native you can use the SQLite storage. Therefore the performance can be completely different depending on where you use RxDB and what you do with it. Here you can see some performance measurements and descriptions on how the different storages work and how their performance is different. ","version":"Next","tagName":"h2"},{"title":"Persistent vs Semi-Persistent storages","type":1,"pageTitle":"📈 RxStorage Performance","url":"/rx-storage-performance.html#persistend-vs-semi-persistend-storages","content":" The "normal" storages are always persistent. This means each RxDB write is directly written to disc and all queries run on the disc state. This means a good startup performance because nothing has to be done on startup. 
In contrast, semi-persistent storages like Memory-Synced and LokiJS store all data in memory on startup and only save to disc occasionally (or on exit). Therefore they have very fast read/write performance, but loading all data into memory on the first page load can take longer for large amounts of documents. Also these storages can only be used when all data fits into the memory at least once. In general it is recommended to stay on the persistent storages and only use semi-persistent ones when you know for sure that the dataset will stay small (less than 2k documents). ","version":"Next","tagName":"h2"},{"title":"Performance comparison","type":1,"pageTitle":"📈 RxStorage Performance","url":"/rx-storage-performance.html#performance-comparison","content":" In the following you can find some performance measurements and comparisons. Notice that these are only a small set of possible RxDB operations. If performance is really relevant for your use case, you should do your own measurements with usage-patterns that are equal to how you use RxDB in production. ","version":"Next","tagName":"h2"},{"title":"Measurements","type":1,"pageTitle":"📈 RxStorage Performance","url":"/rx-storage-performance.html#measurements","content":" Here the following metrics are measured: time-to-first-insert: Many storages run lazily, so it makes no sense to compare the time which is required to create a database with collections. 
Instead we measure the time-to-first-insert which is the whole timespan from database creation until the first single document write is done. insert 200 documents: Insert 200 documents with a single bulk-insert operation. find 1200 documents by id: Here we fetch 100% of the stored documents with a single findByIds() call. find 12000 documents by query: Here we fetch 100% of the stored documents with a single find() call. find 300x4 documents by query: Here we fetch 100% of the stored documents with 4 find() calls that run in parallel. count 1200 documents: Counts 100% of the stored documents with a single count() call. ","version":"Next","tagName":"h3"},{"title":"Browser based Storages Performance Comparison","type":1,"pageTitle":"📈 RxStorage Performance","url":"/rx-storage-performance.html#browser-based-storages-performance-comparison","content":" The performance patterns of the browser based storages are very diverse. The IndexedDB storage is recommended for almost all use cases so you should start with that one. Later you can do performance testing and switch to another storage like OPFS or memory-synced. If you do not want to purchase RxDB Premium, you could use the slower Dexie.js based RxStorage instead. ","version":"Next","tagName":"h2"},{"title":"Node/Native based Storages Performance Comparison","type":1,"pageTitle":"📈 RxStorage Performance","url":"/rx-storage-performance.html#nodenative-based-storages-performance-comparison","content":" For most client-side native applications (react-native, electron, capacitor), using the SQLite RxStorage is recommended. For non-client side applications like a server, use the MongoDB storage instead. 
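The time-to-first-insert metric described above can be sketched in a few lines. This is a minimal illustration, not a benchmark harness; `createStorage` and the stub storage below are placeholders, not RxDB APIs, so plug in a real RxStorage setup when you measure for your own use case:

```javascript
// Sketch of the time-to-first-insert metric: measure the whole span
// from storage creation until the first single-document write resolves.
async function timeToFirstInsert(createStorage) {
  const start = Date.now();
  const storage = await createStorage();
  await storage.insert({ id: 'first-doc' });
  return Date.now() - start;
}

// A stub storage so the sketch is runnable on its own; replace this
// with a real (RxDB) storage setup to get meaningful numbers.
const stubStorage = async () => ({
  docs: new Map(),
  async insert(doc) {
    this.docs.set(doc.id, doc);
  }
});
```

As the section notes, such micro-measurements only matter when they mirror your production usage patterns, so measure with realistic document sizes and counts.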
","version":"Next","tagName":"h2"},{"title":"Memory Synced RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-memory-synced.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#pros","content":" Improves read/write performance because these operations run against the in-memory storage.Decreases initial page load because it loads all data in a single bulk request. It even detects if the database is used for the first time and then it does not have to await the creation of the persistent storage. ","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#cons","content":" It does not support attachments.When the JavaScript process is killed ungracefully, like when the browser crashes or the PC loses power, it might happen that some memory writes are not persisted to the parent storage. This can be prevented with the awaitWritePersistence flag.This can only be used if all data fits into the memory of the JavaScript process. This is normally not a problem because browsers have plenty of memory these days and plain JSON document data is not that big.Because it has to await an initial replication from the parent storage into the memory, initial page load time can increase when a lot of data is already stored. This is likely not a problem when you store fewer than 10k documents.The memory-synced storage itself does not support replication and migration. Instead you have to replicate the underlying parent storage.The memory-synced plugin is part of RxDB Premium 👑. It is not part of the default RxDB module. Consider using the Memory-Mapped RxStorage While the memory-synced storage works, it is not the best option for most users. Instead consider using the (newer) memory-mapped RxStorage which has better trade-offs and is easier to configure. 
","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#usage","content":" import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getMemorySyncedRxStorage } from 'rxdb-premium/plugins/storage-memory-synced'; /** * Here we use the IndexedDB RxStorage as persistence storage. * Any other RxStorage can also be used. */ const parentStorage = getRxStorageIndexedDB(); // wrap the persistent storage with the memory synced one. const storage = getMemorySyncedRxStorage({ storage: parentStorage }); // create the RxDatabase like you would do with any other RxStorage const db = await createRxDatabase({ name: 'myDatabase', storage, }); /** ... **/ ","version":"Next","tagName":"h2"},{"title":"Options","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#options","content":" Some options can be provided to fine-tune the performance and behavior. import { requestIdlePromise } from 'rxdb'; const storage = getMemorySyncedRxStorage({ storage: parentStorage, /** * Defines how many documents * get replicated in a single batch. * [default=50] * * (optional) */ batchSize: 50, /** * By default, the parent storage will be created without indexes for a faster page load. * Indexes are not needed because the queries will anyway run on the memory storage. * You can disable this behavior by setting keepIndexesOnParent to true. * If you use the same parent storage for multiple RxDatabase instances where one is not * a memory-synced storage, you will get the error: 'schema not equal to existing storage' * if you do not set keepIndexesOnParent to true. * * (optional) */ keepIndexesOnParent: true, /** * If set to true, all write operations will resolve AFTER the writes * have been persisted from the memory to the parentStorage. * This ensures writes are not lost even if the JavaScript process exits * between memory writes and the persistence interval. 
* default=false */ awaitWritePersistence: true, /** * After a write, await until the return value of this method resolves * before replicating with the master storage. * * By returning requestIdlePromise() we can ensure that the CPU is idle * and no other, more important operation is running. By doing so we can be sure * that the replication does not slow down any rendering of the browser process. * * (optional) */ waitBeforePersist: () => requestIdlePromise() }); ","version":"Next","tagName":"h2"},{"title":"Comparison with the LokiJS RxStorage","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#comparison-with-the-lokijs-rxstorage","content":" The LokiJS RxStorage also loads the whole database state into the memory to improve operation time. In comparison to LokiJS, the Memory Synced RxStorage has many improvements and performance optimizations to reduce initial load time. Also, it uses replication instead of leader election to handle multi-tab usage. This alone decreases the initial page load by about 200 milliseconds. ","version":"Next","tagName":"h2"},{"title":"Replication and Migration with the memory-synced storage","type":1,"pageTitle":"Memory Synced RxStorage","url":"/rx-storage-memory-synced.html#replication-and-migration-with-the-memory-synced-storage","content":" The memory-synced storage itself does not support replication and migration. Instead you have to replicate the underlying parent storage. For example, when you use it on top of an IndexedDB storage, you have to run replication on that storage instead by creating a different RxDatabase. const parentStorage = getRxStorageIndexedDB(); const memorySyncedStorage = getMemorySyncedRxStorage({ storage: parentStorage, keepIndexesOnParent: true }); const databaseName = 'mydata'; /** * Create a parent database with the same name+collections * and use it for replication and migration. 
* The parent database must be created BEFORE the memory-synced database * to ensure migration has already been run. */ const parentDatabase = await createRxDatabase({ name: databaseName, storage: parentStorage }); await parentDatabase.addCollections(/* ... */); replicateRxCollection({ collection: parentDatabase.myCollection, /* ... */ }); /** * Create an equivalent memory-synced database with the same name+collections * and use it for writes and queries. */ const memoryDatabase = await createRxDatabase({ name: databaseName, storage: memorySyncedStorage }); await memoryDatabase.addCollections(/* ... */); ","version":"Next","tagName":"h2"},{"title":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-opfs.html","content":"","keywords":"","version":"Next"},{"title":"What is OPFS","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#what-is-opfs","content":" The Origin Private File System (OPFS) is a native browser storage API that allows web applications to manage files in a private, sandboxed, origin-specific virtual filesystem. Unlike IndexedDB and LocalStorage, which are optimized as object/key-value storage, OPFS provides more granular control for file operations, enabling byte-by-byte access, file streaming, and even low-level manipulations. OPFS is ideal for applications requiring high-performance file operations (3x-4x faster compared to IndexedDB) inside of a client-side application, offering advantages like improved speed, more efficient use of resources, and enhanced security and privacy features. 
","version":"Next","tagName":"h2"},{"title":"OPFS limitations","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#opfs-limitations","content":" From the beginning of 2023, the Origin Private File System API is supported by all modern browsers like Safari, Chrome, Edge and Firefox. Only Internet Explorer is not supported and likely will never get support. It is important to know that the most performant synchronous methods like read() and write() of the OPFS API are only available inside of a WebWorker. They cannot be used in the main thread, an iFrame or even a SharedWorker. The OPFS createSyncAccessHandle() method that gives you access to the synchronous methods is not exposed in the main thread, only in a Worker. While there is no concrete data size limit defined by the API, browsers will refuse to store more data at some point. If no more data can be written, a QuotaExceededError is thrown which should be handled by the application, like showing an error message to the user. ","version":"Next","tagName":"h3"},{"title":"How the OPFS API works","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#how-the-opfs-api-works","content":" The OPFS API is pretty straightforward to use. First you get the root filesystem. Then you can create files and directories on that. Notice that whenever you synchronously write to or read from a file, an ArrayBuffer must be used that contains the data. It is not possible to synchronously write plain strings or objects into the file. Therefore the TextEncoder and TextDecoder API must be used. Also notice that some of the methods of FileSystemSyncAccessHandle have been asynchronous in the past, but are synchronous since Chromium 108. To make it less confusing, we just use await in front of them, so it will work in both cases. // Access the root directory of the origin's private file system. 
const root = await navigator.storage.getDirectory(); // Create a subdirectory. const diaryDirectory = await root.getDirectoryHandle('subfolder', { create: true, }); // Create a new file named 'example.txt'. const fileHandle = await diaryDirectory.getFileHandle('example.txt', { create: true, }); // Create a FileSystemSyncAccessHandle on the file. const accessHandle = await fileHandle.createSyncAccessHandle(); // Write a sentence to the file. let writeBuffer = new TextEncoder().encode('Hello from RxDB'); const writeSize = accessHandle.write(writeBuffer); // Read file and transform data to string. const readBuffer = new Uint8Array(writeSize); const readSize = accessHandle.read(readBuffer, { at: 0 }); const contentAsString = new TextDecoder().decode(readBuffer); // Write an exclamation mark to the end of the file. writeBuffer = new TextEncoder().encode('!'); accessHandle.write(writeBuffer, { at: readSize }); // Truncate file to 10 bytes. await accessHandle.truncate(10); // Get the new size of the file. const fileSize = await accessHandle.getSize(); // Persist changes to disk. await accessHandle.flush(); // Always close FileSystemSyncAccessHandle if done, so others can open the file again. await accessHandle.close(); A more detailed description of the OPFS API can be found on MDN. ","version":"Next","tagName":"h2"},{"title":"OPFS performance","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#opfs-performance","content":" Because the Origin Private File System API provides low-level access to binary files, it is much faster compared to IndexedDB or localStorage. According to the storage performance test, OPFS is up to 2x faster on plain inserts when a new file is created on each write. Reads are even faster. A good comparison for real-world scenarios are the performance results of the various RxDB storages. 
Here it shows that reads are up to 4x faster compared to IndexedDB, even with complex queries: ","version":"Next","tagName":"h2"},{"title":"Using OPFS as RxStorage in RxDB","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#using-opfs-as-rxstorage-in-rxdb","content":" The OPFS RxStorage itself must run inside a WebWorker. Therefore we use the Worker RxStorage and let it point to the prebuilt opfs.worker.js file that comes shipped with RxDB Premium 👑. Notice that the OPFS RxStorage is part of the RxDB Premium 👑 plugin that must be purchased. import { createRxDatabase } from 'rxdb'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageWorker( { /** * This file must be statically served from a webserver. * You might want to first copy it somewhere outside of * your node_modules folder. */ workerInput: 'node_modules/rxdb-premium/dist/workers/opfs.worker.js' } ) }); ","version":"Next","tagName":"h2"},{"title":"Using OPFS in the main thread instead of a worker","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#using-opfs-in-the-main-thread-instead-of-a-worker","content":" The createSyncAccessHandle method from the Filesystem API is only available inside of a WebWorker. Therefore you cannot use getRxStorageOPFS() in the main thread. But there is a slightly slower way to access the virtual filesystem from the main thread. RxDB supports the getRxStorageOPFSMainThread() for that. Notice that this uses the createWritable function which is not supported in Safari. Using OPFS from the main thread can have benefits because not having to cross the worker bridge can reduce latency in reads and writes. 
import { createRxDatabase } from 'rxdb'; import { getRxStorageOPFSMainThread } from 'rxdb-premium/plugins/storage-opfs'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageOPFSMainThread() }); ","version":"Next","tagName":"h2"},{"title":"Building a custom worker.js","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#building-a-custom-workerjs","content":" When you want to run additional plugins like storage wrappers or replication inside of the worker, you have to build your own worker.js file. You can do that similarly to other workers by calling exposeWorkerRxStorage, as described in the worker storage plugin. // inside of the worker.js file import { getRxStorageOPFS } from 'rxdb-premium/plugins/storage-opfs'; import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; const storage = getRxStorageOPFS(); exposeWorkerRxStorage({ storage }); ","version":"Next","tagName":"h2"},{"title":"Setting usesRxDatabaseInWorker when a RxDatabase is also used inside of the worker","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#setting-usesrxdatabaseinworker-when-a-rxdatabase-is-also-used-inside-of-the-worker","content":" When you use the OPFS inside of a worker, it will internally use strings to represent operation results. This has the benefit that transferring strings from the worker to the main thread is way faster compared to complex JSON objects. The getRxStorageWorker() will automatically decode these strings on the main thread so that the data can be used by the RxDatabase. But using a RxDatabase inside of your worker can make sense, for example when you want to run the replication with a server inside of the worker. 
To enable this, you have to set usesRxDatabaseInWorker to true: // inside of the worker.js file import { getRxStorageOPFS } from 'rxdb-premium/plugins/storage-opfs'; const storage = getRxStorageOPFS({ usesRxDatabaseInWorker: true }); ","version":"Next","tagName":"h2"},{"title":"Setting jsonPositionSize to increase the maximum database size.","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#setting-jsonpositionsize-to-increase-the-maximum-database-size","content":" By default the jsonPositionSize value is set to 8, which allows the database to get up to 100 megabytes in size (per collection). This is ok for most use cases but you might want to just increase jsonPositionSize to 14. In the next major RxDB version the default will be set to 14, but this was not possible without introducing a breaking change. note If you have already stored data, you cannot just change the jsonPositionSize value because your stored binary data will not be compatible anymore. Also there is an opfs-big.worker.js file that has jsonPositionSize set to 14 already. ","version":"Next","tagName":"h2"},{"title":"OPFS in Electron, React-Native or Capacitor.js","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#opfs-in-electron-react-native-or-capacitorjs","content":" Origin Private File System is a browser API that is only accessible in browsers. Other JavaScript runtimes, like React Native or Node.js, do not support it. Electron has two JavaScript contexts: the browser (chromium) context and the Node.js context. While you could use the OPFS API in the browser context, it is not recommended. Instead you should use the Filesystem API of Node.js and then only transfer the relevant data with the ipcRenderer. 
With RxDB that is pretty easy to configure: In the main.js, expose the Node Filesystem storage with the exposeIpcMainRxStorage() that comes with the electron pluginIn the browser context, access the main storage with the getRxStorageIpcRenderer() method. React Native (and Expo) does not have an OPFS API. You could use the ReactNative Filesystem to directly write data. But to get a fully featured database like RxDB it is easier to use the SQLite RxStorage which starts an SQLite database inside of the ReactNative app and uses that to do the database operations. Capacitor.js is able to access the OPFS API. ","version":"Next","tagName":"h2"},{"title":"Difference between File System Access API and Origin Private File System (OPFS)","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#difference-between-file-system-access-api-and-origin-private-file-system-opfs","content":" Often developers are confused about the differences between the File System Access API and the Origin Private File System (OPFS). The File System Access API provides access to the files on the device file system, like the ones shown in the file explorer of the operating system. To use the File System API, the user has to actively select the files from a filepicker.Origin Private File System (OPFS) is a sub-part of the File System Standard and it only describes the things you can do with the filesystem root from navigator.storage.getDirectory(). OPFS writes to a sandboxed filesystem, not visible to the user. Therefore the user does not have to actively select or allow the data access. 
","version":"Next","tagName":"h2"},{"title":"Learn more about OPFS:","type":1,"pageTitle":"Origin Private File System (OPFS) Database with the RxDB OPFS-RxStorage","url":"/rx-storage-opfs.html#learn-more-about-opfs","content":" WebKit: The File System API with Origin Private File SystemBrowser SupportPerformance Test Tool ","version":"Next","tagName":"h2"},{"title":"Remote RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-remote.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"Remote RxStorage","url":"/rx-storage-remote.html#usage","content":" The remote storage communicates over a message channel. You have to implement the messageChannelCreator function, which returns an object that has a messages$ observable and a send() function on both sides, and a close() function that closes the RemoteMessageChannel. // on the client import { getRxStorageRemote } from 'rxdb/plugins/storage-remote'; const storage = getRxStorageRemote({ identifier: 'my-id', mode: 'storage', messageChannelCreator: () => Promise.resolve({ messages$: new Subject(), send(msg) { // send to remote storage } }) }); const myDb = await createRxDatabase({ storage }); // on the remote import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; import { exposeRxStorageRemote } from 'rxdb/plugins/storage-remote'; exposeRxStorageRemote({ storage: getRxStorageDexie(), messages$: new Subject(), send(msg){ // send to other side } }); ","version":"Next","tagName":"h2"},{"title":"Usage with a Websocket server","type":1,"pageTitle":"Remote RxStorage","url":"/rx-storage-remote.html#usage-with-a-websocket-server","content":" The remote storage plugin contains helper functions to create a remote storage over a WebSocket server. This is often used in Node.js to give one microservice access to another service's database without having to replicate the full database state. 
// server.js import { getRxStorageMemory } from 'rxdb/plugins/storage-memory'; import { startRxStorageRemoteWebsocketServer } from 'rxdb/plugins/storage-remote-websocket'; // either you can create the server based on a RxDatabase const serverBasedOnDatabase = await startRxStorageRemoteWebsocketServer({ port: 8080, database: myRxDatabase }); // or you can create the server based on a pure RxStorage const serverBasedOnStorage = await startRxStorageRemoteWebsocketServer({ port: 8080, storage: getRxStorageMemory() }); // client.js import { getRxStorageRemoteWebsocket } from 'rxdb/plugins/storage-remote-websocket'; const myDb = await createRxDatabase({ storage: getRxStorageRemoteWebsocket({ url: 'ws://example.com:8080' }) }); ","version":"Next","tagName":"h2"},{"title":"Sending custom messages","type":1,"pageTitle":"Remote RxStorage","url":"/rx-storage-remote.html#sending-custom-messages","content":" The remote storage can also be used to send custom messages to and from the remote instance. On the remote you have to define a customRequestHandler like: const serverBasedOnDatabase = await startRxStorageRemoteWebsocketServer({ port: 8080, database: myRxDatabase, async customRequestHandler(msg){ // here you can return any JSON object as an 'answer' return { foo: 'bar' }; } }); On the client instance you can then call the customRequest() method: const storage = getRxStorageRemoteWebsocket({ url: 'ws://example.com:8080' }); const answer = await storage.customRequest({ bar: 'foo' }); console.dir(answer); // > { foo: 'bar' } ","version":"Next","tagName":"h2"},{"title":"RxStorage PouchDB","type":0,"sectionRef":"#","url":"/rx-storage-pouchdb.html","content":"","keywords":"","version":"Next"},{"title":"Pros","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#pros","content":" Most battle proven RxStorageSupports replication with a CouchDB endpointSupports storing attachmentsBig ecosystem of adapters 
","version":"Next","tagName":"h2"},{"title":"Cons","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#cons","content":" Big bundle sizeSlow performance because of revision handling overhead ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#usage","content":" import { createRxDatabase } from 'rxdb'; import { getRxStoragePouch, addPouchPlugin } from 'rxdb/plugins/pouchdb'; addPouchPlugin(require('pouchdb-adapter-idb')); const db = await createRxDatabase({ name: 'exampledb', storage: getRxStoragePouch( 'idb', { /** * other pouchdb specific options * @link https://pouchdb.com/api.html#create_database */ } ) }); ","version":"Next","tagName":"h2"},{"title":"Polyfill the global variable","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#polyfill-the-global-variable","content":" When you use RxDB with Angular or other webpack based frameworks, you might get the error: <span style="color: red;">Uncaught ReferenceError: global is not defined</span> This is because PouchDB assumes a Node.js-specific global variable that is not added to browser runtimes by some bundlers. You have to add it on your own, like we do here. (window as any).global = window; (window as any).process = { env: { DEBUG: undefined }, }; ","version":"Next","tagName":"h2"},{"title":"Adapters","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#adapters","content":" PouchDB has many adapters for all JavaScript runtimes. ","version":"Next","tagName":"h2"},{"title":"Using the internal PouchDB Database","type":1,"pageTitle":"RxStorage PouchDB","url":"/rx-storage-pouchdb.html#using-the-internal-pouchdb-database","content":" For custom operations, you can access the internal PouchDB database. This is dangerous because you might make changes that are not compatible with RxDB. Only use this when there is no way to achieve your goals via the RxDB API. 
import { getPouchDBOfRxCollection } from 'rxdb/plugins/pouchdb'; const pouch = getPouchDBOfRxCollection(myRxCollection); ","version":"Next","tagName":"h2"},{"title":"Sharding RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-sharding.html","content":"","keywords":"","version":"Next"},{"title":"Using the sharding plugin","type":1,"pageTitle":"Sharding RxStorage","url":"/rx-storage-sharding.html#using-the-sharding-plugin","content":" import { getRxStorageSharding } from 'rxdb-premium/plugins/storage-sharding'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; /** * First wrap the original RxStorage with the sharding RxStorage. */ const shardedRxStorage = getRxStorageSharding({ /** * Here we use the dexie.js RxStorage, * it is also possible to use any other RxStorage instead. */ storage: getRxStorageDexie() }); /** * Add the sharding options to your schema. * Changing these options will require a data migration. */ const mySchema = { /* ... */ sharding: { /** * Amount of shards per RxStorage instance. * Depending on your data size and query patterns, the optimal shard amount may differ. * Do a performance test to optimize that value. * 10 shards is a good value to start with. * * IMPORTANT: Changing the value of shards is not possible on an already existing database state, * you will lose access to your data. */ shards: 10, /** * Sharding mode, * you can either shard by collection or by database. * For most cases you should use 'collection' which will shard on the collection level. * For example with the IndexedDB RxStorage, it will then create multiple stores per IndexedDB database * and not multiple IndexedDB databases, which would be slower. */ mode: 'collection' } /* ... */ } /** * Create the RxDatabase with the wrapped RxStorage. 
*/ const database = await createRxDatabase({ name: 'mydatabase', storage: shardedRxStorage }); ","version":"Next","tagName":"h2"},{"title":"SharedWorker RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-shared-worker.html","content":"","keywords":"","version":"Next"},{"title":"Usage","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#usage","content":" ","version":"Next","tagName":"h2"},{"title":"On the SharedWorker process","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#on-the-sharedworker-process","content":" In the worker process JavaScript file, you have to expose the original RxStorage with exposeWorkerRxStorage(). // shared-worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; exposeWorkerRxStorage({ /** * You can wrap any implementation of the RxStorage interface * into a worker. * Here we use the IndexedDB RxStorage. */ storage: getRxStorageIndexedDB() }); ","version":"Next","tagName":"h3"},{"title":"On the main process","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#on-the-main-process","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageSharedWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageSharedWorker( { /** * Contains any value that can be used as parameter * to the SharedWorker constructor of thread.js * Most likely you want to put the path to the shared-worker.js file in here. * * @link https://developer.mozilla.org/en-US/docs/Web/API/SharedWorker?retiredLocale=de */ workerInput: 'path/to/shared-worker.js', /** * (Optional) options * for the worker. 
*/ workerOptions: { type: 'module', credentials: 'omit' } } ) }); ","version":"Next","tagName":"h3"},{"title":"Pre-build workers","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#pre-build-workers","content":" The shared-worker.js must be a self-contained JavaScript file that contains all dependencies in a bundle. To make it easier for you, RxDB ships with pre-bundled worker files that are ready to use. You can find them in the folder node_modules/rxdb-premium/dist/workers after you have installed the RxDB Premium 👑 Plugin. From there you can copy them to a location where they can be served from the webserver and then use their path to create the RxDatabase. Any valid worker.js JavaScript file can be used for both normal Workers and SharedWorkers. import { createRxDatabase } from 'rxdb'; import { getRxStorageSharedWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageSharedWorker( { /** * Path to where the copied file from node_modules/rxdb-premium/dist/workers * is reachable from the webserver. */ workerInput: '/indexeddb.shared-worker.js' } ) }); ","version":"Next","tagName":"h2"},{"title":"Building a custom worker","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#building-a-custom-worker","content":" To build a custom worker.js file, check out the webpack config at the worker documentation. Any worker file from the worker storage can also be used in a shared worker because exposeWorkerRxStorage detects where it runs and exposes the correct messaging endpoints. ","version":"Next","tagName":"h2"},{"title":"Passing in a SharedWorker instance","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#passing-in-a-sharedworker-instance","content":" Instead of setting a URL as workerInput, you can also specify a function that returns a new SharedWorker instance when called. 
This is mostly used when you have a custom worker file and dynamically import it. This works the same as the workerInput of the Worker Storage ","version":"Next","tagName":"h2"},{"title":"Set multiInstance: false","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#set-multiinstance-false","content":" When you know that you only ever create your RxDatabase inside of the shared worker, you might want to set multiInstance: false to prevent sending change events across JavaScript realms and to improve performance. Do not set this when you also create the same storage on another realm, like when you have the same RxDatabase once inside the shared worker and once on the main thread. ","version":"Next","tagName":"h2"},{"title":"Replication with SharedWorker","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#replication-with-sharedworker","content":" When a SharedWorker RxStorage is used, it is recommended to run the replication inside of the worker. You can do that by opening another RxDatabase inside of it and starting the replication there. // shared-worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { createRxDatabase, addRxPlugin } from 'rxdb'; import { RxDBReplicationGraphQLPlugin } from 'rxdb/plugins/replication-graphql'; addRxPlugin(RxDBReplicationGraphQLPlugin); const baseStorage = getRxStorageIndexedDB(); // first expose the RxStorage to the outside exposeWorkerRxStorage({ storage: baseStorage }); /** * Then create a normal RxDatabase and RxCollections * and start the replication. */ const database = await createRxDatabase({ name: 'mydatabase', storage: baseStorage }); await database.addCollections({ humans: {/* ... */} }); const replicationState = database.humans.syncGraphQL({/* ... 
*/}); ","version":"Next","tagName":"h2"},{"title":"Limitations","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#limitations","content":" The SharedWorker API is not available in some mobile browsers ","version":"Next","tagName":"h3"},{"title":"FAQ","type":1,"pageTitle":"SharedWorker RxStorage","url":"/rx-storage-shared-worker.html#faq","content":" Can I use this plugin with a Service Worker? No. A Service Worker is not the same as a Shared Worker. While you can use RxDB inside of a ServiceWorker, you cannot use the ServiceWorker as an RxStorage that gets accessed by an outside RxDatabase instance. ","version":"Next","tagName":"h3"},{"title":"RxDB Tradeoffs","type":0,"sectionRef":"#","url":"/rxdb-tradeoffs.html","content":"","keywords":"","version":"Next"},{"title":"Why not SQL syntax","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-not-sql-syntax","content":" When you ask people which database they would want for browsers, the most common answer I hear is something SQL based like SQLite. This makes sense: SQL is a query language that most developers learned in school/university and it is reusable across various database solutions. But for RxDB (and other client-side databases), using SQL is not a good option. Instead it operates on document writes and the JSON based Mango-query syntax for querying. // A Mango Query const query = { selector: { age: { $gt: 10 }, lastName: 'foo' }, sort: [{ age: 'asc' }] }; ","version":"Next","tagName":"h2"},{"title":"SQL is made for database servers","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#sql-is-made-for-database-servers","content":" SQL is made to be used to run operations against a database server. You send a SQL string like SELECT SUM(column_name)... to the database server and the server then runs all operations required to calculate the result and only sends back that result. 
This saves performance on the application side and ensures that the application itself is not blocked. But RxDB is a client-side database that runs inside of the application. There is no performance difference if the SUM() query is run inside of the database or at the application level where an Array.reduce() call calculates the result. ","version":"Next","tagName":"h3"},{"title":"Typescript support","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#typescript-support","content":" SQL is string-based and therefore you need additional IDE tooling to ensure that your written database code is valid. Using the Mango Query syntax instead, TypeScript can be used to validate the queries, to autocomplete code, and to know which fields exist and which do not. By doing so, the correctness of queries can be ensured at compile-time instead of run-time. ","version":"Next","tagName":"h3"},{"title":"Composeable queries","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#composeable-queries","content":" By using JSON-based Mango Queries, it is easy to compose queries in plain JavaScript. For example if you have any given query and want to add the condition user MUST BE 'foobar', you can just add the condition to the selector without having to parse and understand a complex SQL string. query.selector.user = 'foobar'; Even merging the selectors of multiple queries is not a problem: queryA.selector = { $and: [ queryA.selector, queryB.selector ] }; ","version":"Next","tagName":"h3"},{"title":"Why Document based (NoSQL)","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-document-based-nosql","content":" Like other NoSQL databases, RxDB operates on data at the document level. It has no concept of tables, rows and columns. Instead we have collections, documents and fields. 
","version":"Next","tagName":"h2"},{"title":"Javascript is made to work with objects","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#javascript-is-made-to-work-with-objects","content":" ","version":"Next","tagName":"h3"},{"title":"Caching","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#caching","content":" ","version":"Next","tagName":"h3"},{"title":"EventReduce","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#eventreduce","content":" ","version":"Next","tagName":"h3"},{"title":"Easier to use with typescript","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#easier-to-use-with-typescript","content":" Because of the document-based approach, TypeScript can know the exact type of the query response, while a SQL query could return anything from a number, to a set of rows, to a complex construct. ","version":"Next","tagName":"h3"},{"title":"Why no transactions","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-no-transactions","content":" Does not work with offline-first. Does not work with multi-tab. Easier conflict handling on document level. Instead of transactions, RxDB works with revisions ","version":"Next","tagName":"h2"},{"title":"Why no relations","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-no-relations","content":" Does not work with easy replication ","version":"Next","tagName":"h2"},{"title":"Why is a schema required","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html#why-is-a-schema-required","content":" Migration of data on clients is hard. Why jsonschema ","version":"Next","tagName":"h2"},{"title":"","type":1,"pageTitle":"RxDB Tradeoffs","url":"/rxdb-tradeoffs.html##","content":"","version":"Next","tagName":"h2"},{"title":"SQLite RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-sqlite.html","content":"","keywords":"","version":"Next"},{"title":"Performance comparison with other storages","type":1,"pageTitle":"SQLite 
RxStorage","url":"/rx-storage-sqlite.html#performance-comparison-with-other-storages","content":" The SQLite storage is a bit slower compared to other Node.js based storages like the Filesystem Storage because wrapping SQLite has a bit of overhead and sending data from the JavaScript process to SQLite and backwards increases the latency. However for most hybrid apps the SQLite storage is the best option because it can leverage the SQLite version that is already installed on the smartphone's OS (iOS and Android). Also for desktop electron apps it can be a viable solution because it is easy to ship SQLite together inside of the electron bundle. ","version":"Next","tagName":"h2"},{"title":"Using the SQLite RxStorage","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#using-the-sqlite-rxstorage","content":" To use the SQLite storage you have to import getRxStorageSQLite from the RxDB Premium 👑 package and then add the correct sqliteBasics adapter depending on which sqlite module you want to use. This can then be used as storage when creating the RxDatabase. In the following you can see some examples for some of the most common SQLite packages. ","version":"Next","tagName":"h2"},{"title":"Usage with Node.js SQLite","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-nodejs-sqlite","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsNode } from 'rxdb-premium/plugins/storage-sqlite'; /** * In Node.js, we use the SQLite database * from the 'sqlite3' npm module. * @link https://www.npmjs.com/package/sqlite3 */ import sqlite3 from 'sqlite3'; const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ /** * Different runtimes have different interfaces to SQLite. * For example in node.js we have a callback API, * while in capacitor sqlite we have Promises. * So we need a helper object that is capable of doing the basic * sqlite operations. 
*/ sqliteBasics: getSQLiteBasicsNode(sqlite3) }) }); ","version":"Next","tagName":"h2"},{"title":"Usage with Webassembly in the Browser","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-webassembly-in-the-browser","content":" In the browser you can use the wa-sqlite package to run SQLite in WebAssembly. The wa-sqlite module also allows using persistence with IndexedDB or OPFS. Notice that in general SQLite via WebAssembly is slower compared to other storages like IndexedDB or OPFS because sending data from the main thread to wasm and backwards is slow in the browser. Have a look at the performance comparison. import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsWasm } from 'rxdb-premium/plugins/storage-sqlite'; /** * In the Browser, we use the SQLite database * from the 'wa-sqlite' npm module. This contains the SQLite library * compiled to WebAssembly * @link https://www.npmjs.com/package/wa-sqlite */ import SQLiteESMFactory from 'wa-sqlite/dist/wa-sqlite-async.mjs'; import SQLite from 'wa-sqlite'; const sqliteModule = await SQLiteESMFactory(); const sqlite3 = SQLite.Factory(sqliteModule); const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsWasm(sqlite3) }) }); ","version":"Next","tagName":"h2"},{"title":"Usage with React Native","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-react-native","content":" Install the react-native-quick-sqlite npm module. Import getSQLiteBasicsQuickSQLite from the SQLite plugin and use it to create a RxDatabase: import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsQuickSQLite } from 'rxdb-premium/plugins/storage-sqlite'; import { open } from 'react-native-quick-sqlite'; // create database const myRxDatabase = await createRxDatabase({ name: 'exampledb', multiInstance: false, // <- Set multiInstance to false when using RxDB in React Native storage: 
getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsQuickSQLite(open) }) }); If react-native-quick-sqlite does not work for you, as an alternative you can use the react-native-sqlite-2 library instead: import { getRxStorageSQLite, getSQLiteBasicsWebSQL } from 'rxdb-premium/plugins/storage-sqlite'; import SQLite from 'react-native-sqlite-2'; const storage = getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsWebSQL(SQLite.openDatabase) }); ","version":"Next","tagName":"h2"},{"title":"Usage with SQLite Capacitor","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#usage-with-sqlite-capacitor","content":" Install the sqlite capacitor npm module. Add the iOS database location to your capacitor config { "plugins": { "CapacitorSQLite": { "iosDatabaseLocation": "Library/CapacitorDatabase" } } } Use the function getSQLiteBasicsCapacitor to get the capacitor sqlite wrapper. import { createRxDatabase } from 'rxdb'; import { getRxStorageSQLite, getSQLiteBasicsCapacitor } from 'rxdb-premium/plugins/storage-sqlite'; /** * Import SQLite from the capacitor plugin. */ import { CapacitorSQLite, SQLiteConnection } from '@capacitor-community/sqlite'; import { Capacitor } from '@capacitor/core'; const sqlite = new SQLiteConnection(CapacitorSQLite); const myRxDatabase = await createRxDatabase({ name: 'exampledb', storage: getRxStorageSQLite({ /** * Different runtimes have different interfaces to SQLite. * For example in node.js we have a callback API, * while in capacitor sqlite we have Promises. * So we need a helper object that is capable of doing the basic * sqlite operations. 
*/ sqliteBasics: getSQLiteBasicsCapacitor(sqlite, Capacitor) }) }); ","version":"Next","tagName":"h2"},{"title":"Database Connection","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#database-connection","content":" If you need to access the database connection for any reason you can use getDatabaseConnection to do so: import { getDatabaseConnection } from 'rxdb-premium/plugins/storage-sqlite' It has the following signature: getDatabaseConnection( sqliteBasics: SQLiteBasics<any>, databaseName: string ): Promise<SQLiteDatabaseClass>; ","version":"Next","tagName":"h2"},{"title":"Known Problems of SQLite in JavaScript apps","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#known-problems-of-sqlite-in-javascript-apps","content":" Some JavaScript runtimes do not contain a Buffer API which is used by SQLite to store binary attachments data as BLOB. You can set storeAttachmentsAsBase64String: true if you want to store the attachments data as base64 string instead. This increases the database size but makes it work even without having a Buffer. The SQLite RxStorage works on SQLite libraries that use SQLite in version 3.38.0 (2022-02-22) or newer, because it uses the SQLite JSON methods like JSON_EXTRACT. If you get an error like [Error: no such function: JSON_EXTRACT (code 1 SQLITE_ERROR[1]), you might have a too old version of SQLite. expo-sqlite cannot be used on Android (but it works on iOS) because it uses an outdated SQLite version. This is fixed if you use Expo SDK version 50 or newer. 
To debug all SQL operations, you can pass a log function to getRxStorageSQLite() like this: const storage = getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsCapacitor(sqlite, Capacitor), // pass log function log: console.log.bind(console) }); ","version":"Next","tagName":"h2"},{"title":"Related","type":1,"pageTitle":"SQLite RxStorage","url":"/rx-storage-sqlite.html#related","content":" React Native Databases ","version":"Next","tagName":"h2"},{"title":"RxStorage","type":0,"sectionRef":"#","url":"/rx-storage.html","content":"","keywords":"","version":"Next"},{"title":"Quick Recommendations","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#quick-recommendations","content":" In the Browser: Use the IndexedDB RxStorage if you have 👑 premium access, otherwise use the Dexie.js storage. In Electron and ReactNative: Use the SQLite RxStorage if you have 👑 premium access, otherwise use the LokiJS storage. In Capacitor: Use the SQLite RxStorage if you have 👑 premium access, otherwise use the Dexie.js storage. ","version":"Next","tagName":"h2"},{"title":"Configuration Examples","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#configuration-examples","content":" The RxStorage layer of RxDB is very flexible. Here are some examples on how to configure more complex settings: ","version":"Next","tagName":"h2"},{"title":"Storing much data in a browser securely","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#storing-much-data-in-a-browser-securely","content":" Let's say you build a browser app that needs to store a large amount of data as securely as possible. Here we can use a combination of the storages (encryption, IndexedDB, compression, schema-checks) that increase security and reduce the stored data size. We use the schema-validation on the top level to ensure schema-errors are clearly readable and do not contain encrypted/compressed data. The encryption is used inside of the compression because encryption of compressed data is more efficient. 
import { wrappedValidateAjvStorage } from 'rxdb/plugins/validate-ajv'; import { wrappedKeyCompressionStorage } from 'rxdb/plugins/key-compression'; import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; const myDatabase = await createRxDatabase({ storage: wrappedValidateAjvStorage({ storage: wrappedKeyCompressionStorage({ storage: wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageIndexedDB() }) }) }) }); ","version":"Next","tagName":"h3"},{"title":"High query Load","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#high-query-load","content":" We can also utilize a combination of storages to create a database that is optimized to run complex queries on the data really fast. Here we use the sharding storage together with the worker storage. This allows running queries in parallel across multiple threads instead of in a single JavaScript process. Because the worker initialization can slow down the initial page load, we also use the localstorage-meta-optimizer to improve initialization time. 
import { getRxStorageSharding } from 'rxdb-premium/plugins/storage-sharding'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; const myDatabase = await createRxDatabase({ storage: getLocalstorageMetaOptimizerRxStorage({ storage: getRxStorageSharding({ storage: getRxStorageWorker({ workerInput: 'path/to/worker.js', storage: getRxStorageIndexedDB() }) }) }) }); ","version":"Next","tagName":"h3"},{"title":"Low Latency on Writes and Simple Reads","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#low-latency-on-writes-and-simple-reads","content":" Here we create a storage configuration that is optimized to have a low latency on simple reads and writes. It uses the memory-synced storage to fetch and store data in memory. For persistence the OPFS storage is used in the main thread, which has lower latency for fetching the big chunks of data that are loaded from disc into memory at initialization. We do not use workers because sending data from the main thread to workers and backwards would increase the latency. 
import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; import { getMemorySyncedRxStorage } from 'rxdb-premium/plugins/storage-memory-synced'; import { getRxStorageOPFSMainThread } from 'rxdb-premium/plugins/storage-worker'; const myDatabase = await createRxDatabase({ storage: getLocalstorageMetaOptimizerRxStorage({ storage: getMemorySyncedRxStorage({ storage: getRxStorageOPFSMainThread() }) }) }); ","version":"Next","tagName":"h3"},{"title":"All RxStorage Implementations List","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#all-rxstorage-implementations-list","content":" ","version":"Next","tagName":"h2"},{"title":"Dexie.js","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#dexiejs","content":" The Dexie.js based storage is based on the Dexie.js IndexedDB wrapper. It stores the data inside of a browser's IndexedDB database and has a very small bundle size. If you are new to RxDB, you should start with the Dexie.js RxStorage. Read more ","version":"Next","tagName":"h3"},{"title":"Memory","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#memory","content":" A storage that stores the data as plain data in the memory of the JavaScript process. Really fast and can be used in all environments. Read more ","version":"Next","tagName":"h3"},{"title":"👑 IndexedDB","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-indexeddb","content":" The IndexedDB RxStorage is based on plain IndexedDB. This has a better performance than the Dexie.js storage, but it is slower compared to the OPFS storage. Read more ","version":"Next","tagName":"h3"},{"title":"👑 OPFS","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-opfs","content":" The OPFS RxStorage is based on the File System Access API. This has the best performance of all non-in-memory storages when RxDB is used inside of a browser. 
Read more ","version":"Next","tagName":"h3"},{"title":"👑 SQLite","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-sqlite","content":" The SQLite storage has great performance when RxDB is used on Node.js, Electron, React Native, Cordova or Capacitor. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Filesystem Node","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-filesystem-node","content":" The Filesystem Node storage is best suited when you use RxDB in a Node.js process or with electron.js. Read more ","version":"Next","tagName":"h3"},{"title":"MongoDB","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#mongodb","content":" To use RxDB on the server side, the MongoDB RxStorage provides a way of having a secure, scalable and performant storage based on the popular MongoDB NoSQL database Read more ","version":"Next","tagName":"h3"},{"title":"DenoKV","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#denokv","content":" To use RxDB in Deno. The DenoKV RxStorage provides a way of having a secure, scalable and performant storage based on the Deno Key Value Store. Read more ","version":"Next","tagName":"h3"},{"title":"FoundationDB","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#foundationdb","content":" To use RxDB on the server side, the FoundationDB RxStorage provides a way of having a secure, fault-tolerant and performant storage. Read more ","version":"Next","tagName":"h3"},{"title":"LokiJS (deprecated)","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#lokijs-deprecated","content":" The LokiJS based storage is based on the LokiJS database. It has the special behavior of loading all data into memory at app start and therefore has a good performance when running operations over a small dataset where loading all data upfront is not a problem. 
Read more ","version":"Next","tagName":"h3"},{"title":"👑 Worker","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-worker","content":" The worker RxStorage is a wrapper around any other RxStorage which allows to run the storage in a WebWorker (in browsers) or a Worker Thread (in Node.js). By doing so, you can take CPU load from the main process and move it into the worker's process which can improve the perceived performance of your application. Read more ","version":"Next","tagName":"h3"},{"title":"👑 SharedWorker","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-sharedworker","content":" The worker RxStorage is a wrapper around any other RxStorage which allows to run the storage in a SharedWorker (only in browsers). By doing so, you can take CPU load from the main process and move it into the worker's process which can improve the perceived performance of your application. Read more ","version":"Next","tagName":"h3"},{"title":"Remote","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#remote","content":" The Remote RxStorage is made to use a remote storage and communicate with it over an asynchronous message channel. The remote part could be on another JavaScript process or even on a different host machine. Mostly used internally in other storages like Worker or Electron-ipc. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Sharding","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-sharding","content":" On some RxStorage implementations (like IndexedDB), a huge performance improvement can be done by sharding the documents into multiple database instances. With the sharding plugin you can wrap any other RxStorage into a sharded storage. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Memory Mapped","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-memory-mapped","content":" The memory-mapped RxStorage is a wrapper around any other RxStorage. 
The wrapper creates an in-memory storage that is used for query and write operations. This memory instance stores its data in an underlying storage for persistence. The main reason to use this is to improve query/write performance while still having the data stored on disc. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Memory Synced","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-memory-synced","content":" The memory-synced RxStorage is a wrapper around any other RxStorage. The wrapper creates an in-memory storage that is used for query and write operations. This memory instance is replicated with the underlying storage for persistence. The main reason to use this is to improve initial page load and query/write times. This is mostly useful in browser based applications. While the memory-synced storage has its use cases, by default you should use the Memory-Mapped RxStorage instead. Read more ","version":"Next","tagName":"h3"},{"title":"👑 Localstorage Meta Optimizer","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#-localstorage-meta-optimizer","content":" The RxStorage Localstorage Meta Optimizer is a wrapper around any other RxStorage. The wrapper uses the original RxStorage for normal collection documents. But to optimize the initial page load time, it uses localstorage to store the plain key-value metadata that RxDB needs to create databases and collections. This plugin can only be used in browsers. Read more ","version":"Next","tagName":"h3"},{"title":"Electron IpcRenderer & IpcMain","type":1,"pageTitle":"RxStorage","url":"/rx-storage.html#electron-ipcrenderer--ipcmain","content":" To use RxDB in electron, it is recommended to run the RxStorage in the main process and the RxDatabase in the renderer processes. With the rxdb electron plugin you can create a remote RxStorage and consume it from the renderer process. 
Read more ","version":"Next","tagName":"h3"},{"title":"Worker RxStorage","type":0,"sectionRef":"#","url":"/rx-storage-worker.html","content":"","keywords":"","version":"Next"},{"title":"On the worker process","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#on-the-worker-process","content":" // worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; exposeWorkerRxStorage({ /** * You can wrap any implementation of the RxStorage interface * into a worker. * Here we use the LokiJS RxStorage. */ storage: getRxStorageLoki() }); ","version":"Next","tagName":"h2"},{"title":"On the main process","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#on-the-main-process","content":" import { createRxDatabase } from 'rxdb'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageWorker( { /** * Contains any value that can be used as parameter * to the Worker constructor of thread.js * Most likely you want to put the path to the worker.js file in here. * * @link https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker */ workerInput: 'path/to/worker.js', /** * (Optional) options * for the worker. */ workerOptions: { type: 'module', credentials: 'omit' } } ) }); ","version":"Next","tagName":"h2"},{"title":"Pre-build workers","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#pre-build-workers","content":" The worker.js must be a self containing JavaScript file that contains all dependencies in a bundle. To make it easier for you, RxDB ships with pre-bundles worker files that are ready to use. You can find them in the folder node_modules/rxdb-premium/dist/workers after you have installed the RxDB Premium 👑 Plugin. 
From there you can copy them to a location where they can be served by the webserver and then use their path to create the RxDatabase. Any valid worker.js JavaScript file can be used both for normal Workers and SharedWorkers. import { createRxDatabase } from 'rxdb'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; const database = await createRxDatabase({ name: 'mydatabase', storage: getRxStorageWorker( { /** * Path to where the copied file from node_modules/rxdb/dist/workers * is reachable from the webserver. */ workerInput: '/lokijs-incremental-indexeddb.worker.js' } ) }); ","version":"Next","tagName":"h2"},{"title":"Building a custom worker","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#building-a-custom-worker","content":" The easiest way to bundle a custom worker.js file is by using webpack. Here is the webpack-config that is also used for the pre-built workers: // webpack.config.js const path = require('path'); const TerserPlugin = require('terser-webpack-plugin'); const projectRootPath = path.resolve( __dirname, '../../' // path from webpack-config to the root folder of the repo ); const babelConfig = require(path.join(projectRootPath, 'babel.config')); const baseDir = './dist/workers/'; // output path module.exports = { target: 'webworker', entry: { 'my-custom-worker': baseDir + 'my-custom-worker.js', }, output: { filename: '[name].js', clean: true, path: path.resolve( projectRootPath, 'dist/workers' ), }, mode: 'production', module: { rules: [ { test: /\\.tsx?$/, exclude: /(node_modules)/, use: { loader: 'babel-loader', options: babelConfig } } ], }, resolve: { extensions: ['.tsx', '.ts', '.js', '.mjs', '.mts'] }, optimization: { moduleIds: 'deterministic', minimize: true, minimizer: [new TerserPlugin({ terserOptions: { format: { comments: false, }, }, extractComments: false, })], } }; ","version":"Next","tagName":"h2"},{"title":"One worker per database","type":1,"pageTitle":"Worker 
RxStorage","url":"/rx-storage-worker.html#one-worker-per-database","content":" Each call to getRxStorageWorker() will create a different worker instance so that when you have more than one RxDatabase, each database will have its own JavaScript worker process. To reuse the worker instance in more than one RxDatabase, you can store the output of getRxStorageWorker() into a variable and use that one. Reusing the worker can decrease the initial page load, but you might get slower database operations. // Call getRxStorageWorker() exactly once const workerStorage = getRxStorageWorker({ workerInput: 'path/to/worker.js' }); // use the same storage for both databases. const databaseOne = await createRxDatabase({ name: 'database-one', storage: workerStorage }); const databaseTwo = await createRxDatabase({ name: 'database-two', storage: workerStorage }); ","version":"Next","tagName":"h2"},{"title":"Passing in a Worker instance","type":1,"pageTitle":"Worker RxStorage","url":"/rx-storage-worker.html#passing-in-a-worker-instance","content":" Instead of setting a URL as workerInput, you can also specify a function that returns a new Worker instance when called. getRxStorageWorker({ workerInput: () => new Worker('path/to/worker.js') }) This can be helpful for environments where the worker is built dynamically by the bundler. For example in angular you would create a my-custom.worker.ts file that contains a custom-built worker and then import it. 
const storage = getRxStorageWorker({ workerInput: () => new Worker(new URL('./my-custom.worker', import.meta.url)), }); //> my-custom.worker.ts import { exposeWorkerRxStorage } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageLoki } from 'rxdb/plugins/storage-lokijs'; exposeWorkerRxStorage({ storage: getRxStorageLoki() }); ","version":"Next","tagName":"h2"},{"title":"Schema validation","type":0,"sectionRef":"#","url":"/schema-validation.html","content":"","keywords":"","version":"Next"},{"title":"validate-ajv","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#validate-ajv","content":" A validation module that does the schema validation. This one uses ajv as the validator, which is a bit faster and better compliant to the JSON Schema standard, but also has a bigger build size. import { wrappedValidateAjvStorage } from 'rxdb/plugins/validate-ajv'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the validation around the main RxStorage const storage = wrappedValidateAjvStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: randomCouchString(10), storage }); ","version":"Next","tagName":"h3"},{"title":"validate-z-schema","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#validate-z-schema","content":" Both is-my-json-valid and validate-ajv use eval() to perform validation, which might not be wanted when 'unsafe-eval' is not allowed in Content Security Policies. This one uses z-schema as the validator, which doesn't use eval(). 
import { wrappedValidateZSchemaStorage } from 'rxdb/plugins/validate-z-schema'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the validation around the main RxStorage const storage = wrappedValidateZSchemaStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: randomCouchString(10), storage }); ","version":"Next","tagName":"h3"},{"title":"validate-is-my-json-valid","type":1,"pageTitle":"Schema validation","url":"/schema-validation.html#validate-is-my-json-valid","content":" WARNING: The is-my-json-valid validation is no longer supported until this bug is fixed. The validate-is-my-json-valid plugin uses is-my-json-valid for schema validation. import { wrappedValidateIsMyJsonValidStorage } from 'rxdb/plugins/validate-is-my-json-valid'; import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; // wrap the validation around the main RxStorage const storage = wrappedValidateIsMyJsonValidStorage({ storage: getRxStorageDexie() }); const db = await createRxDatabase({ name: randomCouchString(10), storage }); ","version":"Next","tagName":"h3"},{"title":"Third Party Plugins","type":0,"sectionRef":"#","url":"/third-party-plugins.html","content":"Third Party Plugins rxdb-hooks A set of hooks to integrate RxDB into react applications.rxdb-flexsearch The full text search for RxDB using FlexSearch.rxdb-orion Enables replication with Laravel Orion.rxdb-supabase Enables replication with Supabase.rxdb-utils Additional features for RxDB like models, timestamps, default values, view and more.loki-async-reference-adapter Simple async adapter for LokiJS, suitable to use RxDB's Lokijs RxStorage with React Native.","keywords":"","version":"Next"},{"title":"Transactions, Conflicts and Revisions","type":0,"sectionRef":"#","url":"/transactions-conflicts-revisions.html","content":"","keywords":"","version":"Next"},{"title":"Why RxDB does not have transactions","type":1,"pageTitle":"Transactions, Conflicts and 
Revisions","url":"/transactions-conflicts-revisions.html#why-rxdb-does-not-have-transactions","content":" When talking about transactions, we mean ACID transactions that guarantee the properties of atomicity, consistency, isolation and durability. With an ACID transaction you can mutate data dependent on the current state of the database. It is ensured that no other database operations happen in between your transaction and after the transaction has finished, it is guaranteed that the new data is actually written to the disc. To implement ACID transactions on a single server, the database has to keep track of who is running transactions and then schedule these transactions so that they can run in isolation. As soon as you have to split your database on multiple servers, transaction handling becomes way more difficult. The servers have to communicate with each other to find a consensus about which transaction can run and which has to wait. Network connections might break, or one server might complete its part of the transaction and then be required to roll back its changes because of an error on another server. But with RxDB you have multiple clients that can go randomly online or offline. The users can have different devices and the clocks of these devices can be off by any amount of time. To support ACID transactions here, RxDB would have to make the whole world stand still for all clients, while one client is doing a write operation. And even that can only work when all clients are online. Implementing that might be possible, but at the cost of an unpredictable amount of performance loss and not being able to support offline-first. A single write operation to a document is the only atomic thing you can do in RxDB. 
The benefits of not having to support transactions: Clients can read and write data without blocking each other.Clients can write data while being offline and then replicate with a server when they are online again, called offline-first.Creating a compatible backend for the replication is easy so that RxDB can replicate with any existing infrastructure.Optimizations like Sharding can be used. ","version":"Next","tagName":"h2"},{"title":"Revisions","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#revisions","content":" Working without transactions leads to having undefined state when doing multiple database operations at the same time. Most client side databases rely on a last-write-wins strategy on write operations. This might be a viable solution for some cases, but often this leads to strange problems that are hard to debug. Instead, to ensure that the behavior of RxDB is always predictable, RxDB relies on revisions for version control. Revisions work similarly to Lamport Clocks. Each document is stored together with its revision string, which looks like 1-9dcca3b8e1a and consists of: The revision height, a number that starts with 1 and is increased with each write to that document.The database instance token. An operation to the RxDB data layer does not only contain the new document data, but also the previous document data with its revision string. If the previous revision matches the revision that is currently stored in the database, the write operation can succeed. If the previous revision is different from the revision that is currently stored in the database, the operation will throw a 409 CONFLICT error. ","version":"Next","tagName":"h2"},{"title":"Conflicts","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#conflicts","content":" There are two types of conflicts in RxDB, the local conflict and the replication conflict. 
","version":"Next","tagName":"h2"},{"title":"Local conflicts","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#local-conflicts","content":" A local conflict can happen when a write operation assumes a different previous document state than what is currently stored in the database. This can happen when multiple parts of your application do simultaneous writes to the same document. This can happen in a single browser tab, when multiple tabs write at once, or when a write happens while the document is being replicated from a remote server. When a local conflict appears, RxDB will throw a 409 CONFLICT error. The calling code must then handle the error properly, depending on the application logic. Instead of handling local conflicts, in most cases it is easier to ensure that they cannot happen, by using incremental database operations like incrementalModify(), incrementalPatch() or incrementalUpsert(). These write operations have a built-in way to handle conflicts by re-applying the mutation functions to the conflicting document state. ","version":"Next","tagName":"h3"},{"title":"Replication conflicts","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#replication-conflicts","content":" A replication conflict appears when multiple clients write to the same documents at once and these documents are then replicated to the backend server. When you replicate with the GraphQL replication and the replication primitives, RxDB assumes that conflicts are detected and resolved at the client side. When a document is sent to the backend and the backend detects a conflict (by comparing revisions or other properties), the backend will respond with the actual document state so that the client can compare this with the local document state and create a new, resolved document state that is then pushed to the server again. 
You can read more about the replication protocol here. ","version":"Next","tagName":"h2"},{"title":"Custom conflict handler","type":1,"pageTitle":"Transactions, Conflicts and Revisions","url":"/transactions-conflicts-revisions.html#custom-conflict-handler","content":" A conflict handler is a JavaScript function that has two tasks: Detect if a conflict exists. Solve existing conflicts. Because the conflict handler is also used for conflict detection, it will run many times on pull-, push- and write operations of RxDB. Most of the time it will detect that there is no conflict and then return. Let's have a look at the default conflict handler of RxDB to learn how to create a custom one: export const defaultConflictHandler: RxConflictHandler<any> = function ( /** * The conflict handler gets 3 input properties: * - assumedMasterState: The state of the document that is assumed to be on the master branch * - newDocumentState: The new document state of the fork branch (=client) that RxDB wants to write to the master * - realMasterState: The real master state of the document */ i: RxConflictHandlerInput<any> ): Promise<RxConflictHandlerOutput<any>> { /** * Here we detect if a conflict exists in the first place. * If there is no conflict, we return isEqual=true. * If there is a conflict, return isEqual=false. * In the default handler we do a deepEqual check, * but in your custom conflict handler you probably want * to compare specific properties of the document, like the updatedAt time, * for better performance because deepEqual() is expensive. */ if (deepEqual( i.newDocumentState, i.realMasterState )) { return Promise.resolve({ isEqual: true }); } /** * If a conflict exists, we have to resolve it. * The default conflict handler will always * drop the fork state and use the master state instead. * * In your custom conflict handler you likely want to merge properties * of the realMasterState and the newDocumentState instead. 
*/ return Promise.resolve({ isEqual: false, documentData: i.realMasterState }); }; To overwrite the default conflict handler, you have to specify a custom conflictHandler property when creating a collection with addCollections(). const myCollections = await myDatabase.addCollections({ // key = collectionName humans: { schema: mySchema, conflictHandler: myCustomConflictHandler } }); ","version":"Next","tagName":"h2"},{"title":"Why IndexedDB is slow and what to use instead","type":0,"sectionRef":"#","url":"/slow-indexeddb.html","content":"","keywords":"","version":"Next"},{"title":"Batched Cursor","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#batched-cursor","content":" With IndexedDB 2.0, new methods were introduced which can be utilized to improve performance. With the getAll() method, a faster alternative to the old openCursor() can be created which improves performance when reading data from the IndexedDB store. Let's say we want to query all user documents that have an age greater than 25 out of the store. To implement a fast batched cursor that only needs calls to getAll() and not to getAllKeys(), we first need to create an age index that contains the primary id as the last field. myIndexedDBObjectStore.createIndex( 'age-index', [ 'age', 'id' ] ); This is required because the age field is not unique, and we need a way to checkpoint the last returned batch so we can continue from there in the next call to getAll(). const maxAge = 25; let result = []; const tx: IDBTransaction = db.transaction([storeName], 'readonly', TRANSACTION_SETTINGS); const store = tx.objectStore(storeName); const index = store.index('age-index'); let lastDoc; let done = false; /** * Run the batched cursor until all results are retrieved * or the end of the index is reached. 
while (done === false) { await new Promise((res, rej) => { const range = IDBKeyRange.bound( /** * If we have a previous document as checkpoint, * we have to continue from its age and id values. */ [ lastDoc ? lastDoc.age : -Infinity, lastDoc ? lastDoc.id : -Infinity, ], [ maxAge + 0.00000001, String.fromCharCode(65535) ], true, false ); const openCursorRequest = index.getAll(range, batchSize); openCursorRequest.onerror = err => rej(err); openCursorRequest.onsuccess = e => { const subResult: TestDocument[] = e.target.result; lastDoc = lastOfArray(subResult); if (subResult.length === 0) { done = true; } else { result = result.concat(subResult); } res(); }; }); } console.dir(result); As the performance test results show, using a batched cursor can give a huge improvement. Interestingly, choosing a high batch size is important. When you know that all results of a given IDBKeyRange are needed, you should not set a batch size at all and just directly query all documents via getAll(). RxDB uses batched cursors in the IndexedDB RxStorage. ","version":"Next","tagName":"h2"},{"title":"IndexedDB Sharding","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#indexeddb-sharding","content":" Sharding is a technique, normally used in server side databases, where the database is partitioned horizontally. Instead of storing all documents at one table/collection, the documents are split into so-called shards and each shard is stored on one table/collection. This is done in server side architectures to spread the load between multiple physical servers which increases scalability. When you use IndexedDB in a browser, there is of course no way to split the load between the client and other servers. But you can still benefit from sharding. Partitioning the documents horizontally into multiple IndexedDB stores, has shown to have a big performance improvement in write- and read operations while only increasing initial pageload slightly. 
As shown in the performance test results, sharding should always be done by IDBObjectStore and not by database. Running a batched cursor over the whole dataset with 10 store shards in parallel is about 28% faster than running it over a single store. Initialization time increases minimally from 9 to 17 milliseconds. Getting a quarter of the dataset by batched iterating over an index, is even 43% faster with sharding than when a single store is queried. As a downside, getting 10k documents by their id is slower when it has to run over the shards. Also it can take much effort to recombine the results from the different shards into the required query result. When a query without a limit is done, the sharding method might cause a huge data load overhead. Sharding can be used with RxDB with the Sharding Plugin. ","version":"Next","tagName":"h2"},{"title":"Custom Indexes","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#custom-indexes","content":" Indexes improve the query performance of IndexedDB significantly. Instead of fetching all data from the storage when you search for a subset of it, you can iterate over the index and stop iterating when all relevant data has been found. For example to query for all user documents that have an age greater than 25, you would create an age+id index. To be able to run a batched cursor over the index, we always need our primary key (id) as the last index field. Instead of doing this, you can use a custom index which can improve the performance. The custom index runs over a helper field ageIdCustomIndex which is added to each document on write. Our index now only contains a single string field instead of two (age-number and id-string). // On document insert add the ageIdCustomIndex field. const idMaxLength = 20; // must be known to craft a custom index docData.ageIdCustomIndex = docData.age + docData.id.padStart(idMaxLength, ' '); store.put(docData); // ... 
// normal index myIndexedDBObjectStore.createIndex( 'age-index', [ 'age', 'id' ] ); // custom index myIndexedDBObjectStore.createIndex( 'age-index-custom', [ 'ageIdCustomIndex' ] ); To iterate over the index, you also use a custom-crafted keyrange, depending on the last batched cursor checkpoint. Therefore the maxLength of id must be known. // keyrange for normal index const range = IDBKeyRange.bound( [25, ''], [Infinity, Infinity], true, false ); // keyrange for custom index const range = IDBKeyRange.bound( // combine both values to a single string 25 + ''.padStart(idMaxLength, ' '), Infinity, true, false ); As shown, using a custom index can further improve the performance of running a batched cursor by about 10%. Another big benefit of using custom indexes, is that you can also encode boolean values in them, which cannot be done with normal IndexedDB indexes. RxDB uses custom indexes in the IndexedDB RxStorage. ","version":"Next","tagName":"h2"},{"title":"Relaxed durability","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#relaxed-durability","content":" Chromium-based browsers allow setting durability to relaxed when creating an IndexedDB transaction. This runs the transaction in a less secure durability mode, which can improve performance. The user agent may consider that the transaction has successfully committed as soon as all outstanding changes have been written to the operating system, without subsequent verification. As shown here, using the relaxed durability mode can improve performance slightly. The best performance improvement could be measured when many small transactions have to be run. Fewer, bigger transactions do not benefit as much. 
","version":"Next","tagName":"h2"},{"title":"Explicit transaction commits","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#explicit-transaction-commits","content":" By explicitly committing a transaction, another slight performance improvement can be achieved. Instead of waiting for the browser to commit an open transaction, we call the commit() method to explicitly close it. // .commit() is not available on all browsers, so first check if it exists. if (transaction.commit) { transaction.commit() } The improvement of this technique is minimal, but observable as these tests show. ","version":"Next","tagName":"h2"},{"title":"In-Memory on top of IndexedDB","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#in-memory-on-top-of-indexeddb","content":" To prevent transaction handling and to fix the performance problems, we need to stop using IndexedDB as a database. Instead all data is loaded into the memory on the initial page load. Here all reads and writes happen in memory which is about 100x faster. Only some time after a write occurred, the memory state is persisted into IndexedDB with a single write transaction. In this scenario IndexedDB is used as a filesystem, not as a database. There are some libraries that already do that: LokiJS with the IndexedDB AdapterAbsurd-SQLSQL.js with the emscripten Filesystem APIDuckDB Wasm ","version":"Next","tagName":"h2"},{"title":"In-Memory: Persistence","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#in-memory-persistence","content":" One downside of not directly using IndexedDB, is that your data is not persistent all the time. And when the JavaScript process exits without having persisted to IndexedDB, data can be lost. To prevent this from happening, we have to ensure that the in-memory state is written down to the disc. One point is to make persisting as fast as possible. 
LokiJS for example has the incremental-indexeddb-adapter which only saves new writes to the disc instead of persisting the whole state. Another point is to run the persisting at the correct point in time. For example the RxDB LokiJS storage persists in the following situations: When the database is idle and no write or query is running. In that time we can persist the state if any new writes appeared before.When the window fires the beforeunload event we can assume that the JavaScript process may exit at any moment and we have to persist the state. After beforeunload there are several seconds, which are sufficient to store all new changes. This has shown to work quite reliably. The only missing event that can happen is when the browser exits unexpectedly, like when it crashes or when the power of the computer is shut off. ","version":"Next","tagName":"h3"},{"title":"In-Memory: Multi Tab Support","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#in-memory-multi-tab-support","content":" One big difference between a web application and a 'normal' app, is that your users can use the app in multiple browser tabs at the same time. But when you have all database state in memory and only periodically write it to disc, multiple browser tabs could overwrite each other and you would lose data. This might not be a problem when you rely on a client-server replication, because the lost data might already be replicated with the backend and therefore with the other tabs. But this would not work when the client is offline. The ideal way to solve that problem, is to use a SharedWorker. A SharedWorker is like a WebWorker that runs its own JavaScript process, only that the SharedWorker is shared between multiple contexts. You could create the database in the SharedWorker and then all browser tabs could request the Worker for data instead of having their own database. But unfortunately the SharedWorker API does not work in all browsers. 
Safari dropped its support, and Internet Explorer and Android Chrome never adopted it. Also it cannot be polyfilled. UPDATE: Apple added SharedWorkers back in Safari 16. Instead, we could use the BroadcastChannel API to communicate between tabs and then apply a leader election between them. The leader election ensures that, no matter how many tabs are open, one tab is always the Leader. The disadvantage is that the leader election process takes some time on the initial page load (about 150 milliseconds). Also the leader election can break when a JavaScript process is fully blocked for a longer time. When this happens, a good way is to just reload the browser tab to restart the election process. Using a leader election is implemented in the RxDB LokiJS Storage. ","version":"Next","tagName":"h3"},{"title":"Further read","type":1,"pageTitle":"Why IndexedDB is slow and what to use instead","url":"/slow-indexeddb.html#further-read","content":" Offline First Database ComparisonSpeeding up IndexedDB reads and writesSQLITE ON THE WEB: ABSURD-SQLSQLite in a PWA with FileSystemAccessAPIResponse to this article by Oren Eini ","version":"Next","tagName":"h2"},{"title":"Using RxDB with TypeScript","type":0,"sectionRef":"#","url":"/tutorials/typescript.html","content":"","keywords":"","version":"Next"},{"title":"Declare the types","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#declare-the-types","content":" First you import the types from RxDB. 
import { createRxDatabase, RxDatabase, RxCollection, RxJsonSchema, RxDocument, } from 'rxdb'; ","version":"Next","tagName":"h2"},{"title":"Create the base document type","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#create-the-base-document-type","content":" First we have to define the TypeScript type of the documents of a collection: ","version":"Next","tagName":"h2"},{"title":"Option A: Create the document type from the schema","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#option-a-create-the-document-type-from-the-schema","content":" import { toTypedRxJsonSchema, ExtractDocumentTypeFromTypedRxJsonSchema, RxJsonSchema } from 'rxdb'; export const heroSchemaLiteral = { title: 'hero schema', description: 'describes a human being', version: 0, keyCompression: true, primaryKey: 'passportId', type: 'object', properties: { passportId: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer' } }, required: ['firstName', 'lastName', 'passportId'], indexes: ['firstName'] } as const; // <- It is important to set 'as const' to preserve the literal type const schemaTyped = toTypedRxJsonSchema(heroSchemaLiteral); // aggregate the document type from the schema export type HeroDocType = ExtractDocumentTypeFromTypedRxJsonSchema<typeof schemaTyped>; // create the typed RxJsonSchema from the literal typed object. 
export const heroSchema: RxJsonSchema<HeroDocType> = heroSchemaLiteral; ","version":"Next","tagName":"h3"},{"title":"Option B: Manually type the document type","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#option-b-manually-type-the-document-type","content":" export type HeroDocType = { passportId: string; firstName: string; lastName: string; age?: number; // optional }; ","version":"Next","tagName":"h3"},{"title":"Option C: Generate the document type from schema during build time","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#option-c-generate-the-document-type-from-schema-during-build-time","content":" If your schema is in a .json file or generated from somewhere else, you might generate the typings with the json-schema-to-typescript module. ","version":"Next","tagName":"h3"},{"title":"Types for the ORM methods","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#types-for-the-orm-methods","content":" We also add some ORM-methods for the document. export type HeroDocMethods = { scream: (v: string) => string; }; We can merge these into our HeroDocument. export type HeroDocument = RxDocument<HeroDocType, HeroDocMethods>; Now we can define the type for the collection which contains the documents. // we declare one static ORM-method for the collection export type HeroCollectionMethods = { countAllDocuments: () => Promise<number>; } // and then merge all our types export type HeroCollection = RxCollection<HeroDocType, HeroDocMethods, HeroCollectionMethods>; Before we can define the database, we make a helper-type which contains all collections of it. export type MyDatabaseCollections = { heroes: HeroCollection } Now the database. 
export type MyDatabase = RxDatabase<MyDatabaseCollections>; ","version":"Next","tagName":"h2"},{"title":"Using the types","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#using-the-types","content":" Now that we have declared all our types, we can use them. /** * create database and collections */ const myDatabase: MyDatabase = await createRxDatabase<MyDatabaseCollections>({ name: 'mydb', storage: getRxStorageDexie() }); const heroSchema: RxJsonSchema<HeroDocType> = { title: 'human schema', description: 'describes a human being', version: 0, keyCompression: true, primaryKey: 'passportId', type: 'object', properties: { passportId: { type: 'string' }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer' } }, required: ['passportId', 'firstName', 'lastName'] }; const heroDocMethods: HeroDocMethods = { scream: function(this: HeroDocument, what: string) { return this.firstName + ' screams: ' + what.toUpperCase(); } }; const heroCollectionMethods: HeroCollectionMethods = { countAllDocuments: async function(this: HeroCollection) { const allDocs = await this.find().exec(); return allDocs.length; } }; await myDatabase.addCollections({ heroes: { schema: heroSchema, methods: heroDocMethods, statics: heroCollectionMethods } }); // add a postInsert-hook myDatabase.heroes.postInsert( function myPostInsertHook( this: HeroCollection, // own collection is bound to the scope docData: HeroDocType, // document data doc: HeroDocument // RxDocument ) { console.log('insert to ' + this.name + '-collection: ' + doc.firstName); }, false // not async ); /** * use the database */ // insert a document const hero: HeroDocument = await myDatabase.heroes.insert({ passportId: 'myId', firstName: 'piotr', lastName: 'potter', age: 5 }); // access a property console.log(hero.firstName); // use an ORM method hero.scream('AAH!'); // use a static ORM method from the collection const amount: number = await 
myDatabase.heroes.countAllDocuments(); console.log(amount); /** * clean up */ myDatabase.destroy(); ","version":"Next","tagName":"h2"},{"title":"Known Problems","type":1,"pageTitle":"Using RxDB with TypeScript","url":"/tutorials/typescript.html#known-problems","content":" RxDB uses the WeakRef API. If your TypeScript bundler throws the error TS2304: Cannot find name 'WeakRef', you have to add ES2021.WeakRef to compilerOptions.lib in your tsconfig.json. { "compilerOptions": { "lib": ["ES2020", "ES2021.WeakRef"] } } ","version":"Next","tagName":"h2"},{"title":"Why UI applications need NoSQL","type":0,"sectionRef":"#","url":"/why-nosql.html","content":"","keywords":"","version":"Next"},{"title":"Transactions do not work with humans involved","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#transactions-do-not-work-with-humans-involved","content":" On the server side, transactions are used to run steps of logic inside of a self-contained unit of work. The database system ensures that multiple transactions do not run in parallel or interfere with each other. This works well because on the server side you can predict how long everything takes. It can be ensured that one transaction does not block everything else for too long, which would make the system stop responding to other requests. When you build a UI-based application that is used by a real human, you can no longer predict how long anything takes. The user clicks the edit button and expects to not have anyone else change the document while the user is in edit mode. Using a transaction to ensure nothing is changed in between, is not an option because the transaction could be open for a long time and other background tasks, like replication, would no longer work. So whenever a human is involved, this kind of logic has to be implemented using other strategies. Most NoSQL databases like RxDB or CouchDB use a system based on revisions and conflicts to handle these. 
","version":"Next","tagName":"h2"},{"title":"Transactions do not work with offline-first","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#transactions-do-not-work-with-offline-first","content":" When you want to build an offline-first application, it is assumed that the user can also read and write data, even when the device has lost the connection to the backend. You could use database transactions on writes to the client's database state, but enforcing a transaction boundary across other instances like clients or servers, is not possible when there is no connection. On the client you could run an update query where all color: red rows are changed to color: blue, but this would not guarantee that there will still be other red documents when the client goes online again and restarts the replication with the server. UPDATE docs SET docs.color = 'blue' WHERE docs.color = 'red'; ","version":"Next","tagName":"h2"},{"title":"Relational queries in NoSQL","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#relational-queries-in-nosql","content":" What most people want from a relational database, is to run queries over multiple tables. Some people think that they cannot do that with NoSQL, so let me explain. Let's say you have two tables with customers and cities where each city has an id and each customer has a city_id. You want to get every customer that resides in Tokyo. With SQL, you would use a query like this: SELECT * FROM city LEFT JOIN customer ON customer.city_id = city.id WHERE city.name = 'Tokyo'; With NoSQL you can just do the same, but you have to write it manually: const cityDocument = await db.cities.findOne().where('name').equals('Tokyo').exec(); const customerDocuments = await db.customers.find().where('city_id').equals(cityDocument.id).exec(); So what are the differences? 
The SQL version would run faster on a remote database server because it would aggregate all data there and return only the customers as result set. But when you have a local database, there is not really a difference. Querying the two tables by hand would have about the same performance as a JavaScript implementation of SQL that is running locally. The main benefit from using SQL is, that the SQL query runs inside of a single transaction. When a change to one of our two tables happens, while our query runs, the SQL database will ensure that the write does not affect the result of the query. This could happen with NoSQL: while you retrieve the city document, the customer table gets changed and your result is not correct for the dataset that was there when you started querying. As a workaround, you could observe the database for changes and if a change happened in between, you have to re-run everything. ","version":"Next","tagName":"h2"},{"title":"Reliable replication","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#reliable-replication","content":" In an offline first app, your data is replicated from your backend servers to your users and you want it to be reliable. The replication is reliable when, no matter what happens, every online client is able to run a replication and end up with the exact same database state as any other client. Implementing a reliable replication protocol is hard because of the circumstances of your app: Your users have unknown devices.They have an unknown internet speed.They can go offline or online at any time.Clients can be offline for several days with un-synced changes.You can have many users at the same time.The users can do many database writes at the same time to the same entities. Now let's say you have a SQL database and one of your users, called Alice, runs a query that mutates some rows, based on a condition. 
# mark all items out of stock as inStock=FALSE UPDATE Table_A SET Table_A.inStock = FALSE FROM Table_A WHERE Table_A.amountInStock = 0 At first, the query runs on the local database of Alice and everything is fine. But at the same time Bob, the other client, updates a row and sets amountInStock from 0 to 1. Now Bob's client replicates the changes from Alice and runs them. Bob will end up with a different database state than Alice because on one of the rows, the WHERE condition was not met. This is not what we want, so our replication protocol should be able to fix it. For that it has to reduce all mutations into a deterministic state. Let me loosely describe how "many" SQL replications work: Instead of just running all replicated queries, we remember a list of all past queries. When a new query comes in that happened before our last query, we roll back the previous queries, run the new query, and then re-execute our own queries on top of that. For that to work, all queries need a timestamp so we can order them correctly. But you cannot rely on the clock that is running at the client. Client-side clocks drift, they can run at a different speed, or a malicious client might even modify the clock on purpose. So instead of a normal timestamp, we have to use a Hybrid Logical Clock that takes a client-generated id and the number of the client's queries into account. Our timestamp will then look like 2021-10-04T15:29.40.273Z-0000-eede1195b7d94dd5. These timestamps can be brought into a deterministic order and each client can run the replicated queries in the same order. Watch this video to learn how to implement that. While this sounds easy and realizable, we have some problems: This kind of replication works great when you replicate between multiple SQL servers. It does not work great when you replicate between a single server and many clients. 
As mentioned above, clients can be offline for a long time which could require us to do many heavy rollbacks on each client when someone comes back after a long time and replicates the change.We have many clients where many changes can appear and our database would have to roll back many times.During the rollback, the database cannot be used for read queries.It is required that each client downloads and keeps the whole query history. With NoSQL, replication works differently. A new client downloads all current documents and each time a document changes, that document is downloaded again. Instead of replicating the query that leads to a data change, we just replicate the changed data itself. Of course, we could do the same with SQL and just replicate the affected rows of a query, like WatermelonDB does. This was a clever way to go for WatermelonDB, because it was initially made for React Native and wanted to use the fast SQLite instead of the slow AsyncStorage. But in a more general view, it defeats the whole purpose of having a replicating relational database because you have transactions locally, but these transactions become meaningless as soon as the data goes through the replication layer. ","version":"Next","tagName":"h2"},{"title":"Server side validation","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#server-side-validation","content":" Whenever there is client-side input, it must be validated on the server. On a NoSQL database, validating a changed document is trivial. The client sends the changed document to the server, and the server can then check if the user was allowed to modify that one document and if the applied changes are ok. Safely validating a SQL query is next to impossible. 
- You first need a way to parse the query, with all of SQL's complex syntax and keywords.
- You have to ensure that the query does not DOS your system.
- Then you check which rows would be affected by running the query and whether the user was allowed to change them.
- Then you check whether the mutations to those rows are valid.

For simple queries like an insert/update/delete of a single row, this might be doable. But a query with 4 LEFT JOINs will be hard. ","version":"Next","tagName":"h2"},{"title":"Event optimization","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#event-optimization","content":" With NoSQL databases, each write event affects exactly one document. This makes it easy to optimize the processing of events on the client. For example, instead of handling multiple updates to the same document when the user comes online again, you can skip everything but the last event. Similarly, you can optimize observable query results. Say you query the customers table and get a result of 10 customers. Now a new customer is added to the table and you want to know what the new query results look like. By analyzing the event you know that you only have to add the new customer to the previous result set, instead of running the whole query again. These optimizations can be applied to all NoSQL queries and even work with limit and skip operators. In RxDB this all happens in the background with the EventReduce algorithm, which calculates new query results from incoming changes. These optimizations do not really work with relational data. A change to one table could affect a query on any other table, so you cannot just calculate the new results based on the event. You would always have to re-run the full query to get the updated results. 
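A toy version of this idea, updating a sorted, limited query result from a single insert event instead of re-running the query (this is a simplified sketch, not RxDB's actual EventReduce implementation):

```typescript
interface Customer { id: string; name: string }

// Given the previous results of "all customers sorted by name, limit N",
// compute the new results from one insert event without re-querying
// the whole collection.
function applyInsertEvent(
  previous: Customer[],
  inserted: Customer,
  limit: number
): Customer[] {
  const next = [...previous, inserted].sort((a, b) =>
    a.name.localeCompare(b.name)
  );
  // The limit operator is applied on the merged result set.
  return next.slice(0, limit);
}
```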
","version":"Next","tagName":"h2"},{"title":"Migration without relations","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#migration-without-relations","content":" Sooner or later you change the layout of your data. You update the schema and you also have to migrate the stored rows/documents. In NoSQL this is often not a big deal because all of your documents are modeled as self containing piece of data. There is an old version of the document and you have a function that transforms it into the new version. With relational data, nothing is self-contained. The relevant data for the migration of a single row could be inside any other table. So when changing the schema, it will be important which table to migrate first and how to orchestrate the migration or relations. On client side applications, this is even harder because the client can close the application at any time and the migration must be able to continue. ","version":"Next","tagName":"h2"},{"title":"Everything can be downgraded to NoSQL","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#everything-can-be-downgraded-to-nosql","content":" To use an offline first database in the frontend, you have to make it compatible with your backend APIs. Making software things compatible often means you have to find the lowest common denominator. When you have SQLite in the frontend and want to replicate it with the backend, the backend also has to use SQLite. You cannot even use PostgreSQL because it has a different SQL dialect and some queries might fail. But you do not want to let the frontend dictate which technologies to use in the backend just to make replication work. With NoSQL, you just have documents and writes to these documents. You can build a document based layer on top of everything by removing functionality. It can be built on top of SQL, but also on top of a graph database or even on top of a key-value store like levelDB or FoundationDB. 
With that document layer you can build a replication API that serves documents sorted by their last update time, and there you have realtime replication. ","version":"Next","tagName":"h2"},{"title":"Caching query results","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#caching-query-results","content":" Memory is limited, and this is especially true for client-side applications where you never know how much free RAM the device really has. You want a fast realtime UI, so your database must be able to cache query results. When you run a SQL query like SELECT .., the result can be anything: an array, a number, a string, a single row, depending on the query. So the only possible caching strategy is to keep each result in memory, once per query. This scales badly because the more queries you run, the more results you have to keep in memory. When you query a NoSQL collection, you always know what the result will look like: a list of documents matching the collection's schema (if you have one). The result set is stored in memory, but because different queries against the same collection return similar documents, the documents can be de-duplicated. When multiple queries return the same document, it exists in the cache only once, and each query's cache points to the same memory object. So no matter how many queries you make, the cache never grows beyond the collection size. ","version":"Next","tagName":"h2"},{"title":"TypeScript support","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#typescript-support","content":" Modern web apps are built with TypeScript, and you want the transpiler to know the types of your query results so it can give you build-time errors when something does not match. This is quite easy in document-based systems. 
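A minimal sketch of what schema-derived typing gives you (the collection interface here is illustrative, not a specific library's API):

```typescript
// The document type is fixed per collection, so every query result is
// statically known to be an array of that type.
interface HumanDoc { id: string; name: string; age: number }

interface Collection<DocType> {
  find(selector: Partial<DocType>): DocType[];
}

function oldestHuman(humans: Collection<HumanDoc>): HumanDoc | undefined {
  const docs = humans.find({}); // inferred as HumanDoc[]
  // Accessing doc.age type-checks; a typo like doc.agee would be a
  // build-time error instead of a runtime surprise.
  return docs.sort((a, b) => b.age - a.age)[0];
}
```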
The typings for each document of a collection can be generated from the schema, and all queries against that collection will always return the given document type. With SQL you have to write the typings for each query by hand, because a query can contain aggregate functions that change the type of its result. ","version":"Next","tagName":"h2"},{"title":"What you lose with NoSQL","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#what-you-lose-with-nosql","content":" 
- You cannot run relational queries across tables inside a single transaction.
- You cannot mutate documents based on a WHERE clause in a single transaction.
- You need to resolve replication conflicts on a per-document basis.
 ","version":"Next","tagName":"h2"},{"title":"But there is database XY","type":1,"pageTitle":"Why UI applications need NoSQL","url":"/why-nosql.html#but-there-is-database-xy","content":" Yes, there are SQL databases out there that run on the client side or that have replication, but not both.
- WebSQL / sql.js: In the past there was WebSQL in the browser. It was a direct mapping to SQLite, because all browsers used the SQLite implementation. You could store relational data in it, but there was never any concept of replication. sql.js is SQLite compiled to JavaScript. It has no replication and (for now) no persistent storage; everything is stored in memory.
- WatermelonDB is a SQL database that runs on the client. It uses a document-based replication that is not able to replicate relational queries.
- Cockroach / Spanner / PostgreSQL etc. are SQL databases with replication. But they run on servers, not on clients, so they can make different trade-offs. 
Further read

- Cockroach Labs: Living Without Atomic Clocks
- Transactions, Conflicts and Revisions in RxDB
- Why MongoDB, Cassandra, HBase, DynamoDB, and Riak will only let you perform transactions on a single data item
- Make a PR to this file if you have more interesting links on this topic
 ","version":"Next","tagName":"h2"}],"options":{"excludeRoutes":["blog","releases"],"id":"default"}}