When we think of scaling, we usually imagine spiky charts of users hitting a database or processing computationally expensive queries. What we don’t always think about is deleting data. Handling large amounts of deletes is an important part of scaling a database. Imagine a system that’s required to delete historical records by a specific deadline. If these records are hundreds of gigabytes in size, it will likely be difficult to delete them all without bogging the database down for the rest of its users. This exact scenario hasn’t always been easy with the Firebase Realtime Database, but we’re excited to say that it just got a lot easier.
Today, we’re introducing a new way to efficiently perform large deletes!
Efficient large deletes
How to delete a large node without maxing out capacity
If you want to delete a large node, the new recommended approach is to use the Firebase CLI (> v6.4.0). The CLI automatically detects a large node and performs a chunked delete efficiently.
$ firebase database:remove /path/to/delete
Keep in mind that in order to delete a large node, the Firebase CLI has to break it down into chunks. This means that clients can see a partial state where some of the data is missing. Writes within the path being deleted will still succeed, but the CLI will eventually delete all data at that path. This behavior is acceptable if no app depends on this node. However, if there are active listeners within the delete path, please make sure the listeners can gracefully handle partial data.
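The chunked behavior described above can be sketched as follows. This is a simplified illustration, not the CLI’s actual implementation: a plain in-memory dict stands in for the database, and each chunk of child removals stands in for one size-limited write.

```python
# Sketch of a chunked delete. A dict stands in for the database;
# the real CLI issues a series of size-limited writes instead.

def chunked_delete(db: dict, path: str, chunk_size: int = 2) -> None:
    """Delete the node at `path`, removing children a few at a time.

    Between chunks, readers observe a partial state: some children
    are already gone while others are still present.
    """
    parts = [p for p in path.strip("/").split("/") if p]
    node = db
    for part in parts[:-1]:
        node = node[part]
    key = parts[-1]
    target = node[key]
    if isinstance(target, dict):
        while target:
            # Remove up to `chunk_size` children per "write".
            for child in list(target)[:chunk_size]:
                del target[child]
    del node[key]

db = {"logs": {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}, "users": {"u1": {}}}
chunked_delete(db, "/logs")
# "logs" is fully removed; sibling paths like "users" are untouched.
```

A listener attached under `/logs` during the delete would fire repeatedly with shrinking snapshots, which is why listeners in the delete path need to tolerate missing children.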
If you want consistency and fast deletion, consider using a special field, a.k.a. a tombstone, to mark the document as hidden, and then run a Cloud Functions cron job to asynchronously purge the data. You can use Firebase Rules to disallow access to hidden documents.
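The tombstone pattern can be sketched like this. Again an in-memory dict stands in for the database, and the function names (`hide`, `visible_docs`, `purge_hidden`) are illustrative, not part of any Firebase API:

```python
# Tombstone sketch: hide a document with one small write, filter it
# from reads, and purge hidden documents later in a background job.

def hide(db: dict, doc_id: str) -> None:
    # A single small write: fast, and atomic from the client's view.
    db[doc_id]["hidden"] = True

def visible_docs(db: dict) -> dict:
    # Clients (or, in the real database, Firebase Rules) skip
    # documents carrying the tombstone field.
    return {k: v for k, v in db.items() if not v.get("hidden")}

def purge_hidden(db: dict) -> None:
    # Run asynchronously, e.g. from a scheduled Cloud Function,
    # where a slow chunked delete doesn't block any user.
    for doc_id in [k for k, v in db.items() if v.get("hidden")]:
        del db[doc_id]
```

The key design point is that the user-facing operation is the cheap `hide` write; the expensive physical delete happens later, off the critical path.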
How to prevent large deletes from happening unintentionally
We’ve also added a configuration option, defaultWriteSizeLimit, to the Realtime Database that allows you to specify a write size limit. Operations (large deletes and writes) that exceed this limit are aborted instead of being executed on your database.
You can use this option to prevent app code from accidentally triggering a large operation, which would make your app unresponsive for a time. For more detail, please see our documentation about this option.
You can check and update the configuration via the CLI tool (version 6.4.0 and newer). There are four available thresholds; pick the one appropriate for your application’s requirements:
small - Abort if the estimated write time is longer than 10s.
medium - Abort if the estimated write time is longer than 30s.
large - Abort if the estimated write time is longer than 1min.
unlimited - No write size limit. All requests will be processed, at the risk of maxing out capacity.
Note: The target time is not a guaranteed cutoff. The estimated time may differ from the actual write time.
$ firebase database:settings:set defaultWriteSizeLimit unlimited --instance <database-name>
$ firebase database:settings:get defaultWriteSizeLimit --instance <database-name>
For REST requests, you can override defaultWriteSizeLimit with the writeSizeLimit query parameter. In addition, REST requests support a special threshold:
tiny - Abort if the estimated write time is longer than 1s. (Used by firebase database:remove to minimize resource consumption.)
$ curl -X PUT "https://<database-name>.firebaseio.com/path.json?writeSizeLimit=medium"
defaultWriteSizeLimit for new databases is large. In order to avoid affecting existing apps, the setting will remain unlimited for existing projects for now.
We do want to extend this protection to everyone. So this summer (June–August, 2019), we will set defaultWriteSizeLimit to large for existing databases that have not configured it. To avoid disruption, we will exclude any databases that have triggered at least one large delete in the past three months.
Consider setting defaultWriteSizeLimit now
These controls can help you keep your apps responsive and your users happy. We suggest setting defaultWriteSizeLimit for your existing apps today.
Let us know what you think
Let us know what you think of this new feature! Leave a message in our Google group.