Strike some unnecessary words from docs.

Dan Helfman 2019-02-04 20:58:27 -08:00
parent 0bce77a2ac
commit 18ae91ea6e


@@ -4,10 +4,10 @@ title: How to deal with very large backups
 ## Biggish data
 
 Borg itself is great for efficiently de-duplicating data across successive
-backup archives, even when dealing with very large repositories. However, you
-may find that while borgmatic's default mode of "prune, create, and check"
-works well on small repositories, it's not so great on larger ones. That's
-because running the default consistency checks just takes a long time on large
+backup archives, even when dealing with very large repositories. But you may
+find that while borgmatic's default mode of "prune, create, and check" works
+well on small repositories, it's not so great on larger ones. That's because
+running the default consistency checks takes a long time on large
 repositories.
 
 ### A la carte actions
@@ -34,7 +34,7 @@ Another option is to customize your consistency checks. The default
 consistency checks run both full-repository checks and per-archive checks
 within each repository.
 
-But if you find that archive checks are just too slow, for example, you can
+But if you find that archive checks are too slow, for example, you can
 configure borgmatic to run repository checks only. Configure this in the
 `consistency` section of borgmatic configuration:
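The hunk ends just before the snippet the docs refer to, so it is not part of this commit. For context, a repository-only check in a borgmatic configuration file of that era looked roughly like this (a sketch based on borgmatic's section-style YAML config, not quoted from the commit):

```yaml
consistency:
    # Run only the full-repository check, skipping the slower
    # per-archive checks.
    checks:
        - repository
```

With this in place, `borgmatic check` verifies repository integrity without walking every archive, which is the speedup the surrounding docs describe for large repositories.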