Strike some unnecessary words from docs.

Dan Helfman 2019-02-04 20:58:27 -08:00
parent 0bce77a2ac
commit 18ae91ea6e


@@ -4,10 +4,10 @@ title: How to deal with very large backups
## Biggish data
Borg itself is great for efficiently de-duplicating data across successive
-backup archives, even when dealing with very large repositories. However, you
-may find that while borgmatic's default mode of "prune, create, and check"
-works well on small repositories, it's not so great on larger ones. That's
-because running the default consistency checks just takes a long time on large
+backup archives, even when dealing with very large repositories. But you may
+find that while borgmatic's default mode of "prune, create, and check" works
+well on small repositories, it's not so great on larger ones. That's because
+running the default consistency checks takes a long time on large
repositories.
### A la carte actions
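(The hunk ends just before this section's body. For illustration, here's a sketch of what running actions "a la carte" looks like on the command line, assuming the per-action flags borgmatic shipped around the time of this commit; the exact flags are an assumption and not part of this diff.)

```bash
# Run only the prune and create actions, skipping the slow check action.
# Assumes the 1.2.x-era per-action flags; later borgmatic versions use
# subcommands instead (e.g. "borgmatic prune create").
borgmatic --prune --create --verbosity 1
```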
@@ -34,7 +34,7 @@ Another option is to customize your consistency checks. The default
consistency checks run both full-repository checks and per-archive checks
within each repository.
-But if you find that archive checks are just too slow, for example, you can
+But if you find that archive checks are too slow, for example, you can
configure borgmatic to run repository checks only. Configure this in the
`consistency` section of borgmatic configuration:
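(The diff cuts off before the configuration example that follows this line. As a sketch, a repository-only check configuration for borgmatic of this era looks roughly like this; the option names come from borgmatic's consistency schema and should be verified against your installed version.)

```yaml
consistency:
    # Run only full-repository consistency checks, skipping the slower
    # per-archive checks. The "checks" list and "repository" value are
    # assumed from borgmatic's configuration schema at the time.
    checks:
        - repository
```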