Grandfather-Father-Son Backup Storage Calculator

JJ Ben-Joseph

How the Grandfather-Father-Son (GFS) backup storage calculator works

A Grandfather-Father-Son (GFS) rotation is a classic retention scheme used to keep multiple “layers” of backups over time. The idea is simple: keep frequent short-term restore points (the sons), keep less frequent medium-term restore points (the fathers), and keep the least frequent long-term restore points (the grandfathers). This calculator estimates how much raw storage you need to hold those retained copies at the same time.

What each term means

  • Full backup size: the size of one complete backup of your protected data set, in gigabytes (GB). This is typically a single full restore point (e.g., an image backup or a full file-level backup).
  • Incremental backup size: the size of one incremental backup (changes since the last backup), in GB. Incrementals are often daily, but you can treat them as “one incremental period.”
  • Daily incremental copies (Sons): how many incremental restore points you keep in the short-term tier. Example: keeping 7 incrementals for the last week.
  • Weekly full copies (Fathers): how many weekly full backups you keep. Example: 4 means you retain 4 weekly full restore points.
  • Monthly full copies (Grandfathers): how many monthly full backups you keep. Example: 12 means you retain 12 monthly full restore points.

Core model and formulas

This page uses a capacity planning model: it multiplies the size of each backup type by how many of that type you retain concurrently, then adds them together. In the simplest form:

TotalStorage = F×W + F×M + I×D

Where:

  • F = full backup size (GB)
  • I = incremental backup size (GB)
  • D = number of retained daily incrementals (sons)
  • W = number of retained weekly full backups (fathers)
  • M = number of retained monthly full backups (grandfathers)

The calculator also breaks the total into three components so you can see where the storage goes:

  • Incremental tier storage = I × D
  • Weekly tier storage = F × W
  • Monthly tier storage = F × M
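The model above can be sketched in a few lines of Python. This is a minimal illustration of the arithmetic, not the calculator's actual code; the function name and signature are invented for this example.

```python
def gfs_storage(full_gb, incr_gb, sons, fathers, grandfathers):
    """Return (incremental, weekly, monthly, total) tier storage in GB."""
    incremental = incr_gb * sons       # I x D: daily incremental tier
    weekly = full_gb * fathers         # F x W: weekly full tier
    monthly = full_gb * grandfathers   # F x M: monthly full tier
    total = incremental + weekly + monthly
    return incremental, weekly, monthly, total

# The worked example from this page: F=100 GB, I=10 GB, D=7, W=4, M=12
inc, wk, mo, total = gfs_storage(100, 10, 7, 4, 12)
print(inc, wk, mo, total)        # 70 400 1200 1670
print(f"{total / 1000:.2f} TB")  # 1.67 TB (decimal convention, 1 TB = 1000 GB)
```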

How to interpret the result

The output is an estimate of the concurrent storage footprint for retained restore points under the assumptions above. In other words: “If I keep these daily incrementals and those weekly/monthly fulls at the same time, how much space do I need just for the backup data?”

Use the result for:

  • Repository sizing (NAS/SAN, backup appliance, object storage bucket capacity planning).
  • Budget estimation (rough cost forecasting for storage allocation).
  • Retention trade-offs (e.g., what happens if you increase monthly retention from 12 to 24).

To make the number more practical, the calculator displays totals in both GB and TB. For TB conversion it uses the common decimal convention: 1 TB = 1000 GB. (Some systems report tebibytes, where 1 TiB = 1024 GiB; your storage UI may differ.)

Worked example

Assume:

  • Full backup size F = 100 GB
  • Incremental backup size I = 10 GB
  • Daily incrementals retained D = 7 (keep 7 restore points)
  • Weekly fulls retained W = 4 (keep 4 weeks)
  • Monthly fulls retained M = 12 (keep 12 months)

Then:

  • Incrementals: I × D = 10 × 7 = 70 GB
  • Weekly fulls: F × W = 100 × 4 = 400 GB
  • Monthly fulls: F × M = 100 × 12 = 1200 GB

Total = 70 + 400 + 1200 = 1670 GB ≈ 1.67 TB.

Interpretation: in this configuration, the long-term monthly retention dominates capacity. If you need to reduce storage, lowering monthly retention (or changing the monthly backup type) often has the biggest effect—assuming your full size remains constant.

Common GFS retention patterns (comparison table)

The right retention settings depend on business requirements (RPO/RTO), compliance, and how frequently data changes. The table below compares a few typical patterns and what they emphasize:

Each pattern is listed as sons / fathers / grandfathers:

  • Basic IT default: 7 / 4 / 12. Good for general-purpose retention with a year of monthly restore points; trade-off: the monthly tier can dominate storage.
  • Short-term heavy: 14 / 4 / 6. Good for frequent recent restores with less long-term history; trade-off: higher incremental storage and fewer compliance-friendly restore points.
  • Compliance-oriented: 7 / 8 / 24. Good for a longer audit window and more weekly granularity; trade-off: a large full-backup footprint that may need tiered storage.
  • Lean archive: 5 / 4 / 3. Good for minimal capacity usage while keeping a few historical points; trade-off: limited rollback options for older data.
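To compare these patterns on equal footing, the snippet below evaluates each one with the illustrative sizes from the worked example (F = 100 GB, I = 10 GB). These sizes are assumptions for demonstration only; substitute your own measurements.

```python
# Retention patterns as (sons, fathers, grandfathers)
patterns = {
    "Basic IT default":    (7, 4, 12),
    "Short-term heavy":    (14, 4, 6),
    "Compliance-oriented": (7, 8, 24),
    "Lean archive":        (5, 4, 3),
}
F, I = 100, 10  # illustrative full/incremental sizes in GB

for name, (d, w, m) in patterns.items():
    total = I * d + F * (w + m)  # I x D + F x W + F x M
    print(f"{name}: {total} GB")
```

Running this shows the spread: the compliance-oriented pattern needs roughly four times the storage of the lean archive at these sizes.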

Assumptions and limitations

This calculator intentionally stays simple, which makes it easy to use—but also means the estimate may differ from real-world backup products. Key assumptions:

  • Full backups are counted independently: weekly and monthly fulls are treated as separate full-size restore points. Some systems “promote” a weekly to monthly without storing a second copy; this calculator does not attempt to detect that.
  • Incrementals are not chained to specific fulls: many backup systems store incrementals associated with a base full and require the chain for restore. Here we only model the space of the incrementals themselves (I × D).
  • Sizes are constant: it assumes each full is the same size and each incremental is the same size. In reality, data growth and change rate vary by day/week/month.
  • No compression/deduplication modeled: backup repositories often use compression and deduplication; cloud/object storage may also have overhead. If your platform achieves (for example) 2:1 data reduction, actual capacity could be roughly half—but this varies widely by data type and backup method.
  • No overhead for metadata or snapshots: index databases, metadata, immutability overhead, and filesystem slack are not included.
  • Retention counts are “concurrent copies”: entering 12 monthly means you keep 12 monthly restore points stored at the same time, not “created over time” without overlap.

Practical tips before you finalize storage

  • Add a safety margin (commonly 10–30%) for growth, overhead, and variability.
  • If you use deduplication, validate using your vendor’s reported “stored size” over a few backup cycles.
  • Consider whether your “monthly” point is a true full, synthetic full, or simply a tagged weekly—this affects real storage.
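The safety-margin tip translates directly into code. This is a trivial helper, included only to make the recommendation concrete; the 20% default is one reasonable choice within the 10-30% range suggested above.

```python
def with_margin(total_gb, margin=0.20):
    """Apply a safety margin (e.g. 0.20 = 20%) for growth, overhead,
    and variability on top of the raw GFS total."""
    return total_gb * (1 + margin)

# Applied to the worked example's 1670 GB total:
print(with_margin(1670))        # 20% margin
print(with_margin(1670, 0.30))  # 30% margin
```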

The grandfather-father-son (GFS) scheme remains one of the most enduring approaches to long-term data protection. By rotating daily incrementals, weekly fulls, and monthly archives, it balances the need for frequent recovery points with the desire for historical snapshots. This calculator translates those abstract retention goals into concrete storage requirements so that administrators can provision disks, tapes, or cloud buckets with confidence.

The underlying arithmetic is straightforward yet powerful. Suppose a daily incremental backup consumes I gigabytes and you keep D of them. The incremental tier therefore occupies D · I gigabytes. Weekly and monthly full copies, each of size F, require W · F and M · F respectively. Adding these tiers yields the total storage

T = D · I + W · F + M · F

While the computation can be coded in a single line, the decision of how many copies to retain involves a rich blend of risk tolerance, regulatory mandates, and operational realities. Organizations subject to financial audits might require seven years of monthly archives. A creative agency concerned primarily with short-term edits may prefer a leaner rotation. The GFS pattern offers flexibility: you can dial up or down the number of sons, fathers, or grandfathers to tailor cost versus recoverability. The table above updates instantly with your inputs, revealing how each tier contributes to the total. For example, doubling the incremental retention from seven to fourteen copies increases the daily tier linearly, whereas adding another monthly full increases the total by an entire full backup’s size.

Backup planning intersects with probability theory and the economics of downtime. Each additional restore point reduces the expected time to recover from corruption or accidental deletion. Imagine an engineer overwriting a configuration file at 4 PM and noticing at 5 PM. If only nightly backups exist, the most recent copy might already contain the mistake. Incremental snapshots taken every hour or every day shrink this vulnerability window. However, storage is finite and budgets are real. Decision makers thus weigh the marginal value of an extra snapshot against the cost of the media to house it. The formula above clarifies those trade-offs by showing exactly how many gigabytes each retention choice consumes.

Beyond simple storage sums, practitioners must also consider growth rate. Suppose your full backup today is 100 GB, but your dataset grows by 5 GB per month. A one-year horizon of monthly grandfathers could therefore demand not merely twelve times 100 GB but a series increasing from 100 to 155 GB. The calculator assumes constant sizes, yet the explanatory text here illustrates how to extend the model. Let the full size at month n be F + g · n, where g is growth per month. The total for monthly grandfathers becomes a summation over n = 0 to M − 1 of (F + g · n), which has the closed form M · F + g · M · (M − 1) / 2. Evaluating this closed form, or modeling it in a spreadsheet, can further refine budgeting exercises.
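The growth extension described above can be checked directly. This sketch sums the growing full sizes month by month; the function name is illustrative, and it models only the monthly grandfather tier.

```python
def monthly_tier_with_growth(full_gb, growth_gb, months):
    """Sum of (F + g*n) for n = 0 .. months-1: each retained monthly
    full is larger than the one before it by growth_gb."""
    return sum(full_gb + growth_gb * n for n in range(months))

# 100 GB full today, growing 5 GB/month, 12 monthly grandfathers:
print(monthly_tier_with_growth(100, 5, 12))  # 1530, versus 1200 at constant size
```

The result matches the closed form M · F + g · M · (M − 1) / 2 = 12 × 100 + 5 × 66 = 1530 GB.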

The human element is equally vital. Data protection policies influence employee behavior and vice versa. If staff know that only nightly backups exist, they may exercise greater caution before performing bulk operations late in the day. Conversely, frequent snapshots can foster experimentation by reducing fear of irreversible mistakes. Documenting the retention policy and communicating it to stakeholders helps align expectations. The calculator’s lengthy explanation doubles as a primer for such discussions, arming administrators with talking points about why certain tiers exist and how they safeguard the organization.

Although the GFS model predates cloud computing, it adapts elegantly to modern environments. Object storage services like Amazon S3 or Azure Blob offer lifecycle rules that expire old objects automatically. By coupling this calculator’s output with lifecycle policies, you can provision a bucket of the appropriate size and trust the platform to prune excess copies. For on-premises deployments, the numbers inform purchase decisions for NAS arrays or tape libraries. Even when storage is plentiful, computing the total remains worthwhile to estimate replication bandwidth and disaster-recovery synchronization windows. The weight of data moving across networks can impact performance just as much as the disk footprint itself.

Security considerations also intertwine with retention. Longer histories imply more media to protect. Encryption keys must remain accessible for as long as any backup persists, yet they should also rotate to reduce exposure. A nuanced policy might encrypt each monthly grandfather with a dedicated key, archived separately under stringent controls. Such strategies add overhead beyond the gigabytes counted here, but understanding the baseline storage requirement is the first step toward layered defenses. In compliance-heavy industries, auditable logs of backup creation and verification may themselves require storage planning.

Real-world case studies illustrate how organizations tune GFS. A small nonprofit may choose 7 daily incrementals, 4 weekly fulls, and 6 monthly archives, yielding a storage footprint of 7×10 GB + 4×100 GB + 6×100 GB = 1,070 GB. A media production company dealing with terabytes per project might retain only 3 daily incrementals but keep 26 weekly fulls to capture each client revision. Their formula becomes 3×500 GB + 26×5,000 GB = 131,500 GB. Such scenarios show how the same mathematics scales from modest to massive operations.

Another extension involves off-site replication. Many businesses duplicate backups to a secondary location or cloud region. If you maintain two copies of each backup tier for redundancy, simply multiply the calculator's total by two. Advanced schemes like 3-2-1 (three copies on two media types with one off-site) further increase requirements. The equation evolves into T = c · (D·I + W·F + M·F), where c denotes the copy count. Recognizing these relationships prevents underestimating storage when designing resilient systems.
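The copy-count extension is a one-line change to the base formula. As before, this is an illustrative sketch rather than the calculator's actual implementation.

```python
def gfs_total_with_copies(full_gb, incr_gb, sons, fathers, grandfathers, copies=1):
    """T = c * (D*I + W*F + M*F), where copies is the number of
    redundant copies of every tier (e.g. 2 for primary + off-site)."""
    single_copy = incr_gb * sons + full_gb * (fathers + grandfathers)
    return copies * single_copy

# Worked example duplicated to a second site:
print(gfs_total_with_copies(100, 10, 7, 4, 12, copies=2))  # 3340
```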

Ultimately, the GFS backup storage calculator is both a practical tool and an educational device. By mapping the abstract notion of retention policies to tangible numbers, it demystifies the process of capacity planning. The formula above encapsulates the core computation, while the narrative explores nuances from growth and security to human factors and cloud integration. Whether you are a seasoned administrator reviewing your disaster recovery strategy or a newcomer learning best practices, the combination of interactive form, dynamic table, and comprehensive explanation equips you to design backup regimes that are robust, economical, and well understood.
