WORK IN PROGRESS

March 21, 2026

Odoo Filestore vs Attachment Metadata Divergence Recovery Runbook

A production-safe runbook for incidents where Odoo ir_attachment metadata and on-disk filestore objects diverge, causing missing files, broken previews, and download failures.

When users report broken downloads (File Not Found) while the attachment rows still exist in PostgreSQL, you likely have divergence between ir_attachment metadata and the Odoo filestore. This runbook gives a deterministic sequence: scope the impact, freeze risky writes, recover missing objects safely, and prevent recurrence.
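For context: in the default file store, recent Odoo versions content-address attachment payloads by SHA1 checksum, and store_fname is that checksum sharded under a directory named after its first two hex characters. A minimal sketch of the layout (the helper name is ours, not Odoo's):

```python
import hashlib
import os

def filestore_relpath(data: bytes) -> str:
    """Mirror Odoo's default layout: sha1 checksum, sharded by its first two hex chars."""
    checksum = hashlib.sha1(data).hexdigest()
    return os.path.join(checksum[:2], checksum)

print(filestore_relpath(b"hello"))
# -> aa/aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```

This is why a divergence incident shows up as a missing `<db_name>/xx/<checksum>` path: the row knows the checksum, but the content-addressed object is gone.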

Incident signals

  • Odoo logs contain errors like No such file or directory for filestore/<db_name>/... paths.
  • Users can see attachment records in chatter/documents but cannot download them.
  • Backup restore or storage migration recently occurred.
  • Disk-level cleanup scripts ran on filestore paths.
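The first signal can be quantified quickly from captured log text. A hedged sketch (the regex and helper name are ours) that tallies filestore-miss errors per path:

```python
import re

# Matches "No such file or directory" lines and captures the filestore-relative path.
MISS_RE = re.compile(r"No such file or directory.*?(filestore/\S+)")

def count_filestore_misses(log_text: str) -> dict:
    """Count filestore-miss log lines per referenced path."""
    counts = {}
    for match in MISS_RE.finditer(log_text):
        path = match.group(1).strip("'\"")
        counts[path] = counts.get(path, 0) + 1
    return counts

sample = (
    "FileNotFoundError: [Errno 2] No such file or directory: "
    "'/var/lib/odoo/.local/share/Odoo/filestore/prod/ab/abc123'\n"
)
print(count_filestore_misses(sample))
```

Feed it `journalctl -u odoo` output (or the Odoo log file) to see whether misses cluster on a few paths or are widespread.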

Step 0 — Stabilize the system before touching data

  1. Pause non-essential jobs that create/update attachments (imports, mail fetchers, large document sync).
  2. Keep Odoo online for read traffic if possible, but stop admin bulk operations.
  3. Capture a point-in-time PostgreSQL backup and a filestore snapshot before remediation.
# Example safety snapshot commands (adapt to your environment)
pg_dump "$ODOO_DB_URI" -Fc -f /var/backups/odoo-pre-filestore-incident.dump
rsync -a --delete /var/lib/odoo/.local/share/Odoo/filestore/<db_name>/ /var/backups/filestore-pre-incident/

Do not run destructive cleanup until you have both DB and filestore rollback points.

Step 1 — Confirm Odoo is using filestore-backed attachments

-- In standard Odoo filestore mode, store_fname is populated and db_datas is usually null
select
  count(*) as total,
  count(*) filter (where store_fname is not null and store_fname <> '') as filestore_backed,
  count(*) filter (where db_datas is not null) as db_backed
from ir_attachment;

If db_backed is unexpectedly high, stop and review the history of the ir_attachment.location system parameter before proceeding.
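The Step 1 counts can be turned into a go/no-go signal. A tiny helper (ours, with deliberately simple classification rules):

```python
def storage_mode(total: int, filestore_backed: int, db_backed: int) -> str:
    """Classify attachment storage from the Step 1 counts.

    'file'        -> safe to continue this runbook
    'db'/'mixed'  -> stop and audit ir_attachment.location history first
    """
    if total == 0:
        return "empty"
    if db_backed == 0:
        return "file"
    if filestore_backed == 0:
        return "db"
    return "mixed"

print(storage_mode(120000, 119950, 0))  # -> file
```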

Step 2 — Identify metadata rows pointing to missing files

Run this Python check on the Odoo host (fast, no writes):

python3 - <<'PY'
import os
import psycopg2

db = os.environ["PGDATABASE"]
base = f"/var/lib/odoo/.local/share/Odoo/filestore/{db}"
# Empty DSN: psycopg2/libpq read PGHOST, PGUSER, PGPASSWORD, PGDATABASE, etc.
conn = psycopg2.connect("")
cur = conn.cursor()
cur.execute("""
select id, res_model, res_id, store_fname, file_size, create_date
from ir_attachment
where store_fname is not null and store_fname <> ''
""")
missing = []
for rid, model, res_id, store_fname, file_size, create_date in cur.fetchall():
    p = os.path.join(base, store_fname)
    if not os.path.exists(p):
        missing.append((rid, model, res_id, store_fname, file_size, create_date))
cur.close()
conn.close()

print(f"missing_count={len(missing)}")
for row in missing[:50]:
    print("|".join(str(x) for x in row))
PY

If the count is large, export the full results to a file and group them by res_model to prioritize business-critical records first.
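The grouping can reuse the rows the Step 2 script prints. A sketch assuming each row has the shape (id, res_model, res_id, store_fname, file_size, create_date), as above:

```python
from collections import Counter

def group_missing_by_model(missing_rows):
    """Count missing attachments per res_model, most-affected model first."""
    counts = Counter(row[1] or "(no model)" for row in missing_rows)
    return counts.most_common()

rows = [
    (1, "account.move", 7, "ab/abc", 1024, "2026-03-01"),
    (2, "account.move", 8, "cd/cde", 2048, "2026-03-02"),
    (3, None, None, "ef/efa", 512, "2026-03-03"),
]
print(group_missing_by_model(rows))
# -> [('account.move', 2), ('(no model)', 1)]
```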

Step 3 — Identify orphaned files on disk (optional but useful)

This finds files present on disk with no matching store_fname row.

# 1) Export known store_fname values
psql "$ODOO_DB_URI" -Atc "select store_fname from ir_attachment where store_fname is not null and store_fname <> ''" \
  | sort > /tmp/store_fname_from_db.txt

# 2) List files from filestore relative path format
cd /var/lib/odoo/.local/share/Odoo/filestore/<db_name>
find . -type f | sed 's#^./##' | sort > /tmp/store_fname_from_disk.txt

# 3) Compare (disk entries not referenced in DB)
comm -23 /tmp/store_fname_from_disk.txt /tmp/store_fname_from_db.txt > /tmp/orphan_filestore_paths.txt
wc -l /tmp/orphan_filestore_paths.txt

Treat orphan files as evidence until incident closure; do not delete during active recovery.
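The comm comparison above is a plain set difference, and the same pass also yields the Step 2 "missing" set for free. A sketch in Python:

```python
def diff_filestore(db_fnames, disk_paths):
    """Return (orphans_on_disk, missing_from_disk) as sorted lists.

    orphans: files on disk with no ir_attachment row   (comm -23 disk db)
    missing: store_fname rows with no file on disk     (comm -13 disk db)
    """
    db_set, disk_set = set(db_fnames), set(disk_paths)
    return sorted(disk_set - db_set), sorted(db_set - disk_set)

orphans, missing = diff_filestore(
    db_fnames=["ab/abc", "cd/cde"],
    disk_paths=["ab/abc", "ee/eef"],
)
print(orphans, missing)
# -> ['ee/eef'] ['cd/cde']
```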

Step 4 — Recovery paths (safe order)

Path A: Restore missing files from nearest good filestore backup (preferred)

  1. Restore only the missing paths; do not overwrite the whole filestore.
  2. Re-check the missing set after each batch.
# Example: restore only one missing object
rsync -av /mnt/backup/filestore/<db_name>/ab/abcdef1234... \
          /var/lib/odoo/.local/share/Odoo/filestore/<db_name>/ab/
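The restore-then-recheck loop in Path A can be batched. A sketch (the chunker is ours) assuming the Step 2 export holds one store_fname per line; each batch could be written to a list file and fed to `rsync -av --files-from=<list> <backup_root>/ <filestore_root>/`, re-running the missing-file check after each batch:

```python
def batches(paths, size):
    """Split restore targets into fixed-size batches for restore -> recheck loops."""
    for i in range(0, len(paths), size):
        yield paths[i:i + size]

missing = [f"ab/file{n}" for n in range(5)]
for n, batch in enumerate(batches(missing, 2)):
    # In a real run: write `batch` to a temp list file, rsync it, then re-check.
    print(n, batch)
```

Small batches keep the blast radius of a bad backup path contained and make the command log easy to audit.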

Path B: For non-critical regenerated artifacts, rebuild instead of restore

For attachments that can be deterministically recreated (for example generated reports), use controlled re-generation jobs and leave audit notes.

Path C: Mark irrecoverable references explicitly

If backup gaps make objects unrecoverable, coordinate with application owners and mark impacted records/workflows so users do not repeatedly hit failing downloads.

Step 5 — Verification before reopening normal write load

  • Missing-file check returns zero (or only accepted exceptions).
  • Odoo UI test: download attachments from top impacted models (account.move, mail.message, documents.document, etc.).
  • No new File Not Found errors in logs for at least 15 minutes under normal traffic.
# Example quick log watch
journalctl -u odoo -f | grep -E "File Not Found|No such file|ir_attachment"

Rollback plan

If remediation worsens impact:

  1. Stop write-heavy jobs again.
  2. Restore PostgreSQL and filestore from the Step 0 snapshots to a known-consistent point.
  3. Re-test critical attachment downloads.
  4. Re-attempt recovery with smaller batches and full command logging.

Post-incident hardening checklist

  • Enforce backup coupling: PostgreSQL backup and filestore snapshot must share the same recovery point objective window.
  • Add a daily integrity job that samples ir_attachment.store_fname paths and alerts on missing files.
  • Restrict manual filestore access; remove ad-hoc cleanup scripts from production hosts.
  • During migrations/restores, run an attachment integrity validation before declaring success.
  • Document storage topology (local disk, NFS, object-store mount) and ownership boundaries.
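The daily integrity job from the checklist can start as a small sampler (the helper name and demo layout are ours); in production the store_fname list would come from the Step 1 query, and any non-empty result would raise an alert:

```python
import os
import random
import tempfile

def sample_missing(base, store_fnames, k, seed=None):
    """From a random sample of k store_fnames, return those absent on disk."""
    rng = random.Random(seed)
    sample = rng.sample(store_fnames, min(k, len(store_fnames)))
    return [f for f in sample if not os.path.exists(os.path.join(base, f))]

# Demo with a temp dir standing in for the filestore root.
with tempfile.TemporaryDirectory() as base:
    os.makedirs(os.path.join(base, "ab"))
    open(os.path.join(base, "ab/aaa"), "w").close()
    print(sample_missing(base, ["ab/aaa", "ab/bbb"], 2, seed=0))
    # -> ['ab/bbb']
```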

The operational rule: treat PostgreSQL rows and filestore objects as one atomic data domain. Restoring only one side is the fastest path to repeat incidents.
