[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"blog-pg-catalog-vs-pg-dump-schema-snapshots":3,"related-pg-catalog-vs-pg-dump-schema-snapshots":1347},{"id":4,"title":5,"author":6,"body":7,"description":1333,"extension":1334,"keywords":1335,"meta":1338,"navigation":127,"path":1339,"publishedAt":1340,"readTime":1341,"seo":1342,"slug":1343,"stem":1344,"updatedAt":1345,"__hash__":1346},"blog\u002Fblog\u002Fpg-catalog-vs-pg-dump-schema-snapshots.md","pg_catalog vs pg_dump for schema snapshots","Maxwell Kimaiyo",{"type":8,"value":9,"toc":1318},"minimark",[10,19,22,31,36,43,51,57,60,66,69,141,144,194,197,200,204,207,255,269,272,275,281,286,293,296,302,306,309,419,423,426,529,532,536,539,595,598,653,657,660,663,676,679,682,685,810,814,817,820,887,892,895,899,909,1224,1227,1231,1237,1243,1258,1263,1277,1283,1287,1293,1299,1302,1311,1314],[11,12,13,14,18],"p",{},"If you have ever tried to detect PostgreSQL schema drift by diffing two ",[15,16,17],"code",{},"pg_dump"," outputs, you have probably run into the same frustrating problem: the diff says the schema changed, but nothing actually did.",[11,20,21],{},"A column seems to have moved. A constraint looks deleted and re-added. An index appears different for no real reason. The output changes, even though the schema is logically the same.",[11,23,24,25,27,28,30],{},"That is not really a bug in ",[15,26,17],{},". It is a side effect of what ",[15,29,17],{}," was built to do.",[11,32,33,35],{},[15,34,17],{}," is great for backup and restore. It is not great for deterministic comparison.",[11,37,38,39,42],{},"If your goal is schema drift detection, querying ",[15,40,41],{},"pg_catalog"," directly is usually the more reliable approach. 
With explicit ordering, you can produce stable snapshots that are much easier to diff, hash, and compare over time.",[44,45,47,48,50],"h2",{"id":46},"why-pg_dump-creates-noisy-diffs","Why ",[15,49,17],{}," creates noisy diffs",[11,52,53,54,56],{},"The key issue is that ",[15,55,17],{}," is designed to recreate a database, not to produce stable text output for comparison.",[11,58,59],{},"That distinction matters.",[11,61,62,63,65],{},"When ",[15,64,17],{}," generates schema-only SQL, its job is to emit valid DDL in an order that works for restore. It does not promise that objects will always appear in the same order across runs. In simple databases, you might get identical output twice in a row. But as schemas grow more complex, that becomes less reliable.",[11,67,68],{},"Take a small example:",[70,71,76],"pre",{"className":72,"code":73,"language":74,"meta":75,"style":75},"language-sql shiki shiki-themes github-light github-dark","CREATE TABLE orders (\n    id bigserial PRIMARY KEY,\n    customer_id bigint NOT NULL,\n    total_cents integer NOT NULL,\n    status text NOT NULL DEFAULT 'pending',\n    created_at timestamptz NOT NULL DEFAULT now()\n);\n\nCREATE INDEX idx_orders_customer ON orders (customer_id);\nCREATE INDEX idx_orders_status ON orders (status);\n","sql","",[15,77,78,86,92,98,104,110,116,122,129,135],{"__ignoreMap":75},[79,80,83],"span",{"class":81,"line":82},"line",1,[79,84,85],{},"CREATE TABLE orders (\n",[79,87,89],{"class":81,"line":88},2,[79,90,91],{},"    id bigserial PRIMARY KEY,\n",[79,93,95],{"class":81,"line":94},3,[79,96,97],{},"    customer_id bigint NOT NULL,\n",[79,99,101],{"class":81,"line":100},4,[79,102,103],{},"    total_cents integer NOT NULL,\n",[79,105,107],{"class":81,"line":106},5,[79,108,109],{},"    status text NOT NULL DEFAULT 'pending',\n",[79,111,113],{"class":81,"line":112},6,[79,114,115],{},"    created_at timestamptz NOT NULL DEFAULT 
now()\n",[79,117,119],{"class":81,"line":118},7,[79,120,121],{},");\n",[79,123,125],{"class":81,"line":124},8,[79,126,128],{"emptyLinePlaceholder":127},true,"\n",[79,130,132],{"class":81,"line":131},9,[79,133,134],{},"CREATE INDEX idx_orders_customer ON orders (customer_id);\n",[79,136,138],{"class":81,"line":137},10,[79,139,140],{},"CREATE INDEX idx_orders_status ON orders (status);\n",[11,142,143],{},"Now run:",[70,145,149],{"className":146,"code":147,"language":148,"meta":75,"style":75},"language-bash shiki shiki-themes github-light github-dark","pg_dump --schema-only mydb > dump1.sql\npg_dump --schema-only mydb > dump2.sql\ndiff dump1.sql dump2.sql\n","bash",[15,150,151,171,184],{"__ignoreMap":75},[79,152,153,156,160,164,168],{"class":81,"line":82},[79,154,17],{"class":155},"sScJk",[79,157,159],{"class":158},"sj4cs"," --schema-only",[79,161,163],{"class":162},"sZZnC"," mydb",[79,165,167],{"class":166},"szBVR"," >",[79,169,170],{"class":162}," dump1.sql\n",[79,172,173,175,177,179,181],{"class":81,"line":88},[79,174,17],{"class":155},[79,176,159],{"class":158},[79,178,163],{"class":162},[79,180,167],{"class":166},[79,182,183],{"class":162}," dump2.sql\n",[79,185,186,189,192],{"class":81,"line":94},[79,187,188],{"class":155},"diff",[79,190,191],{"class":162}," dump1.sql",[79,193,183],{"class":162},[11,195,196],{},"Sometimes you will get no diff. Sometimes you will. The more tables, foreign keys, indexes, and constraints you add, the more likely it is that ordering differences start to show up.",[11,198,199],{},"This gets worse when you compare environments. Production has years of history behind it. Staging may have been recreated last week. Even if the logical schema is the same, the catalog layout underneath may not be. 
That can be enough to produce different output ordering.",[44,201,203],{"id":202},"a-common-failure-mode","A common failure mode",[11,205,206],{},"Imagine a table with multiple check constraints:",[70,208,210],{"className":72,"code":209,"language":74,"meta":75,"style":75},"CREATE TABLE payments (\n    id bigserial PRIMARY KEY,\n    amount_cents integer NOT NULL,\n    currency char(3) NOT NULL,\n    status text NOT NULL,\n    CONSTRAINT chk_amount CHECK (amount_cents > 0),\n    CONSTRAINT chk_currency CHECK (currency IN ('USD', 'EUR', 'GBP')),\n    CONSTRAINT chk_status CHECK (status IN ('pending', 'processed', 'failed'))\n);\n",[15,211,212,217,221,226,231,236,241,246,251],{"__ignoreMap":75},[79,213,214],{"class":81,"line":82},[79,215,216],{},"CREATE TABLE payments (\n",[79,218,219],{"class":81,"line":88},[79,220,91],{},[79,222,223],{"class":81,"line":94},[79,224,225],{},"    amount_cents integer NOT NULL,\n",[79,227,228],{"class":81,"line":100},[79,229,230],{},"    currency char(3) NOT NULL,\n",[79,232,233],{"class":81,"line":106},[79,234,235],{},"    status text NOT NULL,\n",[79,237,238],{"class":81,"line":112},[79,239,240],{},"    CONSTRAINT chk_amount CHECK (amount_cents > 0),\n",[79,242,243],{"class":81,"line":118},[79,244,245],{},"    CONSTRAINT chk_currency CHECK (currency IN ('USD', 'EUR', 'GBP')),\n",[79,247,248],{"class":81,"line":124},[79,249,250],{},"    CONSTRAINT chk_status CHECK (status IN ('pending', 'processed', 'failed'))\n",[79,252,253],{"class":81,"line":131},[79,254,121],{},[11,256,257,258,261,262,261,265,268],{},"In one dump, the constraints may appear in this order: ",[15,259,260],{},"chk_amount",", ",[15,263,264],{},"chk_currency",[15,266,267],{},"chk_status",". In another, they may appear differently.",[11,270,271],{},"A text diff will make that look like change, even though the schema has not actually changed at all.",[11,273,274],{},"Multiply that across hundreds of tables and constraints, and you end up with pages of noise. 
The real drift signal gets buried inside false positives.",[44,276,47,278,280],{"id":277},"why-pg_catalog-works-better",[15,279,41],{}," works better",[11,282,283,285],{},[15,284,41],{}," is PostgreSQL's system catalog. It stores metadata about tables, columns, constraints, indexes, functions, types, and more.",[11,287,288,289,292],{},"The advantage is simple: you can query it directly and apply your own ",[15,290,291],{},"ORDER BY",".",[11,294,295],{},"That gives you deterministic output.",[11,297,298,299,301],{},"If the schema has not changed, the query result will come back in the same order every time. That makes it much better for drift detection than comparing raw ",[15,300,17],{}," output.",[44,303,305],{"id":304},"core-catalog-tables-worth-querying","Core catalog tables worth querying",[11,307,308],{},"For schema snapshots, these are the most useful catalog tables:",[310,311,312,325],"table",{},[313,314,315],"thead",{},[316,317,318,322],"tr",{},[319,320,321],"th",{},"Catalog table",[319,323,324],{},"What it contains",[326,327,328,339,349,359,369,379,389,399,409],"tbody",{},[316,329,330,336],{},[331,332,333],"td",{},[15,334,335],{},"pg_namespace",[331,337,338],{},"Schemas",[316,340,341,346],{},[331,342,343],{},[15,344,345],{},"pg_class",[331,347,348],{},"Tables, views, indexes, sequences",[316,350,351,356],{},[331,352,353],{},[15,354,355],{},"pg_attribute",[331,357,358],{},"Columns",[316,360,361,366],{},[331,362,363],{},[15,364,365],{},"pg_constraint",[331,367,368],{},"Primary keys, foreign keys, unique and check constraints",[316,370,371,376],{},[331,372,373],{},[15,374,375],{},"pg_index",[331,377,378],{},"Index metadata",[316,380,381,386],{},[331,382,383],{},[15,384,385],{},"pg_proc",[331,387,388],{},"Functions and procedures",[316,390,391,396],{},[331,392,393],{},[15,394,395],{},"pg_trigger",[331,397,398],{},"Triggers",[316,400,401,406],{},[331,402,403],{},[15,404,405],{},"pg_extension",[331,407,408],{},"Installed 
extensions",[316,410,411,416],{},[331,412,413],{},[15,414,415],{},"pg_type",[331,417,418],{},"Custom types and enums",[44,420,422],{"id":421},"deterministic-column-snapshot","Deterministic column snapshot",[11,424,425],{},"A column snapshot query looks like this:",[70,427,429],{"className":72,"code":428,"language":74,"meta":75,"style":75},"SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    a.attname AS column_name,\n    a.attnum AS ordinal_position,\n    pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n    a.attnotnull AS is_not_null,\n    pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\nFROM pg_catalog.pg_attribute a\nJOIN pg_catalog.pg_class c ON a.attrelid = c.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nLEFT JOIN pg_catalog.pg_attrdef d\n    ON a.attrelid = d.adrelid AND a.attnum = d.adnum\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n  AND c.relkind IN ('r', 'p')\n  AND a.attnum > 0\n  AND NOT a.attisdropped\nORDER BY n.nspname, c.relname, a.attnum;\n",[15,430,431,436,441,446,451,456,461,466,471,476,481,487,493,499,505,511,517,523],{"__ignoreMap":75},[79,432,433],{"class":81,"line":82},[79,434,435],{},"SELECT\n",[79,437,438],{"class":81,"line":88},[79,439,440],{},"    n.nspname AS schema_name,\n",[79,442,443],{"class":81,"line":94},[79,444,445],{},"    c.relname AS table_name,\n",[79,447,448],{"class":81,"line":100},[79,449,450],{},"    a.attname AS column_name,\n",[79,452,453],{"class":81,"line":106},[79,454,455],{},"    a.attnum AS ordinal_position,\n",[79,457,458],{"class":81,"line":112},[79,459,460],{},"    pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n",[79,462,463],{"class":81,"line":118},[79,464,465],{},"    a.attnotnull AS is_not_null,\n",[79,467,468],{"class":81,"line":124},[79,469,470],{},"    pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\n",[79,472,473],{"class":81,"line":131},[79,474,475],{},"FROM pg_catalog.pg_attribute 
a\n",[79,477,478],{"class":81,"line":137},[79,479,480],{},"JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n",[79,482,484],{"class":81,"line":483},11,[79,485,486],{},"JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n",[79,488,490],{"class":81,"line":489},12,[79,491,492],{},"LEFT JOIN pg_catalog.pg_attrdef d\n",[79,494,496],{"class":81,"line":495},13,[79,497,498],{},"    ON a.attrelid = d.adrelid AND a.attnum = d.adnum\n",[79,500,502],{"class":81,"line":501},14,[79,503,504],{},"WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n",[79,506,508],{"class":81,"line":507},15,[79,509,510],{},"  AND c.relkind IN ('r', 'p')\n",[79,512,514],{"class":81,"line":513},16,[79,515,516],{},"  AND a.attnum > 0\n",[79,518,520],{"class":81,"line":519},17,[79,521,522],{},"  AND NOT a.attisdropped\n",[79,524,526],{"class":81,"line":525},18,[79,527,528],{},"ORDER BY n.nspname, c.relname, a.attnum;\n",[11,530,531],{},"The important part is not just what you query, but how you order it. 
Once the ordering is explicit, the output becomes stable enough for reliable comparison.",[44,533,535],{"id":534},"constraints-and-indexes-follow-the-same-pattern","Constraints and indexes follow the same pattern",[11,537,538],{},"Constraints:",[70,540,542],{"className":72,"code":541,"language":74,"meta":75,"style":75},"SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    con.conname AS constraint_name,\n    con.contype AS constraint_type,\n    pg_catalog.pg_get_constraintdef(con.oid, true) AS definition\nFROM pg_catalog.pg_constraint con\nJOIN pg_catalog.pg_class c ON con.conrelid = c.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema')\nORDER BY n.nspname, c.relname, con.conname;\n",[15,543,544,548,552,556,561,566,571,576,581,585,590],{"__ignoreMap":75},[79,545,546],{"class":81,"line":82},[79,547,435],{},[79,549,550],{"class":81,"line":88},[79,551,440],{},[79,553,554],{"class":81,"line":94},[79,555,445],{},[79,557,558],{"class":81,"line":100},[79,559,560],{},"    con.conname AS constraint_name,\n",[79,562,563],{"class":81,"line":106},[79,564,565],{},"    con.contype AS constraint_type,\n",[79,567,568],{"class":81,"line":112},[79,569,570],{},"    pg_catalog.pg_get_constraintdef(con.oid, true) AS definition\n",[79,572,573],{"class":81,"line":118},[79,574,575],{},"FROM pg_catalog.pg_constraint con\n",[79,577,578],{"class":81,"line":124},[79,579,580],{},"JOIN pg_catalog.pg_class c ON con.conrelid = c.oid\n",[79,582,583],{"class":81,"line":131},[79,584,486],{},[79,586,587],{"class":81,"line":137},[79,588,589],{},"WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')\n",[79,591,592],{"class":81,"line":483},[79,593,594],{},"ORDER BY n.nspname, c.relname, con.conname;\n",[11,596,597],{},"Indexes:",[70,599,601],{"className":72,"code":600,"language":74,"meta":75,"style":75},"SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    i.relname AS index_name,\n    
pg_catalog.pg_get_indexdef(ix.indexrelid) AS index_definition\nFROM pg_catalog.pg_index ix\nJOIN pg_catalog.pg_class c ON ix.indrelid = c.oid\nJOIN pg_catalog.pg_class i ON ix.indexrelid = i.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\nORDER BY n.nspname, c.relname, i.relname;\n",[15,602,603,607,611,615,620,625,630,635,640,644,648],{"__ignoreMap":75},[79,604,605],{"class":81,"line":82},[79,606,435],{},[79,608,609],{"class":81,"line":88},[79,610,440],{},[79,612,613],{"class":81,"line":94},[79,614,445],{},[79,616,617],{"class":81,"line":100},[79,618,619],{},"    i.relname AS index_name,\n",[79,621,622],{"class":81,"line":106},[79,623,624],{},"    pg_catalog.pg_get_indexdef(ix.indexrelid) AS index_definition\n",[79,626,627],{"class":81,"line":112},[79,628,629],{},"FROM pg_catalog.pg_index ix\n",[79,631,632],{"class":81,"line":118},[79,633,634],{},"JOIN pg_catalog.pg_class c ON ix.indrelid = c.oid\n",[79,636,637],{"class":81,"line":124},[79,638,639],{},"JOIN pg_catalog.pg_class i ON ix.indexrelid = i.oid\n",[79,641,642],{"class":81,"line":131},[79,643,486],{},[79,645,646],{"class":81,"line":137},[79,647,504],{},[79,649,650],{"class":81,"line":483},[79,651,652],{},"ORDER BY n.nspname, c.relname, i.relname;\n",[44,654,656],{"id":655},"a-fingerprint-is-even-better-than-a-raw-diff","A fingerprint is even better than a raw diff",[11,658,659],{},"Once your snapshot output is deterministic, you can go one step further and compute a schema fingerprint.",[11,661,662],{},"The idea is straightforward:",[664,665,666,670,673],"ol",{},[667,668,669],"li",{},"Capture the ordered metadata",[667,671,672],{},"Convert it into a canonical string",[667,674,675],{},"Hash it with SHA-256",[11,677,678],{},"If the fingerprint is unchanged, the schema is unchanged. 
If it differs, you know something moved and you can run a deeper diff.",[11,680,681],{},"That approach is much more efficient for continuous monitoring. Most of the time you only need to compare a short hash instead of full schema text.",[11,683,684],{},"You can compute the fingerprint directly in SQL:",[70,686,688],{"className":72,"code":687,"language":74,"meta":75,"style":75},"SELECT encode(\n    sha256(\n        string_agg(\n            row_to_text,\n            E'\\n' ORDER BY row_to_text\n        )::bytea\n    ),\n    'hex'\n) AS schema_fingerprint\nFROM (\n    SELECT format('%s.%s.%s.%s.%s.%s',\n        n.nspname, c.relname, a.attname, a.attnum,\n        pg_catalog.format_type(a.atttypid, a.atttypmod),\n        a.attnotnull\n    ) AS row_to_text\n    FROM pg_catalog.pg_attribute a\n    JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n    JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n      AND c.relkind IN ('r', 'p')\n      AND a.attnum > 0\n      AND NOT a.attisdropped\n) sub;\n",[15,689,690,695,700,705,710,715,720,725,730,735,740,745,750,755,760,765,770,775,780,786,792,798,804],{"__ignoreMap":75},[79,691,692],{"class":81,"line":82},[79,693,694],{},"SELECT encode(\n",[79,696,697],{"class":81,"line":88},[79,698,699],{},"    sha256(\n",[79,701,702],{"class":81,"line":94},[79,703,704],{},"        string_agg(\n",[79,706,707],{"class":81,"line":100},[79,708,709],{},"            row_to_text,\n",[79,711,712],{"class":81,"line":106},[79,713,714],{},"            E'\\n' ORDER BY row_to_text\n",[79,716,717],{"class":81,"line":112},[79,718,719],{},"        )::bytea\n",[79,721,722],{"class":81,"line":118},[79,723,724],{},"    ),\n",[79,726,727],{"class":81,"line":124},[79,728,729],{},"    'hex'\n",[79,731,732],{"class":81,"line":131},[79,733,734],{},") AS schema_fingerprint\n",[79,736,737],{"class":81,"line":137},[79,738,739],{},"FROM 
(\n",[79,741,742],{"class":81,"line":483},[79,743,744],{},"    SELECT format('%s.%s.%s.%s.%s.%s',\n",[79,746,747],{"class":81,"line":489},[79,748,749],{},"        n.nspname, c.relname, a.attname, a.attnum,\n",[79,751,752],{"class":81,"line":495},[79,753,754],{},"        pg_catalog.format_type(a.atttypid, a.atttypmod),\n",[79,756,757],{"class":81,"line":501},[79,758,759],{},"        a.attnotnull\n",[79,761,762],{"class":81,"line":507},[79,763,764],{},"    ) AS row_to_text\n",[79,766,767],{"class":81,"line":513},[79,768,769],{},"    FROM pg_catalog.pg_attribute a\n",[79,771,772],{"class":81,"line":519},[79,773,774],{},"    JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n",[79,776,777],{"class":81,"line":525},[79,778,779],{},"    JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n",[79,781,783],{"class":81,"line":782},19,[79,784,785],{},"    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n",[79,787,789],{"class":81,"line":788},20,[79,790,791],{},"      AND c.relkind IN ('r', 'p')\n",[79,793,795],{"class":81,"line":794},21,[79,796,797],{},"      AND a.attnum > 0\n",[79,799,801],{"class":81,"line":800},22,[79,802,803],{},"      AND NOT a.attisdropped\n",[79,805,807],{"class":81,"line":806},23,[79,808,809],{},") sub;\n",[44,811,813],{"id":812},"performance-is-another-advantage","Performance is another advantage",[11,815,816],{},"This approach is not just cleaner. 
It is often faster.",[11,818,819],{},"On a database with 350 tables, 1,200 columns, 400 constraints, and 500 indexes:",[310,821,822,838],{},[313,823,824],{},[316,825,826,829,832,835],{},[319,827,828],{},"Approach",[319,830,831],{},"Time",[319,833,834],{},"Output size",[319,836,837],{},"Deterministic",[326,839,840,856,872],{},[316,841,842,847,850,853],{},[331,843,844],{},[15,845,846],{},"pg_dump --schema-only",[331,848,849],{},"1.8s",[331,851,852],{},"245 KB",[331,854,855],{},"No",[316,857,858,863,866,869],{},[331,859,860,862],{},[15,861,41],{}," queries",[331,864,865],{},"0.3s",[331,867,868],{},"82 KB",[331,870,871],{},"Yes",[316,873,874,879,882,885],{},[331,875,876,878],{},[15,877,41],{}," + SHA-256",[331,880,881],{},"0.4s",[331,883,884],{},"64 bytes",[331,886,871],{},[11,888,889,891],{},[15,890,17],{}," has to resolve dependencies, order DDL for restore, and format everything as valid SQL. Catalog queries skip that overhead and pull only the metadata you actually need.",[11,893,894],{},"The fingerprint comparison is the key insight for continuous monitoring at scale. You are comparing a 64-character string, not 245 KB of schema text. If it matches, you are done in milliseconds. 
If it differs, you run the full queries to find what changed.",[44,896,898],{"id":897},"a-practical-java-approach","A practical Java approach",[11,900,901,902,905,906,908],{},"At Arcnull, this pattern is implemented in Java through a ",[15,903,904],{},"CatalogService"," that queries ",[15,907,41],{},", builds canonical snapshots, and computes a fingerprint:",[70,910,914],{"className":911,"code":912,"language":913,"meta":75,"style":75},"language-java shiki shiki-themes github-light github-dark","@Service\npublic class CatalogService {\n\n    private final JdbcTemplate jdbc;\n\n    public List\u003CColumnSnapshot> captureColumns(String schema) {\n        return jdbc.query(\"\"\"\n            SELECT\n                n.nspname AS schema_name,\n                c.relname AS table_name,\n                a.attname AS column_name,\n                a.attnum AS ordinal_position,\n                pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n                a.attnotnull AS is_not_null,\n                pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\n            FROM pg_catalog.pg_attribute a\n            JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n            JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n            LEFT JOIN pg_catalog.pg_attrdef d\n                ON a.attrelid = d.adrelid AND a.attnum = d.adnum\n            WHERE n.nspname = ?\n              AND c.relkind IN ('r','p')\n              AND a.attnum > 0\n              AND NOT a.attisdropped\n            ORDER BY n.nspname, c.relname, a.attnum\n            \"\"\",\n            (rs, rowNum) -> new ColumnSnapshot(\n                rs.getString(\"schema_name\"),\n                rs.getString(\"table_name\"),\n                rs.getString(\"column_name\"),\n                rs.getInt(\"ordinal_position\"),\n                rs.getString(\"data_type\"),\n                rs.getBoolean(\"is_not_null\"),\n                rs.getString(\"column_default\")\n            ),\n    
        schema\n        );\n    }\n\n    public String computeFingerprint(String schema) {\n        String combined = Stream.of(\n                captureColumns(schema).stream()\n                    .map(ColumnSnapshot::toCanonicalString),\n                captureConstraints(schema).stream()\n                    .map(ConstraintSnapshot::toCanonicalString),\n                captureIndexes(schema).stream()\n                    .map(IndexSnapshot::toCanonicalString)\n            )\n            .flatMap(Function.identity())\n            .collect(Collectors.joining(\"\\n\"));\n\n        return Hashing.sha256()\n            .hashString(combined, StandardCharsets.UTF_8)\n            .toString();\n    }\n}\n","java",[15,915,916,921,926,930,935,939,944,949,954,959,964,969,974,979,984,989,994,999,1004,1009,1014,1019,1024,1029,1035,1041,1047,1053,1059,1065,1071,1077,1083,1089,1095,1101,1107,1113,1119,1124,1130,1136,1142,1148,1154,1160,1166,1172,1178,1184,1190,1195,1201,1207,1213,1218],{"__ignoreMap":75},[79,917,918],{"class":81,"line":82},[79,919,920],{},"@Service\n",[79,922,923],{"class":81,"line":88},[79,924,925],{},"public class CatalogService {\n",[79,927,928],{"class":81,"line":94},[79,929,128],{"emptyLinePlaceholder":127},[79,931,932],{"class":81,"line":100},[79,933,934],{},"    private final JdbcTemplate jdbc;\n",[79,936,937],{"class":81,"line":106},[79,938,128],{"emptyLinePlaceholder":127},[79,940,941],{"class":81,"line":112},[79,942,943],{},"    public List\u003CColumnSnapshot> captureColumns(String schema) {\n",[79,945,946],{"class":81,"line":118},[79,947,948],{},"        return jdbc.query(\"\"\"\n",[79,950,951],{"class":81,"line":124},[79,952,953],{},"            SELECT\n",[79,955,956],{"class":81,"line":131},[79,957,958],{},"                n.nspname AS schema_name,\n",[79,960,961],{"class":81,"line":137},[79,962,963],{},"                c.relname AS table_name,\n",[79,965,966],{"class":81,"line":483},[79,967,968],{},"                a.attname AS 
column_name,\n",[79,970,971],{"class":81,"line":489},[79,972,973],{},"                a.attnum AS ordinal_position,\n",[79,975,976],{"class":81,"line":495},[79,977,978],{},"                pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n",[79,980,981],{"class":81,"line":501},[79,982,983],{},"                a.attnotnull AS is_not_null,\n",[79,985,986],{"class":81,"line":507},[79,987,988],{},"                pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\n",[79,990,991],{"class":81,"line":513},[79,992,993],{},"            FROM pg_catalog.pg_attribute a\n",[79,995,996],{"class":81,"line":519},[79,997,998],{},"            JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n",[79,1000,1001],{"class":81,"line":525},[79,1002,1003],{},"            JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n",[79,1005,1006],{"class":81,"line":782},[79,1007,1008],{},"            LEFT JOIN pg_catalog.pg_attrdef d\n",[79,1010,1011],{"class":81,"line":788},[79,1012,1013],{},"                ON a.attrelid = d.adrelid AND a.attnum = d.adnum\n",[79,1015,1016],{"class":81,"line":794},[79,1017,1018],{},"            WHERE n.nspname = ?\n",[79,1020,1021],{"class":81,"line":800},[79,1022,1023],{},"              AND c.relkind IN ('r','p')\n",[79,1025,1026],{"class":81,"line":806},[79,1027,1028],{},"              AND a.attnum > 0\n",[79,1030,1032],{"class":81,"line":1031},24,[79,1033,1034],{},"              AND NOT a.attisdropped\n",[79,1036,1038],{"class":81,"line":1037},25,[79,1039,1040],{},"            ORDER BY n.nspname, c.relname, a.attnum\n",[79,1042,1044],{"class":81,"line":1043},26,[79,1045,1046],{},"            \"\"\",\n",[79,1048,1050],{"class":81,"line":1049},27,[79,1051,1052],{},"            (rs, rowNum) -> new ColumnSnapshot(\n",[79,1054,1056],{"class":81,"line":1055},28,[79,1057,1058],{},"                rs.getString(\"schema_name\"),\n",[79,1060,1062],{"class":81,"line":1061},29,[79,1063,1064],{},"                
rs.getString(\"table_name\"),\n",[79,1066,1068],{"class":81,"line":1067},30,[79,1069,1070],{},"                rs.getString(\"column_name\"),\n",[79,1072,1074],{"class":81,"line":1073},31,[79,1075,1076],{},"                rs.getInt(\"ordinal_position\"),\n",[79,1078,1080],{"class":81,"line":1079},32,[79,1081,1082],{},"                rs.getString(\"data_type\"),\n",[79,1084,1086],{"class":81,"line":1085},33,[79,1087,1088],{},"                rs.getBoolean(\"is_not_null\"),\n",[79,1090,1092],{"class":81,"line":1091},34,[79,1093,1094],{},"                rs.getString(\"column_default\")\n",[79,1096,1098],{"class":81,"line":1097},35,[79,1099,1100],{},"            ),\n",[79,1102,1104],{"class":81,"line":1103},36,[79,1105,1106],{},"            schema\n",[79,1108,1110],{"class":81,"line":1109},37,[79,1111,1112],{},"        );\n",[79,1114,1116],{"class":81,"line":1115},38,[79,1117,1118],{},"    }\n",[79,1120,1122],{"class":81,"line":1121},39,[79,1123,128],{"emptyLinePlaceholder":127},[79,1125,1127],{"class":81,"line":1126},40,[79,1128,1129],{},"    public String computeFingerprint(String schema) {\n",[79,1131,1133],{"class":81,"line":1132},41,[79,1134,1135],{},"        String combined = Stream.of(\n",[79,1137,1139],{"class":81,"line":1138},42,[79,1140,1141],{},"                captureColumns(schema).stream()\n",[79,1143,1145],{"class":81,"line":1144},43,[79,1146,1147],{},"                    .map(ColumnSnapshot::toCanonicalString),\n",[79,1149,1151],{"class":81,"line":1150},44,[79,1152,1153],{},"                captureConstraints(schema).stream()\n",[79,1155,1157],{"class":81,"line":1156},45,[79,1158,1159],{},"                    .map(ConstraintSnapshot::toCanonicalString),\n",[79,1161,1163],{"class":81,"line":1162},46,[79,1164,1165],{},"                captureIndexes(schema).stream()\n",[79,1167,1169],{"class":81,"line":1168},47,[79,1170,1171],{},"                    .map(IndexSnapshot::toCanonicalString)\n",[79,1173,1175],{"class":81,"line":1174},48,[79,1176,1177],{}," 
           )\n",[79,1179,1181],{"class":81,"line":1180},49,[79,1182,1183],{},"            .flatMap(Function.identity())\n",[79,1185,1187],{"class":81,"line":1186},50,[79,1188,1189],{},"            .collect(Collectors.joining(\"\\n\"));\n",[79,1191,1193],{"class":81,"line":1192},51,[79,1194,128],{"emptyLinePlaceholder":127},[79,1196,1198],{"class":81,"line":1197},52,[79,1199,1200],{},"        return Hashing.sha256()\n",[79,1202,1204],{"class":81,"line":1203},53,[79,1205,1206],{},"            .hashString(combined, StandardCharsets.UTF_8)\n",[79,1208,1210],{"class":81,"line":1209},54,[79,1211,1212],{},"            .toString();\n",[79,1214,1216],{"class":81,"line":1215},55,[79,1217,1118],{},[79,1219,1221],{"class":81,"line":1220},56,[79,1222,1223],{},"}\n",[11,1225,1226],{},"The design choice that matters: each snapshot object produces a stable string representation. Once you have that, the fingerprint is simple. Capture ordered metadata, serialize consistently, hash it, compare against the previous scan, and only run a full semantic diff when the fingerprint changes.",[44,1228,1230],{"id":1229},"when-to-use-each-tool","When to use each tool",[11,1232,1233,1234,1236],{},"The real takeaway is not that ",[15,1235,17],{}," is bad. 
It solves a different problem.",[11,1238,1239,1240,1242],{},"Use ",[15,1241,17],{}," when you need to:",[1244,1245,1246,1249,1252,1255],"ul",{},[667,1247,1248],{},"Create backups for disaster recovery",[667,1250,1251],{},"Move schemas between PostgreSQL versions",[667,1253,1254],{},"Generate SQL for manual inspection",[667,1256,1257],{},"Clone or restore databases",[11,1259,1239,1260,1262],{},[15,1261,41],{}," queries when you need to:",[1244,1264,1265,1268,1271,1274],{},[667,1266,1267],{},"Detect schema drift between environments",[667,1269,1270],{},"Compute deterministic fingerprints",[667,1272,1273],{},"Build automated schema comparison pipelines",[667,1275,1276],{},"Monitor production schemas for untracked changes",[11,1278,1279,1280,1282],{},"These tools are complementary. Problems start when ",[15,1281,17],{}," is used for something it was never meant to do.",[44,1284,1286],{"id":1285},"final-takeaway","Final takeaway",[11,1288,1289,1290,1292],{},"The issue with ",[15,1291,17],{}," is not something you can fully fix with better diff tooling. The noise comes from the nature of the output itself.",[11,1294,1295,1296,1298],{},"If you want reliable schema snapshots, start closer to the source. Query ",[15,1297,41],{},", order the results explicitly, and compare deterministic output instead of restore-oriented SQL.",[11,1300,1301],{},"That gives you cleaner diffs, fewer false alarms, and a much more dependable foundation for drift detection.",[11,1303,1304,1305,1310],{},"If you are building this yourself, start there. 
If you want it production-ready with scanning, diffing, CI integration, and alerts, that is what ",[1306,1307,1309],"a",{"href":1308},"\u002Fproducts\u002Fdrift-scanner","Drift Scanner"," is built to handle.",[11,1312,1313],{},"Your future self debugging a 3 AM deployment failure will thank you.",[1315,1316,1317],"style",{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .sScJk, html code.shiki .sScJk{--shiki-default:#6F42C1;--shiki-dark:#B392F0}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .szBVR, html code.shiki .szBVR{--shiki-default:#D73A49;--shiki-dark:#F97583}",{"title":75,"searchDepth":88,"depth":88,"links":1319},[1320,1322,1323,1325,1326,1327,1328,1329,1330,1331,1332],{"id":46,"depth":88,"text":1321},"Why pg_dump creates noisy diffs",{"id":202,"depth":88,"text":203},{"id":277,"depth":88,"text":1324},"Why pg_catalog works 
better",{"id":304,"depth":88,"text":305},{"id":421,"depth":88,"text":422},{"id":534,"depth":88,"text":535},{"id":655,"depth":88,"text":656},{"id":812,"depth":88,"text":813},{"id":897,"depth":88,"text":898},{"id":1229,"depth":88,"text":1230},{"id":1285,"depth":88,"text":1286},"Why snapshot ordering matters for schema drift detection, and why querying PostgreSQL metadata directly is often the more reliable approach.","md",[41,17,1336,1337],"postgresql schema snapshot","deterministic schema diff",{},"\u002Fblog\u002Fpg-catalog-vs-pg-dump-schema-snapshots","2026-04-21","6",{"title":5,"description":1333},"pg-catalog-vs-pg-dump-schema-snapshots","blog\u002Fpg-catalog-vs-pg-dump-schema-snapshots","2026-04-15","cHqUrY_M7AqfHT6TlFRC7V7wXKMebXxXZhviPxKd3bY",[1348,1600],{"id":1349,"title":1350,"author":6,"body":1351,"description":1584,"extension":1334,"keywords":1585,"meta":1592,"navigation":127,"path":1593,"publishedAt":1594,"readTime":1595,"seo":1596,"slug":1597,"stem":1598,"updatedAt":1345,"__hash__":1599},"blog\u002Fblog\u002Fmcp-server-security-governance-2026.md","MCP server security: why governance matters as agent tool use grows",{"type":8,"value":1352,"toc":1569},[1353,1356,1359,1362,1365,1368,1371,1375,1378,1381,1384,1387,1390,1394,1397,1400,1403,1420,1423,1427,1432,1435,1438,1442,1445,1448,1452,1455,1458,1462,1465,1468,1472,1475,1478,1482,1485,1488,1491,1494,1498,1501,1508,1514,1520,1526,1532,1536,1539,1542,1545,1549,1552,1555,1558,1561],[11,1354,1355],{},"The Model Context Protocol makes it much easier for AI agents to use real tools. That is a big step forward. 
It means the same model can query a database, call an internal API, update a CRM record, or trigger part of a deployment workflow through a common interface.",[11,1357,1358],{},"That simplicity is exactly why MCP is getting attention.",[11,1360,1361],{},"It is also why teams need to think more carefully about governance.",[11,1363,1364],{},"In many early MCP deployments, the focus is naturally on getting tools connected and workflows running. The security model often comes later. That creates a gap: agents can suddenly reach more systems, but the organization still has limited visibility into who is calling what, what data is being accessed, and which actions are being taken.",[11,1366,1367],{},"This is where governance starts to matter. Not because MCP is broken, but because a protocol for tool use does not automatically solve authentication, authorization, auditability, or rate control. Those still need to be designed.",[11,1369,1370],{},"This article looks at where the risks show up, why they grow quickly once multiple teams adopt MCP, and why a governance proxy is becoming a practical pattern for production environments.",[44,1372,1374],{"id":1373},"what-mcp-is-and-why-teams-are-adopting-it","What MCP is, and why teams are adopting it",[11,1376,1377],{},"MCP gives AI agents a standard way to discover and call tools. An MCP server exposes tools with defined schemas, and an agent can call those tools as part of a conversation or workflow.",[11,1379,1380],{},"That sounds simple, but it is powerful in practice.",[11,1382,1383],{},"Once tools are exposed through MCP, an agent can work across multiple systems without custom glue code for every integration. A support assistant might look up a customer, check an order, issue a refund, and send a follow-up email in one flow. A developer assistant might read logs, inspect a schema, and open a ticket.",[11,1385,1386],{},"That is the appeal. 
Tool use becomes much easier to standardize.",[11,1388,1389],{},"The catch is that standardizing tool access also makes it easier to scale access before governance has caught up.",[44,1391,1393],{"id":1392},"where-the-risk-starts","Where the risk starts",[11,1395,1396],{},"The risk usually does not begin with one obviously dangerous deployment. It starts with something useful and local.",[11,1398,1399],{},"A team creates an MCP server for one internal system. It helps with debugging, support, or reporting. Then another team starts using it for a different workflow. Then a third team connects it to an internal assistant. Before long, the same server is being used in several contexts, by different people, for different kinds of actions.",[11,1401,1402],{},"At that point, the question is no longer just whether the server works. The question becomes:",[1244,1404,1405,1408,1411,1414,1417],{},[667,1406,1407],{},"Who is allowed to call which tools?",[667,1409,1410],{},"Which actions require approval?",[667,1412,1413],{},"What gets logged?",[667,1415,1416],{},"How do you trace a tool call back to a user, a session, or a business purpose?",[667,1418,1419],{},"What happens when an agent behaves unexpectedly?",[11,1421,1422],{},"Without a governance layer, those questions usually get answered inconsistently, or not at all.",[44,1424,1426],{"id":1425},"five-practical-risks-of-ungoverned-mcp-servers","Five practical risks of ungoverned MCP servers",[1428,1429,1431],"h3",{"id":1430},"_1-prompt-injection-can-turn-tool-access-into-data-exposure","1. Prompt injection can turn tool access into data exposure",[11,1433,1434],{},"If an agent can read sensitive data and also take external actions, prompt injection becomes much more serious. A malicious instruction hidden in data can push the agent to retrieve information it should not expose, or send it somewhere it should not go.",[11,1436,1437],{},"What makes this hard is that the individual tool calls may look valid in isolation. 
The problem is the sequence and the intent behind it.",[1428,1439,1441],{"id":1440},"_2-tool-chaining-can-create-privilege-problems","2. Tool chaining can create privilege problems",[11,1443,1444],{},"One safe-looking tool call can become risky when combined with another. An agent may gather identifiers or context from one system, then use that context to make a higher-impact call somewhere else.",[11,1446,1447],{},"Traditional authorization checks are often request-by-request. Agent workflows are not always that simple. The surrounding chain matters.",[1428,1449,1451],{"id":1450},"_3-audit-trails-are-often-incomplete","3. Audit trails are often incomplete",[11,1453,1454],{},"Logging that \"tool X was called\" is not enough for most real-world governance needs. Teams usually need more context: who initiated the workflow, what data was touched, why the action happened, and whether a policy decision was involved.",[11,1456,1457],{},"Without that context, investigations get harder and compliance work gets weaker.",[1428,1459,1461],{"id":1460},"_4-runaway-agents-can-overwhelm-downstream-systems","4. Runaway agents can overwhelm downstream systems",[11,1463,1464],{},"Autonomous workflows can generate more volume than teams expect. Retries, loops, or poor workflow design can flood a server or the systems behind it.",[11,1466,1467],{},"MCP makes tool use easier. That also means mistakes can scale faster.",[1428,1469,1471],{"id":1470},"_5-sensitive-data-can-leak-through-responses-and-errors","5. Sensitive data can leak through responses and errors",[11,1473,1474],{},"Credentials, stack traces, or overly verbose error messages can escape through tool responses. An agent does not reliably understand that a token or secret is dangerous. 
It may repeat it, store it, or pass it along in another step.",[11,1476,1477],{},"That makes response filtering and redaction more important than many early implementations assume.",[44,1479,1481],{"id":1480},"why-a-governance-proxy-helps","Why a governance proxy helps",[11,1483,1484],{},"A governance proxy sits between the agent and the MCP servers it uses.",[11,1486,1487],{},"Instead of every server implementing its own access model, logging conventions, and rate controls, the proxy becomes the place where those decisions are applied consistently. It can authenticate the caller, evaluate policy, log the request with context, limit abuse, and filter sensitive data before a response goes back to the agent.",[11,1489,1490],{},"That does not remove all risk, but it gives teams a much better control point.",[11,1492,1493],{},"It also matches how organizations usually want to manage production systems: one place for policy, one place for visibility, and one place to investigate what happened.",[44,1495,1497],{"id":1496},"what-that-governance-layer-should-do","What that governance layer should do",[11,1499,1500],{},"At a minimum, a useful governance layer should handle a few things well.",[11,1502,1503,1507],{},[1504,1505,1506],"strong",{},"Authentication."," It should establish who is behind the request, whether that is a user, service, or agent session.",[11,1509,1510,1513],{},[1504,1511,1512],{},"Authorization."," It should evaluate whether a tool call is allowed based on identity, tool, parameters, and context.",[11,1515,1516,1519],{},[1504,1517,1518],{},"Audit logging."," It should record enough information to reconstruct what happened later, including the policy decision that was applied.",[11,1521,1522,1525],{},[1504,1523,1524],{},"Rate limiting."," It should keep one broken or badly behaved workflow from overwhelming shared systems.",[11,1527,1528,1531],{},[1504,1529,1530],{},"Data filtering."," It should be able to redact or block sensitive fields before they 
reach the model or the user.",[44,1533,1535],{"id":1534},"why-this-matters-now","Why this matters now",[11,1537,1538],{},"MCP adoption is growing because it solves a real integration problem. That is a good thing. But once agents move from answering questions to taking actions, governance stops being a nice extra and starts becoming part of the production architecture.",[11,1540,1541],{},"The teams that handle this well will not necessarily be the ones with the most tools. They will be the ones with the clearest controls around how those tools are used.",[11,1543,1544],{},"Teams that delay governance will usually end up choosing between slower adoption and weaker controls. Neither is a good position once the workflows are already running in production.",[44,1546,1548],{"id":1547},"conclusion","Conclusion",[11,1550,1551],{},"MCP makes agent tool use easier to standardize. Governance makes it safer to run at scale.",[11,1553,1554],{},"As more teams connect agents to databases, APIs, internal systems, and operational workflows, the main challenge is no longer just integration. It is visibility, control, and trust.",[11,1556,1557],{},"A governance proxy is one practical way to get there. 
It gives teams a central place to apply policy, capture audit context, and reduce the risk that comes with giving agents access to real systems.",[11,1559,1560],{},"If you are already experimenting with MCP in production, this is the point where governance starts to move from something to think about later to something worth designing for now.",[11,1562,1563,1564,1568],{},"If you are building this kind of control layer, ",[1306,1565,1567],{"href":1566},"\u002Fproducts\u002Fmcp-vault","MCP Vault"," is the direction we are exploring at Arcnull.",{"title":75,"searchDepth":88,"depth":88,"links":1570},[1571,1572,1573,1580,1581,1582,1583],{"id":1373,"depth":88,"text":1374},{"id":1392,"depth":88,"text":1393},{"id":1425,"depth":88,"text":1426,"children":1574},[1575,1576,1577,1578,1579],{"id":1430,"depth":94,"text":1431},{"id":1440,"depth":94,"text":1441},{"id":1450,"depth":94,"text":1451},{"id":1460,"depth":94,"text":1461},{"id":1470,"depth":94,"text":1471},{"id":1480,"depth":88,"text":1481},{"id":1496,"depth":88,"text":1497},{"id":1534,"depth":88,"text":1535},{"id":1547,"depth":88,"text":1548},"As more teams connect AI agents to real tools through MCP, access control, auditability, and oversight become practical production concerns. 
Here is why a governance layer is starting to matter.",[1586,1587,1588,1589,1590,1591],"mcp server security","mcp governance","ai agent security","model context protocol","mcp proxy","ai governance 2026",{},"\u002Fblog\u002Fmcp-server-security-governance-2026","2026-05-12","10 min read",{"title":1350,"description":1584},"mcp-server-security-governance-2026","blog\u002Fmcp-server-security-governance-2026","FM34I7GmFMb7DzrmfK2i88S8bqZEw5Kg5SWG9GAf-OE",{"id":1601,"title":1602,"author":6,"body":1603,"description":2505,"extension":1334,"keywords":2506,"meta":2510,"navigation":127,"path":2511,"publishedAt":2512,"readTime":2513,"seo":2514,"slug":2515,"stem":2516,"updatedAt":1345,"__hash__":2517},"blog\u002Fblog\u002Fdetect-postgresql-schema-changes-github-action.md","Detecting PostgreSQL schema changes with a GitHub Action",{"type":8,"value":1604,"toc":2480},[1605,1608,1615,1622,1625,1629,1632,1646,1649,1653,1659,1693,1699,1705,1709,1716,1932,1938,1942,1946,1952,1958,1962,1971,1977,1996,2002,2008,2011,2017,2023,2029,2035,2039,2042,2046,2053,2057,2060,2064,2067,2166,2169,2173,2176,2183,2186,2190,2194,2197,2276,2279,2283,2286,2306,2309,2313,2316,2319,2323,2326,2395,2409,2412,2416,2422,2428,2451,2455,2458,2468,2471,2477],[11,1606,1607],{},"Every team eventually gets burned by schema drift.",[11,1609,1610,1611,1614],{},"A migration passes in CI, looks fine in review, and then blows up in production because production is not actually in the state everyone thought it was. Maybe someone ran an ",[15,1612,1613],{},"ALTER TABLE"," during an incident. Maybe a DBA added an index to calm down a slow query. Either way, your migration history says one thing, and the database says another.",[11,1616,1617,1618,1621],{},"The ",[15,1619,1620],{},"arcnull-hq\u002Fschema-drift-action"," is meant to catch that before a pull request gets merged. 
It compares the schema changes introduced by your PR against the real state of your target database and flags anything that could break or drift from what your migrations expect.",[11,1623,1624],{},"In this walkthrough I will show you how to set it up in GitHub Actions, how to configure it safely for PostgreSQL, and what to look for when it reports drift.",[44,1626,1628],{"id":1627},"what-you-need-before-you-start","What you need before you start",[11,1630,1631],{},"A few basics need to be in place:",[1244,1633,1634,1637,1640,1643],{},[667,1635,1636],{},"A PostgreSQL database to compare against — usually production or staging",[667,1638,1639],{},"A read-only PostgreSQL user the action can connect with",[667,1641,1642],{},"That connection string stored as a GitHub Actions secret",[667,1644,1645],{},"Migration files in your repository — Flyway, Liquibase, Alembic, or plain SQL all work",[11,1647,1648],{},"The action only reads schema metadata from PostgreSQL system catalogs. It does not need write access to anything.",[44,1650,1652],{"id":1651},"step-1-create-a-read-only-database-user","Step 1: Create a read-only database user",[11,1654,1655,1656,1658],{},"The action needs to inspect ",[15,1657,41],{}," to understand the current schema state. 
Give it a dedicated user with the minimum access it actually needs:",[70,1660,1662],{"className":72,"code":1661,"language":74,"meta":75,"style":75},"CREATE ROLE schema_drift_reader\n  WITH LOGIN PASSWORD 'your-secure-password';\nGRANT CONNECT ON DATABASE your_database\n  TO schema_drift_reader;\nGRANT USAGE ON SCHEMA public\n  TO schema_drift_reader;\n",[15,1663,1664,1669,1674,1679,1684,1689],{"__ignoreMap":75},[79,1665,1666],{"class":81,"line":82},[79,1667,1668],{},"CREATE ROLE schema_drift_reader\n",[79,1670,1671],{"class":81,"line":88},[79,1672,1673],{},"  WITH LOGIN PASSWORD 'your-secure-password';\n",[79,1675,1676],{"class":81,"line":94},[79,1677,1678],{},"GRANT CONNECT ON DATABASE your_database\n",[79,1680,1681],{"class":81,"line":100},[79,1682,1683],{},"  TO schema_drift_reader;\n",[79,1685,1686],{"class":81,"line":106},[79,1687,1688],{},"GRANT USAGE ON SCHEMA public\n",[79,1690,1691],{"class":81,"line":112},[79,1692,1683],{},[11,1694,1695,1696,1698],{},"Note: ",[15,1697,41],{}," is readable by all PostgreSQL users by default — no explicit GRANT is needed. 
The above three statements are sufficient.",[11,1700,1701,1702,292],{},"Store the connection string as a GitHub Actions secret named ",[15,1703,1704],{},"DRIFT_DATABASE_URL",[44,1706,1708],{"id":1707},"step-2-add-the-workflow-file","Step 2: Add the workflow file",[11,1710,1711,1712,1715],{},"Create ",[15,1713,1714],{},".github\u002Fworkflows\u002Fschema-drift.yml",":",[70,1717,1721],{"className":1718,"code":1719,"language":1720,"meta":75,"style":75},"language-yaml shiki shiki-themes github-light github-dark","name: Schema Drift Check\n\non:\n  pull_request:\n    paths:\n      - 'src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\u002F**'\n      - 'migrations\u002F**'\n      - 'alembic\u002Fversions\u002F**'\n      - 'sql\u002F**'\n\njobs:\n  schema-drift-check:\n    name: Detect Schema Drift\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout repository\n        uses: actions\u002Fcheckout@v4\n\n      - name: Run Arcnull Schema Drift Scanner\n        uses: arcnull-hq\u002Fschema-drift-action@v1\n        with:\n          database-url: ${{ secrets.DRIFT_DATABASE_URL }}\n          migration-path: src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n          schema: public\n          fail-on: breaking\n","yaml",[15,1722,1723,1736,1740,1748,1755,1762,1770,1777,1784,1791,1795,1802,1809,1819,1829,1833,1840,1851,1861,1865,1876,1885,1892,1902,1912,1922],{"__ignoreMap":75},[79,1724,1725,1729,1733],{"class":81,"line":82},[79,1726,1728],{"class":1727},"s9eBZ","name",[79,1730,1732],{"class":1731},"sVt8B",": ",[79,1734,1735],{"class":162},"Schema Drift Check\n",[79,1737,1738],{"class":81,"line":88},[79,1739,128],{"emptyLinePlaceholder":127},[79,1741,1742,1745],{"class":81,"line":94},[79,1743,1744],{"class":158},"on",[79,1746,1747],{"class":1731},":\n",[79,1749,1750,1753],{"class":81,"line":100},[79,1751,1752],{"class":1727},"  pull_request",[79,1754,1747],{"class":1731},[79,1756,1757,1760],{"class":81,"line":106},[79,1758,1759],{"class":1727},"    
paths",[79,1761,1747],{"class":1731},[79,1763,1764,1767],{"class":81,"line":112},[79,1765,1766],{"class":1731},"      - ",[79,1768,1769],{"class":162},"'src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\u002F**'\n",[79,1771,1772,1774],{"class":81,"line":118},[79,1773,1766],{"class":1731},[79,1775,1776],{"class":162},"'migrations\u002F**'\n",[79,1778,1779,1781],{"class":81,"line":124},[79,1780,1766],{"class":1731},[79,1782,1783],{"class":162},"'alembic\u002Fversions\u002F**'\n",[79,1785,1786,1788],{"class":81,"line":131},[79,1787,1766],{"class":1731},[79,1789,1790],{"class":162},"'sql\u002F**'\n",[79,1792,1793],{"class":81,"line":137},[79,1794,128],{"emptyLinePlaceholder":127},[79,1796,1797,1800],{"class":81,"line":483},[79,1798,1799],{"class":1727},"jobs",[79,1801,1747],{"class":1731},[79,1803,1804,1807],{"class":81,"line":489},[79,1805,1806],{"class":1727},"  schema-drift-check",[79,1808,1747],{"class":1731},[79,1810,1811,1814,1816],{"class":81,"line":495},[79,1812,1813],{"class":1727},"    name",[79,1815,1732],{"class":1731},[79,1817,1818],{"class":162},"Detect Schema Drift\n",[79,1820,1821,1824,1826],{"class":81,"line":501},[79,1822,1823],{"class":1727},"    runs-on",[79,1825,1732],{"class":1731},[79,1827,1828],{"class":162},"ubuntu-latest\n",[79,1830,1831],{"class":81,"line":507},[79,1832,128],{"emptyLinePlaceholder":127},[79,1834,1835,1838],{"class":81,"line":513},[79,1836,1837],{"class":1727},"    steps",[79,1839,1747],{"class":1731},[79,1841,1842,1844,1846,1848],{"class":81,"line":519},[79,1843,1766],{"class":1731},[79,1845,1728],{"class":1727},[79,1847,1732],{"class":1731},[79,1849,1850],{"class":162},"Checkout repository\n",[79,1852,1853,1856,1858],{"class":81,"line":525},[79,1854,1855],{"class":1727},"        
uses",[79,1857,1732],{"class":1731},[79,1859,1860],{"class":162},"actions\u002Fcheckout@v4\n",[79,1862,1863],{"class":81,"line":782},[79,1864,128],{"emptyLinePlaceholder":127},[79,1866,1867,1869,1871,1873],{"class":81,"line":788},[79,1868,1766],{"class":1731},[79,1870,1728],{"class":1727},[79,1872,1732],{"class":1731},[79,1874,1875],{"class":162},"Run Arcnull Schema Drift Scanner\n",[79,1877,1878,1880,1882],{"class":81,"line":794},[79,1879,1855],{"class":1727},[79,1881,1732],{"class":1731},[79,1883,1884],{"class":162},"arcnull-hq\u002Fschema-drift-action@v1\n",[79,1886,1887,1890],{"class":81,"line":800},[79,1888,1889],{"class":1727},"        with",[79,1891,1747],{"class":1731},[79,1893,1894,1897,1899],{"class":81,"line":806},[79,1895,1896],{"class":1727},"          database-url",[79,1898,1732],{"class":1731},[79,1900,1901],{"class":162},"${{ secrets.DRIFT_DATABASE_URL }}\n",[79,1903,1904,1907,1909],{"class":81,"line":1031},[79,1905,1906],{"class":1727},"          migration-path",[79,1908,1732],{"class":1731},[79,1910,1911],{"class":162},"src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n",[79,1913,1914,1917,1919],{"class":81,"line":1037},[79,1915,1916],{"class":1727},"          schema",[79,1918,1732],{"class":1731},[79,1920,1921],{"class":162},"public\n",[79,1923,1924,1927,1929],{"class":81,"line":1043},[79,1925,1926],{"class":1727},"          fail-on",[79,1928,1732],{"class":1731},[79,1930,1931],{"class":162},"breaking\n",[11,1933,1617,1934,1937],{},[15,1935,1936],{},"paths"," filter matters more than people think. It keeps the workflow from running on every single PR and limits it to changes that actually touch migrations. 
That saves CI time and keeps the signal cleaner — you do not want drift alerts on a PR that only changed a README.",[44,1939,1941],{"id":1940},"step-3-configure-the-inputs","Step 3: Configure the inputs",[1428,1943,1945],{"id":1944},"required","Required",[11,1947,1948,1951],{},[15,1949,1950],{},"database-url"," — the PostgreSQL connection string for the database you want to compare against.",[11,1953,1954,1957],{},[15,1955,1956],{},"migration-path"," — path to your migration files, relative to the repository root.",[1428,1959,1961],{"id":1960},"optional","Optional",[11,1963,1964,1967,1968,292],{},[15,1965,1966],{},"schema"," — PostgreSQL schema to scan. Defaults to ",[15,1969,1970],{},"public",[11,1972,1973,1976],{},[15,1974,1975],{},"fail-on"," — controls how strict the check is.",[11,1978,1979,1982,1983,261,1986,261,1989,1992,1993,292],{},[15,1980,1981],{},"migration-tool"," — one of ",[15,1984,1985],{},"flyway",[15,1987,1988],{},"liquibase",[15,1990,1991],{},"alembic",", or ",[15,1994,1995],{},"auto",[11,1997,1998,2001],{},[15,1999,2000],{},"ignore-patterns"," — comma-separated object name patterns to exclude from the check.",[1428,2003,2005,2006],{"id":2004},"understanding-fail-on","Understanding ",[15,2007,1975],{},[11,2009,2010],{},"This is the setting teams spend the most time thinking about, so it is worth being specific.",[11,2012,2013,2016],{},[15,2014,2015],{},"any"," — fail the PR for any detected drift at all. This is the strictest option. It makes sense when your team wants every schema change to flow through migrations with no exceptions, period.",[11,2018,2019,2022],{},[15,2020,2021],{},"breaking"," — fail only when the drift is likely to make the PR's migrations break. Missing tables, conflicting constraints, columns that already exist when the migration assumes they do not. Extra indexes or non-blocking columns still get reported but do not stop the merge. 
This is probably the right default for most teams.",[11,2024,2025,2028],{},[15,2026,2027],{},"none"," — never fail the check, just report what it finds. A good rollout setting when you want visibility before you start enforcing anything.",[11,2030,2031,2032,2034],{},"If you are not sure where to start, use ",[15,2033,2027],{}," first. See what your environment actually looks like before deciding how strict to be.",[44,2036,2038],{"id":2037},"step-4-read-the-output","Step 4: Read the output",[11,2040,2041],{},"The action produces three kinds of feedback.",[1428,2043,2045],{"id":2044},"pr-check-status","PR check status",[11,2047,2048,2049,2052],{},"The workflow passes or fails. With ",[15,2050,2051],{},"fail-on: breaking",", a breaking drift finding fails the check. If you have branch protection rules that require this check to pass, the PR cannot be merged until the issue is addressed.",[1428,2054,2056],{"id":2055},"pr-annotations","PR annotations",[11,2058,2059],{},"The action adds annotations directly to the migration files in the PR, pointing to the exact line where the migration assumes a schema state that no longer matches reality. 
Instead of a vague failure, you get a concrete pointer tied to the SQL in question.",[1428,2061,2063],{"id":2062},"drift-report-comment","Drift report comment",[11,2065,2066],{},"The action posts a summary comment on the PR:",[70,2068,2072],{"className":2069,"code":2070,"language":2071,"meta":75,"style":75},"language-markdown shiki shiki-themes github-light github-dark","## Schema Drift Report\n\n**Database:** production (postgres:\u002F\u002F...@prod-db:5432\u002Fmyapp)\n**Schema:** public\n**Scan time:** 342ms\n\n### Breaking Changes (1)\n\n| Object | Expected | Actual | Impact |\n|--------|----------|--------|--------|\n| `users.email_verified` | NOT EXISTS | `boolean DEFAULT false` | Migration V42 assumes column does not exist and will fail on ADD COLUMN |\n\n### Warnings (2)\n\n| Object | Expected | Actual | Impact |\n|--------|----------|--------|--------|\n| `idx_orders_created_at` | NOT EXISTS | `btree (created_at)` | Untracked index, no migration impact |\n| `payments.processor_ref` | NOT EXISTS | `text` | Untracked column, no migration impact |\n\n**Recommendation:** Resolve the 1 breaking change before merging. 
Create a migration that accounts for the existing `users.email_verified` column, or remove it from production if it was added in error.\n","markdown",[15,2073,2074,2079,2083,2088,2093,2098,2102,2107,2111,2116,2121,2126,2130,2135,2139,2143,2147,2152,2157,2161],{"__ignoreMap":75},[79,2075,2076],{"class":81,"line":82},[79,2077,2078],{},"## Schema Drift Report\n",[79,2080,2081],{"class":81,"line":88},[79,2082,128],{"emptyLinePlaceholder":127},[79,2084,2085],{"class":81,"line":94},[79,2086,2087],{},"**Database:** production (postgres:\u002F\u002F...@prod-db:5432\u002Fmyapp)\n",[79,2089,2090],{"class":81,"line":100},[79,2091,2092],{},"**Schema:** public\n",[79,2094,2095],{"class":81,"line":106},[79,2096,2097],{},"**Scan time:** 342ms\n",[79,2099,2100],{"class":81,"line":112},[79,2101,128],{"emptyLinePlaceholder":127},[79,2103,2104],{"class":81,"line":118},[79,2105,2106],{},"### Breaking Changes (1)\n",[79,2108,2109],{"class":81,"line":124},[79,2110,128],{"emptyLinePlaceholder":127},[79,2112,2113],{"class":81,"line":131},[79,2114,2115],{},"| Object | Expected | Actual | Impact |\n",[79,2117,2118],{"class":81,"line":137},[79,2119,2120],{},"|--------|----------|--------|--------|\n",[79,2122,2123],{"class":81,"line":483},[79,2124,2125],{},"| `users.email_verified` | NOT EXISTS | `boolean DEFAULT false` | Migration V42 assumes column does not exist and will fail on ADD COLUMN |\n",[79,2127,2128],{"class":81,"line":489},[79,2129,128],{"emptyLinePlaceholder":127},[79,2131,2132],{"class":81,"line":495},[79,2133,2134],{},"### Warnings (2)\n",[79,2136,2137],{"class":81,"line":501},[79,2138,128],{"emptyLinePlaceholder":127},[79,2140,2141],{"class":81,"line":507},[79,2142,2115],{},[79,2144,2145],{"class":81,"line":513},[79,2146,2120],{},[79,2148,2149],{"class":81,"line":519},[79,2150,2151],{},"| `idx_orders_created_at` | NOT EXISTS | `btree (created_at)` | Untracked index, no migration impact |\n",[79,2153,2154],{"class":81,"line":525},[79,2155,2156],{},"| `payments.processor_ref` 
| NOT EXISTS | `text` | Untracked column, no migration impact |\n",[79,2158,2159],{"class":81,"line":782},[79,2160,128],{"emptyLinePlaceholder":127},[79,2162,2163],{"class":81,"line":788},[79,2164,2165],{},"**Recommendation:** Resolve the 1 breaking change before merging. Create a migration that accounts for the existing `users.email_verified` column, or remove it from production if it was added in error.\n",[11,2167,2168],{},"The thing that makes this useful is the separation between \"this will actually break deployment\" and \"this is drift you should probably clean up.\" Not every mismatch needs to block a PR. The dangerous ones absolutely should.",[44,2170,2172],{"id":2171},"step-5-make-it-required-with-branch-protection","Step 5: Make it required with branch protection",[11,2174,2175],{},"Once you trust the signal, make the check required.",[11,2177,2178,2179,2182],{},"Go to your repository Settings → Branches → edit the protection rule for ",[15,2180,2181],{},"main"," → enable Require status checks to pass before merging → add Detect Schema Drift as a required check.",[11,2184,2185],{},"After that, PRs with breaking drift cannot be merged until someone resolves it.",[44,2187,2189],{"id":2188},"what-to-do-when-drift-is-detected","What to do when drift is detected",[1428,2191,2193],{"id":2192},"the-migration-is-wrong","The migration is wrong",[11,2195,2196],{},"Sometimes the problem is that the migration assumes a clean state that no longer exists. 
Make it more defensive:",[70,2198,2200],{"className":72,"code":2199,"language":74,"meta":75,"style":75},"-- Instead of:\nALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;\n\n-- Use:\nDO $$\nBEGIN\n    IF NOT EXISTS (\n        SELECT 1 FROM information_schema.columns\n        WHERE table_name = 'users'\n        AND column_name = 'email_verified'\n    ) THEN\n        ALTER TABLE users\n          ADD COLUMN email_verified boolean DEFAULT false;\n    END IF;\nEND $$;\n",[15,2201,2202,2207,2212,2216,2221,2226,2231,2236,2241,2246,2251,2256,2261,2266,2271],{"__ignoreMap":75},[79,2203,2204],{"class":81,"line":82},[79,2205,2206],{},"-- Instead of:\n",[79,2208,2209],{"class":81,"line":88},[79,2210,2211],{},"ALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;\n",[79,2213,2214],{"class":81,"line":94},[79,2215,128],{"emptyLinePlaceholder":127},[79,2217,2218],{"class":81,"line":100},[79,2219,2220],{},"-- Use:\n",[79,2222,2223],{"class":81,"line":106},[79,2224,2225],{},"DO $$\n",[79,2227,2228],{"class":81,"line":112},[79,2229,2230],{},"BEGIN\n",[79,2232,2233],{"class":81,"line":118},[79,2234,2235],{},"    IF NOT EXISTS (\n",[79,2237,2238],{"class":81,"line":124},[79,2239,2240],{},"        SELECT 1 FROM information_schema.columns\n",[79,2242,2243],{"class":81,"line":131},[79,2244,2245],{},"        WHERE table_name = 'users'\n",[79,2247,2248],{"class":81,"line":137},[79,2249,2250],{},"        AND column_name = 'email_verified'\n",[79,2252,2253],{"class":81,"line":483},[79,2254,2255],{},"    ) THEN\n",[79,2257,2258],{"class":81,"line":489},[79,2259,2260],{},"        ALTER TABLE users\n",[79,2262,2263],{"class":81,"line":495},[79,2264,2265],{},"          ADD COLUMN email_verified boolean DEFAULT false;\n",[79,2267,2268],{"class":81,"line":501},[79,2269,2270],{},"    END IF;\n",[79,2272,2273],{"class":81,"line":507},[79,2274,2275],{},"END $$;\n",[11,2277,2278],{},"This is especially useful when cleaning up legacy drift across multiple environments that 
have diverged over time.",[1428,2280,2282],{"id":2281},"the-production-change-was-intentional","The production change was intentional",[11,2284,2285],{},"A DBA added an index to fix a slow query. The change was deliberate but never made it back into versioned migrations. Create a migration that documents it:",[70,2287,2289],{"className":72,"code":2288,"language":74,"meta":75,"style":75},"-- V43__document_existing_index.sql\nCREATE INDEX IF NOT EXISTS idx_orders_created_at\n  ON orders (created_at);\n",[15,2290,2291,2296,2301],{"__ignoreMap":75},[79,2292,2293],{"class":81,"line":82},[79,2294,2295],{},"-- V43__document_existing_index.sql\n",[79,2297,2298],{"class":81,"line":88},[79,2299,2300],{},"CREATE INDEX IF NOT EXISTS idx_orders_created_at\n",[79,2302,2303],{"class":81,"line":94},[79,2304,2305],{},"  ON orders (created_at);\n",[11,2307,2308],{},"Safe to run whether the object exists already or not. Migration history catches up with reality.",[1428,2310,2312],{"id":2311},"the-production-change-was-accidental","The production change was accidental",[11,2314,2315],{},"If the drift came from an unintended manual change, revert it in production and restore alignment with your migration history.",[11,2317,2318],{},"Be careful here. Before removing anything, verify that nothing — no application code, no reporting job, no operational script — started depending on the accidental change.",[44,2320,2322],{"id":2321},"ignoring-known-drift","Ignoring known drift",[11,2324,2325],{},"Some drift is expected and permanent. Monitoring infrastructure, extension-managed objects, things your application does not own. 
Tell the action to skip them:",[70,2327,2329],{"className":1718,"code":2328,"language":1720,"meta":75,"style":75},"- name: Run Arcnull Schema Drift Scanner\n  uses: arcnull-hq\u002Fschema-drift-action@v1\n  with:\n    database-url: ${{ secrets.DRIFT_DATABASE_URL }}\n    migration-path: src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n    fail-on: breaking\n    ignore-patterns: \"pg_stat_%,pganalyze_%,idx_monitoring_%\"\n",[15,2330,2331,2342,2351,2358,2367,2376,2385],{"__ignoreMap":75},[79,2332,2333,2336,2338,2340],{"class":81,"line":82},[79,2334,2335],{"class":1731},"- ",[79,2337,1728],{"class":1727},[79,2339,1732],{"class":1731},[79,2341,1875],{"class":162},[79,2343,2344,2347,2349],{"class":81,"line":88},[79,2345,2346],{"class":1727},"  uses",[79,2348,1732],{"class":1731},[79,2350,1884],{"class":162},[79,2352,2353,2356],{"class":81,"line":94},[79,2354,2355],{"class":1727},"  with",[79,2357,1747],{"class":1731},[79,2359,2360,2363,2365],{"class":81,"line":100},[79,2361,2362],{"class":1727},"    database-url",[79,2364,1732],{"class":1731},[79,2366,1901],{"class":162},[79,2368,2369,2372,2374],{"class":81,"line":106},[79,2370,2371],{"class":1727},"    migration-path",[79,2373,1732],{"class":1731},[79,2375,1911],{"class":162},[79,2377,2378,2381,2383],{"class":81,"line":112},[79,2379,2380],{"class":1727},"    fail-on",[79,2382,1732],{"class":1731},[79,2384,1931],{"class":162},[79,2386,2387,2390,2392],{"class":81,"line":118},[79,2388,2389],{"class":1727},"    ignore-patterns",[79,2391,1732],{"class":1731},[79,2393,2394],{"class":162},"\"pg_stat_%,pganalyze_%,idx_monitoring_%\"\n",[11,2396,2397,2398,2401,2402,2405,2406,292],{},"Note: patterns use SQL ",[15,2399,2400],{},"LIKE"," syntax, not glob syntax. Use ",[15,2403,2404],{},"%"," as the wildcard, not ",[15,2407,2408],{},"*",[11,2410,2411],{},"Use ignore lists sparingly. They grow. 
Every pattern you add is one more place drift can hide undetected.",[44,2413,2415],{"id":2414},"common-issues","Common issues",[11,2417,2418,2421],{},[1504,2419,2420],{},"Action times out connecting to the database","\nYour database firewall may be blocking GitHub Actions IP ranges. Add those ranges to your database allowlist, or use a self-hosted runner inside your VPC.",[11,2423,2424,2427],{},[1504,2425,2426],{},"Action reports drift that was just resolved","\nThe action scans the live database at PR time. If drift was fixed after the PR was opened, close and reopen the PR to trigger a fresh scan.",[11,2429,2430,2433,2434,2436,2437,2439,2440,2442,2443,2446,2447,2450],{},[1504,2431,2432],{},"Patterns not matching in ignore-patterns","\nUse ",[15,2435,2404],{}," not ",[15,2438,2408],{},". SQL ",[15,2441,2400],{}," syntax, not glob syntax. ",[15,2444,2445],{},"pg_stat_%"," works. ",[15,2448,2449],{},"pg_stat_*"," does not.",[44,2452,2454],{"id":2453},"wrapping-up","Wrapping up",[11,2456,2457],{},"Schema drift checks feel optional right up until the day they save you from a bad production migration. Catching drift in a PR is a lot cheaper than discovering it mid-deploy, and considerably less stressful than debugging a migration failure at 2 AM.",[11,2459,2460,2461,2464,2465,2467],{},"The action handles the tedious work — reading the catalog, comparing expected versus actual state, reporting the differences where your team already works. 
A sensible way to roll it out is to start with ",[15,2462,2463],{},"fail-on: none",", clean up what you find, and then move to ",[15,2466,2051],{}," once the noise is under control.",[11,2469,2470],{},"That gives you a smoother adoption path and a much better chance of making schema checks something the team actually keeps enabled.",[11,2472,2473,2474,2476],{},"For continuous monitoring beyond CI — scheduled scans, Slack alerts, and historical drift tracking — ",[1306,2475,1309],{"href":1308}," handles all of that.",[1315,2478,2479],{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .s9eBZ, html code.shiki .s9eBZ{--shiki-default:#22863A;--shiki-dark:#85E89D}html pre.shiki code .sVt8B, html code.shiki .sVt8B{--shiki-default:#24292E;--shiki-dark:#E1E4E8}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .sj4cs, html code.shiki 
.sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}",{"title":75,"searchDepth":88,"depth":88,"links":2481},[2482,2483,2484,2485,2491,2496,2497,2502,2503,2504],{"id":1627,"depth":88,"text":1628},{"id":1651,"depth":88,"text":1652},{"id":1707,"depth":88,"text":1708},{"id":1940,"depth":88,"text":1941,"children":2486},[2487,2488,2489],{"id":1944,"depth":94,"text":1945},{"id":1960,"depth":94,"text":1961},{"id":2004,"depth":94,"text":2490},"Understanding fail-on",{"id":2037,"depth":88,"text":2038,"children":2492},[2493,2494,2495],{"id":2044,"depth":94,"text":2045},{"id":2055,"depth":94,"text":2056},{"id":2062,"depth":94,"text":2063},{"id":2171,"depth":88,"text":2172},{"id":2188,"depth":88,"text":2189,"children":2498},[2499,2500,2501],{"id":2192,"depth":94,"text":2193},{"id":2281,"depth":94,"text":2282},{"id":2311,"depth":94,"text":2312},{"id":2321,"depth":88,"text":2322},{"id":2414,"depth":88,"text":2415},{"id":2453,"depth":88,"text":2454},"A practical walkthrough for catching unapproved PostgreSQL schema changes in CI before they make it into production.",[2507,2508,2509],"postgresql schema github action","schema drift ci cd","database migration github action",{},"\u002Fblog\u002Fdetect-postgresql-schema-changes-github-action","2026-05-05","7",{"title":1602,"description":2505},"detect-postgresql-schema-changes-github-action","blog\u002Fdetect-postgresql-schema-changes-github-action","KO3ZMEM6L4JyjIxvR_FRBjcPKGGcBR8aGwntIKJRSnw"]