[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"blog-what-is-postgresql-schema-drift":3,"related-what-is-postgresql-schema-drift":497},{"id":4,"title":5,"author":6,"body":7,"description":480,"extension":481,"keywords":482,"meta":487,"navigation":488,"path":489,"publishedAt":490,"readTime":491,"seo":492,"slug":493,"stem":494,"updatedAt":495,"__hash__":496},"blog\u002Fblog\u002Fwhat-is-postgresql-schema-drift.md","What PostgreSQL schema drift looks like in production","Maxwell Kimaiyo",{"type":8,"value":9,"toc":455},"minimark",[10,14,22,25,28,31,36,39,42,45,48,52,55,60,63,67,74,78,81,85,88,92,95,99,102,106,113,117,120,124,127,131,134,138,141,144,157,160,163,177,180,188,191,194,197,202,205,216,225,232,238,241,244,345,348,351,355,358,361,375,378,382,385,388,391,394,398,401,404,421,424,428,431,434,440,443,451],[11,12,13],"p",{},"Schema drift usually does not announce itself clearly. It hides for weeks or months, then shows up at the worst possible time: during a deployment, in the middle of an incident, or right when a critical migration runs.",[11,15,16,17,21],{},"Your migration files say the ",[18,19,20],"code",{},"users"," table has twelve columns. Production has fourteen. Nobody is fully sure when the extra two were added, who added them, or whether any service depends on them. 
The next release assumes the migration history is correct, runs against production, and fails.",[11,23,24],{},"Now the deployment is stuck, the application is unstable, and the team is scrambling to understand what changed.",[11,26,27],{},"That is schema drift.",[11,29,30],{},"It is one of the most common causes of PostgreSQL deployment failures, but it does not get talked about nearly as much as query performance, replication issues, or connection pooling.",[32,33,35],"h2",{"id":34},"what-schema-drift-actually-is","What schema drift actually is",[11,37,38],{},"Schema drift happens when the database schema you think you have is no longer the schema you actually have.",[11,40,41],{},"The expected schema usually lives in migration files inside version control. The real schema lives in PostgreSQL itself. When those two stop matching, drift has already started.",[11,43,44],{},"A simple way to think about it: your migration files are the blueprint, and production is the building. If someone quietly adds a room, moves a wall, or changes the wiring without updating the blueprint, the next contractor is going to run into problems. The same thing happens with databases.",[11,46,47],{},"At first, nothing may look broken. The app still runs. Queries still work. But over time, that gap between expectation and reality grows. Eventually, a migration, rollback, or deployment trips over it.",[32,49,51],{"id":50},"what-schema-drift-looks-like-in-practice","What schema drift looks like in practice",[11,53,54],{},"Schema drift is not just one thing. It shows up in a few common ways.",[56,57,59],"h3",{"id":58},"column-drift","Column drift",[11,61,62],{},"A column gets added, removed, renamed, or changed directly in production, but no migration file is ever checked in. 
It often happens during incident response, when someone makes a quick fix under pressure.",[56,64,66],{"id":65},"constraint-drift","Constraint drift",[11,68,69,70,73],{},"Constraints are added, removed, or changed outside the normal migration path. This is especially risky because it can affect data quality without being immediately visible. A dropped ",[18,71,72],{},"NOT NULL"," or foreign key constraint may not trigger an outage right away, but it can quietly let bad data into the system.",[56,75,77],{"id":76},"index-drift","Index drift",[11,79,80],{},"Someone creates an index in production to fix a slow query, but the change never makes it back into source control. Everything seems fine until a future migration rebuilds the table and the index disappears.",[56,82,84],{"id":83},"extension-and-function-drift","Extension and function drift",[11,86,87],{},"Extensions, stored procedures, and custom functions are sometimes changed directly in production. These changes are easy to miss and can make environments behave differently in subtle ways.",[56,89,91],{"id":90},"permission-drift","Permission drift",[11,93,94],{},"Roles, grants, and access controls can drift too. That is not just an operational issue — it can also become a security and compliance problem.",[32,96,98],{"id":97},"how-drift-usually-starts","How drift usually starts",[11,100,101],{},"Schema drift does not usually happen because a team is careless. It happens because production systems are busy, real environments, and sometimes the migration pipeline is not the only place where changes happen.",[56,103,105],{"id":104},"the-late-night-hotfix","The late-night hotfix",[11,107,108,109,112],{},"A production issue needs an immediate fix. An engineer logs in, runs an ",[18,110,111],{},"ALTER TABLE",", solves the problem, and plans to backfill the migration later. 
Later never quite comes.",[56,114,116],{"id":115},"environment-mismatch","Environment mismatch",[11,118,119],{},"Staging and production look similar, but not identical. A migration passes in staging because staging is missing a manual change that production already has. The exact same migration then fails in production.",[56,121,123],{"id":122},"automatic-schema-changes-from-tooling","Automatic schema changes from tooling",[11,125,126],{},"Some frameworks and tools make schema changes easier, but that convenience can create problems if multiple services or teams are involved. When several actors touch the same database, migration history alone may stop telling the full story.",[56,128,130],{"id":129},"dba-or-ops-side-changes","DBA or ops-side changes",[11,132,133],{},"DBAs and platform teams sometimes make direct schema changes for perfectly valid reasons: performance tuning, partitioning, operational fixes, or emergency adjustments. If those changes are not reflected back into the main migration history, drift starts building.",[32,135,137],{"id":136},"why-this-becomes-a-production-problem","Why this becomes a production problem",[11,139,140],{},"Schema drift is dangerous because it stays invisible until something depends on the schema being exactly what the migration history says it is.",[11,142,143],{},"That usually happens during one of three moments:",[145,146,147,151,154],"ul",{},[148,149,150],"li",{},"A deployment",[148,152,153],{},"A rollback",[148,155,156],{},"A production incident",[11,158,159],{},"That is why it feels so disruptive. 
Drift often accumulates quietly, then surfaces when the team is already under pressure.",[11,161,162],{},"Instead of a straightforward release, you get a confusing failure:",[145,164,165,168,171,174],{},[148,166,167],{},"A migration tries to create something that already exists",[148,169,170],{},"A column is missing in one environment but not another",[148,172,173],{},"A constraint behaves differently than expected",[148,175,176],{},"A rollback script assumes a state the database no longer has",[11,178,179],{},"The real problem is not just that the schema changed. It is that the team lost a reliable picture of reality.",[32,181,183,184,187],{"id":182},"why-pg_dump-is-not-a-great-way-to-detect-drift","Why ",[18,185,186],{},"pg_dump"," is not a great way to detect drift",[11,189,190],{},"A lot of teams start with what seems like the obvious approach: dump the schema from two environments, diff the files, and look for differences.",[11,192,193],{},"On paper, that sounds reasonable.",[11,195,196],{},"In practice, it creates noise.",[11,198,199,201],{},[18,200,186],{}," is built for backup and restore. Its job is to generate SQL that can recreate the schema, not to produce a stable representation for comparison. That means ordering can vary even when nothing meaningful has changed.",[11,203,204],{},"When you diff two dumps, you may end up seeing changes that are not really changes at all:",[145,206,207,210,213],{},[148,208,209],{},"Constraints appearing in a different order",[148,211,212],{},"Index definitions grouped differently",[148,214,215],{},"Statements reordered in a way that makes the diff look louder than reality",[11,217,218,219,224],{},"That makes it harder to spot the drift that actually matters. 
For a deeper look at why, see the companion article on ",[220,221,223],"a",{"href":222},"\u002Fblog\u002Fpg-catalog-vs-pg-dump-schema-snapshots","pg_catalog vs pg_dump",".",[32,226,183,228,231],{"id":227},"why-pg_catalog-works-better",[18,229,230],{},"pg_catalog"," works better",[11,233,234,235,237],{},"PostgreSQL already stores its schema metadata in ",[18,236,230],{},". That gives you a much better foundation for drift detection because you can query the metadata directly and apply your own deterministic ordering.",[11,239,240],{},"If you query columns and explicitly sort by schema, table, and column position, the output stays stable for the same schema. That makes it much more reliable for comparison.",[11,242,243],{},"A simplified example:",[245,246,251],"pre",{"className":247,"code":248,"language":249,"meta":250,"style":250},"language-sql shiki shiki-themes github-light github-dark","SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    a.attname AS column_name,\n    pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n    a.attnotnull AS is_not_null,\n    a.attnum AS ordinal_position\nFROM pg_catalog.pg_attribute a\nJOIN pg_catalog.pg_class c ON a.attrelid = c.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n  AND c.relkind = 'r'\n  AND a.attnum > 0\n  AND NOT a.attisdropped\nORDER BY n.nspname, c.relname, a.attnum;\n","sql","",[18,252,253,261,267,273,279,285,291,297,303,309,315,321,327,333,339],{"__ignoreMap":250},[254,255,258],"span",{"class":256,"line":257},"line",1,[254,259,260],{},"SELECT\n",[254,262,264],{"class":256,"line":263},2,[254,265,266],{},"    n.nspname AS schema_name,\n",[254,268,270],{"class":256,"line":269},3,[254,271,272],{},"    c.relname AS table_name,\n",[254,274,276],{"class":256,"line":275},4,[254,277,278],{},"    a.attname AS column_name,\n",[254,280,282],{"class":256,"line":281},5,[254,283,284],{},"    
pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n",[254,286,288],{"class":256,"line":287},6,[254,289,290],{},"    a.attnotnull AS is_not_null,\n",[254,292,294],{"class":256,"line":293},7,[254,295,296],{},"    a.attnum AS ordinal_position\n",[254,298,300],{"class":256,"line":299},8,[254,301,302],{},"FROM pg_catalog.pg_attribute a\n",[254,304,306],{"class":256,"line":305},9,[254,307,308],{},"JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n",[254,310,312],{"class":256,"line":311},10,[254,313,314],{},"JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n",[254,316,318],{"class":256,"line":317},11,[254,319,320],{},"WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n",[254,322,324],{"class":256,"line":323},12,[254,325,326],{},"  AND c.relkind = 'r'\n",[254,328,330],{"class":256,"line":329},13,[254,331,332],{},"  AND a.attnum > 0\n",[254,334,336],{"class":256,"line":335},14,[254,337,338],{},"  AND NOT a.attisdropped\n",[254,340,342],{"class":256,"line":341},15,[254,343,344],{},"ORDER BY n.nspname, c.relname, a.attnum;\n",[11,346,347],{},"The same idea applies to constraints, indexes, triggers, functions, and extensions. Once you capture them in a consistent order, you can build a stable snapshot of the schema.",[11,349,350],{},"From there, you can hash the result and turn the entire schema into a single fingerprint. If the fingerprint changes, something changed. If it does not, the schema is the same. 
That removes a lot of false positives.",[32,352,354],{"id":353},"what-good-drift-detection-should-do","What good drift detection should do",[11,356,357],{},"A useful drift detection system should do more than say \"something is different.\"",[11,359,360],{},"It should tell you:",[145,362,363,366,369,372],{},[148,364,365],{},"What changed",[148,367,368],{},"Where it changed",[148,370,371],{},"Whether the change is likely to break a deployment",[148,373,374],{},"How serious it is",[11,376,377],{},"That is the real difference between noisy comparison and actionable detection.",[32,379,381],{"id":380},"why-catching-drift-early-matters","Why catching drift early matters",[11,383,384],{},"The direct cost of schema drift is downtime, failed releases, and engineering time spent debugging under pressure.",[11,386,387],{},"But there is a bigger cost: trust.",[11,389,390],{},"When teams stop trusting that staging matches production, every deployment feels riskier. Every migration needs more caution. Every rollback becomes more stressful. The delivery pipeline slows down because the schema is no longer something people feel confident about.",[11,392,393],{},"That affects far more than the database team. 
It affects release speed across the whole company.",[32,395,397],{"id":396},"a-simple-way-to-get-started","A simple way to get started",[11,399,400],{},"If you are not checking for drift today, you do not need a big platform project to begin.",[11,402,403],{},"Start by:",[405,406,407,412,415,418],"ol",{},[148,408,409,410],{},"Capturing a baseline snapshot from production using ",[18,411,230],{},[148,413,414],{},"Storing that output in a canonical form",[148,416,417],{},"Running the same snapshot regularly",[148,419,420],{},"Comparing the results for changes",[11,422,423],{},"Even that basic approach is far better than waiting for the next failed deployment to reveal the drift for you.",[32,425,427],{"id":426},"final-thought","Final thought",[11,429,430],{},"Schema drift is not rare. It is just easy to ignore until it becomes expensive.",[11,432,433],{},"Every team running PostgreSQL in production will eventually deal with it. The real choice is whether you discover it during a calm review cycle or during a stressful outage.",[11,435,436,437,439],{},"If you build detection on deterministic ",[18,438,230],{}," snapshots instead of noisy schema dumps, you can catch the real changes early and avoid a lot of painful surprises.",[11,441,442],{},"And in production, that kind of visibility is not just nice to have. 
It is part of staying deployable.",[11,444,445,446,450],{},"If you want this running automatically with CI gates, Slack alerts, and historical drift tracking, ",[220,447,449],{"href":448},"\u002Fproducts\u002Fdrift-scanner","Drift Scanner"," is built around exactly this approach.",[452,453,454],"style",{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":250,"searchDepth":263,"depth":263,"links":456},[457,458,465,471,472,474,476,477,478,479],{"id":34,"depth":263,"text":35},{"id":50,"depth":263,"text":51,"children":459},[460,461,462,463,464],{"id":58,"depth":269,"text":59},{"id":65,"depth":269,"text":66},{"id":76,"depth":269,"text":77},{"id":83,"depth":269,"text":84},{"id":90,"depth":269,"text":91},{"id":97,"depth":263,"text":98,"children":466},[467,468,469,470],{"id":104,"depth":269,"text":105},{"id":115,"depth":269,"text":116},{"id":122,"depth":269,"text":123},{"id":129,"depth":269,"text":130},{"id":136,"depth":263,"text":137},{"id":182,"depth":263,"text":473},"Why pg_dump is not a great way to detect drift",{"id":227,"depth":263,"text":475},"Why pg_catalog works 
better",{"id":353,"depth":263,"text":354},{"id":380,"depth":263,"text":381},{"id":396,"depth":263,"text":397},{"id":426,"depth":263,"text":427},"What schema drift actually is, how it happens, and why it tends to surface at the worst possible time.","md",[483,484,485,486],"postgresql schema drift","database schema monitoring","production outage prevention","postgresql migration",{},true,"\u002Fblog\u002Fwhat-is-postgresql-schema-drift","2026-04-14","8",{"title":5,"description":480},"what-is-postgresql-schema-drift","blog\u002Fwhat-is-postgresql-schema-drift","2026-04-15","GHhOfTrbFIkN8eDWk1yURvj7tzY_Xb8nW9rgeUpuFJI",[498,749],{"id":499,"title":500,"author":6,"body":501,"description":733,"extension":481,"keywords":734,"meta":741,"navigation":488,"path":742,"publishedAt":743,"readTime":744,"seo":745,"slug":746,"stem":747,"updatedAt":495,"__hash__":748},"blog\u002Fblog\u002Fmcp-server-security-governance-2026.md","MCP server security: why governance matters as agent tool use grows",{"type":8,"value":502,"toc":718},[503,506,509,512,515,518,521,525,528,531,534,537,540,544,547,550,553,570,573,577,581,584,587,591,594,597,601,604,607,611,614,617,621,624,627,631,634,637,640,643,647,650,657,663,669,675,681,685,688,691,694,698,701,704,707,710],[11,504,505],{},"The Model Context Protocol makes it much easier for AI agents to use real tools. That is a big step forward. It means the same model can query a database, call an internal API, update a CRM record, or trigger part of a deployment workflow through a common interface.",[11,507,508],{},"That simplicity is exactly why MCP is getting attention.",[11,510,511],{},"It is also why teams need to think more carefully about governance.",[11,513,514],{},"In many early MCP deployments, the focus is naturally on getting tools connected and workflows running. The security model often comes later. 
That creates a gap: agents can suddenly reach more systems, but the organization still has limited visibility into who is calling what, what data is being accessed, and which actions are being taken.",[11,516,517],{},"This is where governance starts to matter. Not because MCP is broken, but because a protocol for tool use does not automatically solve authentication, authorization, auditability, or rate control. Those still need to be designed.",[11,519,520],{},"This article looks at where the risks show up, why they grow quickly once multiple teams adopt MCP, and why a governance proxy is becoming a practical pattern for production environments.",[32,522,524],{"id":523},"what-mcp-is-and-why-teams-are-adopting-it","What MCP is, and why teams are adopting it",[11,526,527],{},"MCP gives AI agents a standard way to discover and call tools. An MCP server exposes tools with defined schemas, and an agent can call those tools as part of a conversation or workflow.",[11,529,530],{},"That sounds simple, but it is powerful in practice.",[11,532,533],{},"Once tools are exposed through MCP, an agent can work across multiple systems without custom glue code for every integration. A support assistant might look up a customer, check an order, issue a refund, and send a follow-up email in one flow. A developer assistant might read logs, inspect a schema, and open a ticket.",[11,535,536],{},"That is the appeal. Tool use becomes much easier to standardize.",[11,538,539],{},"The catch is that standardizing tool access also makes it easier to scale access before governance has caught up.",[32,541,543],{"id":542},"where-the-risk-starts","Where the risk starts",[11,545,546],{},"The risk usually does not begin with one obviously dangerous deployment. It starts with something useful and local.",[11,548,549],{},"A team creates an MCP server for one internal system. It helps with debugging, support, or reporting. Then another team starts using it for a different workflow. 
Then a third team connects it to an internal assistant. Before long, the same server is being used in several contexts, by different people, for different kinds of actions.",[11,551,552],{},"At that point, the question is no longer just whether the server works. The question becomes:",[145,554,555,558,561,564,567],{},[148,556,557],{},"Who is allowed to call which tools?",[148,559,560],{},"Which actions require approval?",[148,562,563],{},"What gets logged?",[148,565,566],{},"How do you trace a tool call back to a user, a session, or a business purpose?",[148,568,569],{},"What happens when an agent behaves unexpectedly?",[11,571,572],{},"Without a governance layer, those questions usually get answered inconsistently, or not at all.",[32,574,576],{"id":575},"five-practical-risks-of-ungoverned-mcp-servers","Five practical risks of ungoverned MCP servers",[56,578,580],{"id":579},"_1-prompt-injection-can-turn-tool-access-into-data-exposure","1. Prompt injection can turn tool access into data exposure",[11,582,583],{},"If an agent can read sensitive data and also take external actions, prompt injection becomes much more serious. A malicious instruction hidden in data can push the agent to retrieve information it should not expose, or send it somewhere it should not go.",[11,585,586],{},"What makes this hard is that the individual tool calls may look valid in isolation. The problem is the sequence and the intent behind it.",[56,588,590],{"id":589},"_2-tool-chaining-can-create-privilege-problems","2. Tool chaining can create privilege problems",[11,592,593],{},"One safe-looking tool call can become risky when combined with another. An agent may gather identifiers or context from one system, then use that context to make a higher-impact call somewhere else.",[11,595,596],{},"Traditional authorization checks are often request-by-request. Agent workflows are not always that simple. 
The surrounding chain matters.",[56,598,600],{"id":599},"_3-audit-trails-are-often-incomplete","3. Audit trails are often incomplete",[11,602,603],{},"Logging that \"tool X was called\" is not enough for most real-world governance needs. Teams usually need more context: who initiated the workflow, what data was touched, why the action happened, and whether a policy decision was involved.",[11,605,606],{},"Without that context, investigations get harder and compliance work gets weaker.",[56,608,610],{"id":609},"_4-runaway-agents-can-overwhelm-downstream-systems","4. Runaway agents can overwhelm downstream systems",[11,612,613],{},"Autonomous workflows can generate more volume than teams expect. Retries, loops, or poor workflow design can flood a server or the systems behind it.",[11,615,616],{},"MCP makes tool use easier. That also means mistakes can scale faster.",[56,618,620],{"id":619},"_5-sensitive-data-can-leak-through-responses-and-errors","5. Sensitive data can leak through responses and errors",[11,622,623],{},"Credentials, stack traces, or overly verbose error messages can escape through tool responses. An agent does not reliably understand that a token or secret is dangerous. It may repeat it, store it, or pass it along in another step.",[11,625,626],{},"That makes response filtering and redaction more important than many early implementations assume.",[32,628,630],{"id":629},"why-a-governance-proxy-helps","Why a governance proxy helps",[11,632,633],{},"A governance proxy sits between the agent and the MCP servers it uses.",[11,635,636],{},"Instead of every server implementing its own access model, logging conventions, and rate controls, the proxy becomes the place where those decisions are applied consistently. 
It can authenticate the caller, evaluate policy, log the request with context, limit abuse, and filter sensitive data before a response goes back to the agent.",[11,638,639],{},"That does not remove all risk, but it gives teams a much better control point.",[11,641,642],{},"It also matches how organizations usually want to manage production systems: one place for policy, one place for visibility, and one place to investigate what happened.",[32,644,646],{"id":645},"what-that-governance-layer-should-do","What that governance layer should do",[11,648,649],{},"At a minimum, a useful governance layer should handle a few things well.",[11,651,652,656],{},[653,654,655],"strong",{},"Authentication."," It should establish who is behind the request, whether that is a user, service, or agent session.",[11,658,659,662],{},[653,660,661],{},"Authorization."," It should evaluate whether a tool call is allowed based on identity, tool, parameters, and context.",[11,664,665,668],{},[653,666,667],{},"Audit logging."," It should record enough information to reconstruct what happened later, including the policy decision that was applied.",[11,670,671,674],{},[653,672,673],{},"Rate limiting."," It should keep one broken or badly behaved workflow from overwhelming shared systems.",[11,676,677,680],{},[653,678,679],{},"Data filtering."," It should be able to redact or block sensitive fields before they reach the model or the user.",[32,682,684],{"id":683},"why-this-matters-now","Why this matters now",[11,686,687],{},"MCP adoption is growing because it solves a real integration problem. That is a good thing. But once agents move from answering questions to taking actions, governance stops being a nice extra and starts becoming part of the production architecture.",[11,689,690],{},"The teams that handle this well will not necessarily be the ones with the most tools. 
They will be the ones with the clearest controls around how those tools are used.",[11,692,693],{},"Teams that delay governance will usually end up choosing between slower adoption and weaker controls. Neither is a good position once the workflows are already running in production.",[32,695,697],{"id":696},"conclusion","Conclusion",[11,699,700],{},"MCP makes agent tool use easier to standardize. Governance makes it safer to run at scale.",[11,702,703],{},"As more teams connect agents to databases, APIs, internal systems, and operational workflows, the main challenge is no longer just integration. It is visibility, control, and trust.",[11,705,706],{},"A governance proxy is one practical way to get there. It gives teams a central place to apply policy, capture audit context, and reduce the risk that comes with giving agents access to real systems.",[11,708,709],{},"If you are already experimenting with MCP in production, this is the point where governance starts to move from something to think about later to something worth designing for now.",[11,711,712,713,717],{},"If you are building this kind of control layer, ",[220,714,716],{"href":715},"\u002Fproducts\u002Fmcp-vault","MCP Vault"," is the direction we are exploring at Arcnull.",{"title":250,"searchDepth":263,"depth":263,"links":719},[720,721,722,729,730,731,732],{"id":523,"depth":263,"text":524},{"id":542,"depth":263,"text":543},{"id":575,"depth":263,"text":576,"children":723},[724,725,726,727,728],{"id":579,"depth":269,"text":580},{"id":589,"depth":269,"text":590},{"id":599,"depth":269,"text":600},{"id":609,"depth":269,"text":610},{"id":619,"depth":269,"text":620},{"id":629,"depth":263,"text":630},{"id":645,"depth":263,"text":646},{"id":683,"depth":263,"text":684},{"id":696,"depth":263,"text":697},"As more teams connect AI agents to real tools through MCP, access control, auditability, and oversight become practical production concerns. 
Here is why a governance layer is starting to matter.",[735,736,737,738,739,740],"mcp server security","mcp governance","ai agent security","model context protocol","mcp proxy","ai governance 2026",{},"\u002Fblog\u002Fmcp-server-security-governance-2026","2026-05-12","10 min read",{"title":500,"description":733},"mcp-server-security-governance-2026","blog\u002Fmcp-server-security-governance-2026","FM34I7GmFMb7DzrmfK2i88S8bqZEw5Kg5SWG9GAf-OE",{"id":750,"title":751,"author":6,"body":752,"description":1668,"extension":481,"keywords":1669,"meta":1673,"navigation":488,"path":1674,"publishedAt":1675,"readTime":1676,"seo":1677,"slug":1678,"stem":1679,"updatedAt":495,"__hash__":1680},"blog\u002Fblog\u002Fdetect-postgresql-schema-changes-github-action.md","Detecting PostgreSQL schema changes with a GitHub Action",{"type":8,"value":753,"toc":1643},[754,757,763,770,773,777,780,794,797,801,807,841,847,853,857,864,1094,1100,1104,1108,1114,1120,1124,1133,1139,1159,1165,1171,1174,1180,1186,1192,1198,1202,1205,1209,1216,1220,1223,1227,1230,1329,1332,1336,1339,1346,1349,1353,1357,1360,1439,1442,1446,1449,1469,1472,1476,1479,1482,1486,1489,1558,1572,1575,1579,1585,1591,1614,1618,1621,1631,1634,1640],[11,755,756],{},"Every team eventually gets burned by schema drift.",[11,758,759,760,762],{},"A migration passes in CI, looks fine in review, and then blows up in production because production is not actually in the state everyone thought it was. Maybe someone ran an ",[18,761,111],{}," during an incident. Maybe a DBA added an index to calm down a slow query. Either way, your migration history says one thing, and the database says another.",[11,764,765,766,769],{},"The ",[18,767,768],{},"arcnull-hq\u002Fschema-drift-action"," is meant to catch that before a pull request gets merged. 
It compares the schema changes introduced by your PR against the real state of your target database and flags anything that could break or drift from what your migrations expect.",[11,771,772],{},"In this walkthrough I will show you how to set it up in GitHub Actions, how to configure it safely for PostgreSQL, and what to look for when it reports drift.",[32,774,776],{"id":775},"what-you-need-before-you-start","What you need before you start",[11,778,779],{},"A few basics need to be in place:",[145,781,782,785,788,791],{},[148,783,784],{},"A PostgreSQL database to compare against — usually production or staging",[148,786,787],{},"A read-only PostgreSQL user the action can connect with",[148,789,790],{},"That connection string stored as a GitHub Actions secret",[148,792,793],{},"Migration files in your repository — Flyway, Liquibase, Alembic, or plain SQL all work",[11,795,796],{},"The action only reads schema metadata from PostgreSQL system catalogs. It does not need write access to anything.",[32,798,800],{"id":799},"step-1-create-a-read-only-database-user","Step 1: Create a read-only database user",[11,802,803,804,806],{},"The action needs to inspect ",[18,805,230],{}," to understand the current schema state. 
Give it a dedicated user with the minimum access it actually needs:",[245,808,810],{"className":247,"code":809,"language":249,"meta":250,"style":250},"CREATE ROLE schema_drift_reader\n  WITH LOGIN PASSWORD 'your-secure-password';\nGRANT CONNECT ON DATABASE your_database\n  TO schema_drift_reader;\nGRANT USAGE ON SCHEMA public\n  TO schema_drift_reader;\n",[18,811,812,817,822,827,832,837],{"__ignoreMap":250},[254,813,814],{"class":256,"line":257},[254,815,816],{},"CREATE ROLE schema_drift_reader\n",[254,818,819],{"class":256,"line":263},[254,820,821],{},"  WITH LOGIN PASSWORD 'your-secure-password';\n",[254,823,824],{"class":256,"line":269},[254,825,826],{},"GRANT CONNECT ON DATABASE your_database\n",[254,828,829],{"class":256,"line":275},[254,830,831],{},"  TO schema_drift_reader;\n",[254,833,834],{"class":256,"line":281},[254,835,836],{},"GRANT USAGE ON SCHEMA public\n",[254,838,839],{"class":256,"line":287},[254,840,831],{},[11,842,843,844,846],{},"Note: ",[18,845,230],{}," is readable by all PostgreSQL users by default — no explicit GRANT is needed. 
The above three statements are sufficient.",[11,848,849,850,224],{},"Store the connection string as a GitHub Actions secret named ",[18,851,852],{},"DRIFT_DATABASE_URL",[32,854,856],{"id":855},"step-2-add-the-workflow-file","Step 2: Add the workflow file",[11,858,859,860,863],{},"Create ",[18,861,862],{},".github\u002Fworkflows\u002Fschema-drift.yml",":",[245,865,869],{"className":866,"code":867,"language":868,"meta":250,"style":250},"language-yaml shiki shiki-themes github-light github-dark","name: Schema Drift Check\n\non:\n  pull_request:\n    paths:\n      - 'src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\u002F**'\n      - 'migrations\u002F**'\n      - 'alembic\u002Fversions\u002F**'\n      - 'sql\u002F**'\n\njobs:\n  schema-drift-check:\n    name: Detect Schema Drift\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout repository\n        uses: actions\u002Fcheckout@v4\n\n      - name: Run Arcnull Schema Drift Scanner\n        uses: arcnull-hq\u002Fschema-drift-action@v1\n        with:\n          database-url: ${{ secrets.DRIFT_DATABASE_URL }}\n          migration-path: src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n          schema: public\n          fail-on: breaking\n","yaml",[18,870,871,885,890,899,906,913,921,928,935,942,946,953,960,970,980,984,992,1004,1015,1020,1032,1042,1050,1061,1072,1083],{"__ignoreMap":250},[254,872,873,877,881],{"class":256,"line":257},[254,874,876],{"class":875},"s9eBZ","name",[254,878,880],{"class":879},"sVt8B",": ",[254,882,884],{"class":883},"sZZnC","Schema Drift Check\n",[254,886,887],{"class":256,"line":263},[254,888,889],{"emptyLinePlaceholder":488},"\n",[254,891,892,896],{"class":256,"line":269},[254,893,895],{"class":894},"sj4cs","on",[254,897,898],{"class":879},":\n",[254,900,901,904],{"class":256,"line":275},[254,902,903],{"class":875},"  pull_request",[254,905,898],{"class":879},[254,907,908,911],{"class":256,"line":281},[254,909,910],{"class":875},"    
paths",[254,912,898],{"class":879},[254,914,915,918],{"class":256,"line":287},[254,916,917],{"class":879},"      - ",[254,919,920],{"class":883},"'src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\u002F**'\n",[254,922,923,925],{"class":256,"line":293},[254,924,917],{"class":879},[254,926,927],{"class":883},"'migrations\u002F**'\n",[254,929,930,932],{"class":256,"line":299},[254,931,917],{"class":879},[254,933,934],{"class":883},"'alembic\u002Fversions\u002F**'\n",[254,936,937,939],{"class":256,"line":305},[254,938,917],{"class":879},[254,940,941],{"class":883},"'sql\u002F**'\n",[254,943,944],{"class":256,"line":311},[254,945,889],{"emptyLinePlaceholder":488},[254,947,948,951],{"class":256,"line":317},[254,949,950],{"class":875},"jobs",[254,952,898],{"class":879},[254,954,955,958],{"class":256,"line":323},[254,956,957],{"class":875},"  schema-drift-check",[254,959,898],{"class":879},[254,961,962,965,967],{"class":256,"line":329},[254,963,964],{"class":875},"    name",[254,966,880],{"class":879},[254,968,969],{"class":883},"Detect Schema Drift\n",[254,971,972,975,977],{"class":256,"line":335},[254,973,974],{"class":875},"    runs-on",[254,976,880],{"class":879},[254,978,979],{"class":883},"ubuntu-latest\n",[254,981,982],{"class":256,"line":341},[254,983,889],{"emptyLinePlaceholder":488},[254,985,987,990],{"class":256,"line":986},16,[254,988,989],{"class":875},"    steps",[254,991,898],{"class":879},[254,993,995,997,999,1001],{"class":256,"line":994},17,[254,996,917],{"class":879},[254,998,876],{"class":875},[254,1000,880],{"class":879},[254,1002,1003],{"class":883},"Checkout repository\n",[254,1005,1007,1010,1012],{"class":256,"line":1006},18,[254,1008,1009],{"class":875},"        
uses",[254,1011,880],{"class":879},[254,1013,1014],{"class":883},"actions\u002Fcheckout@v4\n",[254,1016,1018],{"class":256,"line":1017},19,[254,1019,889],{"emptyLinePlaceholder":488},[254,1021,1023,1025,1027,1029],{"class":256,"line":1022},20,[254,1024,917],{"class":879},[254,1026,876],{"class":875},[254,1028,880],{"class":879},[254,1030,1031],{"class":883},"Run Arcnull Schema Drift Scanner\n",[254,1033,1035,1037,1039],{"class":256,"line":1034},21,[254,1036,1009],{"class":875},[254,1038,880],{"class":879},[254,1040,1041],{"class":883},"arcnull-hq\u002Fschema-drift-action@v1\n",[254,1043,1045,1048],{"class":256,"line":1044},22,[254,1046,1047],{"class":875},"        with",[254,1049,898],{"class":879},[254,1051,1053,1056,1058],{"class":256,"line":1052},23,[254,1054,1055],{"class":875},"          database-url",[254,1057,880],{"class":879},[254,1059,1060],{"class":883},"${{ secrets.DRIFT_DATABASE_URL }}\n",[254,1062,1064,1067,1069],{"class":256,"line":1063},24,[254,1065,1066],{"class":875},"          migration-path",[254,1068,880],{"class":879},[254,1070,1071],{"class":883},"src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n",[254,1073,1075,1078,1080],{"class":256,"line":1074},25,[254,1076,1077],{"class":875},"          schema",[254,1079,880],{"class":879},[254,1081,1082],{"class":883},"public\n",[254,1084,1086,1089,1091],{"class":256,"line":1085},26,[254,1087,1088],{"class":875},"          fail-on",[254,1090,880],{"class":879},[254,1092,1093],{"class":883},"breaking\n",[11,1095,765,1096,1099],{},[18,1097,1098],{},"paths"," filter matters more than people think. It keeps the workflow from running on every single PR and limits it to changes that actually touch migrations. 
That saves CI time and keeps the signal cleaner — you do not want drift alerts on a PR that only changed a README.",[32,1101,1103],{"id":1102},"step-3-configure-the-inputs","Step 3: Configure the inputs",[56,1105,1107],{"id":1106},"required","Required",[11,1109,1110,1113],{},[18,1111,1112],{},"database-url"," — the PostgreSQL connection string for the database you want to compare against.",[11,1115,1116,1119],{},[18,1117,1118],{},"migration-path"," — path to your migration files, relative to the repository root.",[56,1121,1123],{"id":1122},"optional","Optional",[11,1125,1126,1129,1130,224],{},[18,1127,1128],{},"schema"," — PostgreSQL schema to scan. Defaults to ",[18,1131,1132],{},"public",[11,1134,1135,1138],{},[18,1136,1137],{},"fail-on"," — controls how strict the check is.",[11,1140,1141,1144,1145,1148,1149,1148,1152,1155,1156,224],{},[18,1142,1143],{},"migration-tool"," — one of ",[18,1146,1147],{},"flyway",", ",[18,1150,1151],{},"liquibase",[18,1153,1154],{},"alembic",", or ",[18,1157,1158],{},"auto",[11,1160,1161,1164],{},[18,1162,1163],{},"ignore-patterns"," — comma-separated object name patterns to exclude from the check.",[56,1166,1168,1169],{"id":1167},"understanding-fail-on","Understanding ",[18,1170,1137],{},[11,1172,1173],{},"This is the setting teams spend the most time thinking about, so it is worth being specific.",[11,1175,1176,1179],{},[18,1177,1178],{},"any"," — fail the PR for any detected drift at all. This is the strictest option. It makes sense when your team wants every schema change to flow through migrations with no exceptions, period.",[11,1181,1182,1185],{},[18,1183,1184],{},"breaking"," — fail only when the drift is likely to make the PR's migrations break. Missing tables, conflicting constraints, columns that already exist when the migration assumes they do not. Extra indexes or non-blocking columns still get reported but do not stop the merge. 
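To make the distinction concrete, consider two hypothetical drift findings against the same database. Only the first should stop the merge (these mirror the report example later in this post):

```sql
-- Breaking: production already has users.email_verified, so a
-- migration that blindly adds it will fail on ADD COLUMN.
ALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;

-- Non-breaking drift: an untracked index. It gets reported, but it
-- does not conflict with pending migrations, so the merge proceeds.
CREATE INDEX idx_orders_created_at ON orders (created_at);
```
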
This is probably the right default for most teams.",[11,1187,1188,1191],{},[18,1189,1190],{},"none"," — never fail the check, just report what it finds. A good rollout setting when you want visibility before you start enforcing anything.",[11,1193,1194,1195,1197],{},"If you are not sure where to start, use ",[18,1196,1190],{}," first. See what your environment actually looks like before deciding how strict to be.",[32,1199,1201],{"id":1200},"step-4-read-the-output","Step 4: Read the output",[11,1203,1204],{},"The action produces three kinds of feedback.",[56,1206,1208],{"id":1207},"pr-check-status","PR check status",[11,1210,1211,1212,1215],{},"The workflow passes or fails. With ",[18,1213,1214],{},"fail-on: breaking",", a breaking drift finding fails the check. If you have branch protection rules that require this check to pass, the PR cannot be merged until the issue is addressed.",[56,1217,1219],{"id":1218},"pr-annotations","PR annotations",[11,1221,1222],{},"The action adds annotations directly to the migration files in the PR, pointing to the exact line where the migration assumes a schema state that no longer matches reality. 
Instead of a vague failure, you get a concrete pointer tied to the SQL in question.",[56,1224,1226],{"id":1225},"drift-report-comment","Drift report comment",[11,1228,1229],{},"The action posts a summary comment on the PR:",[245,1231,1235],{"className":1232,"code":1233,"language":1234,"meta":250,"style":250},"language-markdown shiki shiki-themes github-light github-dark","## Schema Drift Report\n\n**Database:** production (postgres:\u002F\u002F...@prod-db:5432\u002Fmyapp)\n**Schema:** public\n**Scan time:** 342ms\n\n### Breaking Changes (1)\n\n| Object | Expected | Actual | Impact |\n|--------|----------|--------|--------|\n| `users.email_verified` | NOT EXISTS | `boolean DEFAULT false` | Migration V42 assumes column does not exist and will fail on ADD COLUMN |\n\n### Warnings (2)\n\n| Object | Expected | Actual | Impact |\n|--------|----------|--------|--------|\n| `idx_orders_created_at` | NOT EXISTS | `btree (created_at)` | Untracked index, no migration impact |\n| `payments.processor_ref` | NOT EXISTS | `text` | Untracked column, no migration impact |\n\n**Recommendation:** Resolve the 1 breaking change before merging. 
Create a migration that accounts for the existing `users.email_verified` column, or remove it from production if it was added in error.\n","markdown",[18,1236,1237,1242,1246,1251,1256,1261,1265,1270,1274,1279,1284,1289,1293,1298,1302,1306,1310,1315,1320,1324],{"__ignoreMap":250},[254,1238,1239],{"class":256,"line":257},[254,1240,1241],{},"## Schema Drift Report\n",[254,1243,1244],{"class":256,"line":263},[254,1245,889],{"emptyLinePlaceholder":488},[254,1247,1248],{"class":256,"line":269},[254,1249,1250],{},"**Database:** production (postgres:\u002F\u002F...@prod-db:5432\u002Fmyapp)\n",[254,1252,1253],{"class":256,"line":275},[254,1254,1255],{},"**Schema:** public\n",[254,1257,1258],{"class":256,"line":281},[254,1259,1260],{},"**Scan time:** 342ms\n",[254,1262,1263],{"class":256,"line":287},[254,1264,889],{"emptyLinePlaceholder":488},[254,1266,1267],{"class":256,"line":293},[254,1268,1269],{},"### Breaking Changes (1)\n",[254,1271,1272],{"class":256,"line":299},[254,1273,889],{"emptyLinePlaceholder":488},[254,1275,1276],{"class":256,"line":305},[254,1277,1278],{},"| Object | Expected | Actual | Impact |\n",[254,1280,1281],{"class":256,"line":311},[254,1282,1283],{},"|--------|----------|--------|--------|\n",[254,1285,1286],{"class":256,"line":317},[254,1287,1288],{},"| `users.email_verified` | NOT EXISTS | `boolean DEFAULT false` | Migration V42 assumes column does not exist and will fail on ADD COLUMN |\n",[254,1290,1291],{"class":256,"line":323},[254,1292,889],{"emptyLinePlaceholder":488},[254,1294,1295],{"class":256,"line":329},[254,1296,1297],{},"### Warnings (2)\n",[254,1299,1300],{"class":256,"line":335},[254,1301,889],{"emptyLinePlaceholder":488},[254,1303,1304],{"class":256,"line":341},[254,1305,1278],{},[254,1307,1308],{"class":256,"line":986},[254,1309,1283],{},[254,1311,1312],{"class":256,"line":994},[254,1313,1314],{},"| `idx_orders_created_at` | NOT EXISTS | `btree (created_at)` | Untracked index, no migration impact 
|\n",[254,1316,1317],{"class":256,"line":1006},[254,1318,1319],{},"| `payments.processor_ref` | NOT EXISTS | `text` | Untracked column, no migration impact |\n",[254,1321,1322],{"class":256,"line":1017},[254,1323,889],{"emptyLinePlaceholder":488},[254,1325,1326],{"class":256,"line":1022},[254,1327,1328],{},"**Recommendation:** Resolve the 1 breaking change before merging. Create a migration that accounts for the existing `users.email_verified` column, or remove it from production if it was added in error.\n",[11,1330,1331],{},"The thing that makes this useful is the separation between \"this will actually break deployment\" and \"this is drift you should probably clean up.\" Not every mismatch needs to block a PR. The dangerous ones absolutely should.",[32,1333,1335],{"id":1334},"step-5-make-it-required-with-branch-protection","Step 5: Make it required with branch protection",[11,1337,1338],{},"Once you trust the signal, make the check required.",[11,1340,1341,1342,1345],{},"Go to your repository Settings → Branches → edit the protection rule for ",[18,1343,1344],{},"main"," → enable Require status checks to pass before merging → add Detect Schema Drift as a required check.",[11,1347,1348],{},"After that, PRs with breaking drift cannot be merged until someone resolves it.",[32,1350,1352],{"id":1351},"what-to-do-when-drift-is-detected","What to do when drift is detected",[56,1354,1356],{"id":1355},"the-migration-is-wrong","The migration is wrong",[11,1358,1359],{},"Sometimes the problem is that the migration assumes a clean state that no longer exists. 
Make it more defensive:",[245,1361,1363],{"className":247,"code":1362,"language":249,"meta":250,"style":250},"-- Instead of:\nALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;\n\n-- Use:\nDO $$\nBEGIN\n    IF NOT EXISTS (\n        SELECT 1 FROM information_schema.columns\n        WHERE table_name = 'users'\n        AND column_name = 'email_verified'\n    ) THEN\n        ALTER TABLE users\n          ADD COLUMN email_verified boolean DEFAULT false;\n    END IF;\nEND $$;\n",[18,1364,1365,1370,1375,1379,1384,1389,1394,1399,1404,1409,1414,1419,1424,1429,1434],{"__ignoreMap":250},[254,1366,1367],{"class":256,"line":257},[254,1368,1369],{},"-- Instead of:\n",[254,1371,1372],{"class":256,"line":263},[254,1373,1374],{},"ALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;\n",[254,1376,1377],{"class":256,"line":269},[254,1378,889],{"emptyLinePlaceholder":488},[254,1380,1381],{"class":256,"line":275},[254,1382,1383],{},"-- Use:\n",[254,1385,1386],{"class":256,"line":281},[254,1387,1388],{},"DO $$\n",[254,1390,1391],{"class":256,"line":287},[254,1392,1393],{},"BEGIN\n",[254,1395,1396],{"class":256,"line":293},[254,1397,1398],{},"    IF NOT EXISTS (\n",[254,1400,1401],{"class":256,"line":299},[254,1402,1403],{},"        SELECT 1 FROM information_schema.columns\n",[254,1405,1406],{"class":256,"line":305},[254,1407,1408],{},"        WHERE table_name = 'users'\n",[254,1410,1411],{"class":256,"line":311},[254,1412,1413],{},"        AND column_name = 'email_verified'\n",[254,1415,1416],{"class":256,"line":317},[254,1417,1418],{},"    ) THEN\n",[254,1420,1421],{"class":256,"line":323},[254,1422,1423],{},"        ALTER TABLE users\n",[254,1425,1426],{"class":256,"line":329},[254,1427,1428],{},"          ADD COLUMN email_verified boolean DEFAULT false;\n",[254,1430,1431],{"class":256,"line":335},[254,1432,1433],{},"    END IF;\n",[254,1435,1436],{"class":256,"line":341},[254,1437,1438],{},"END $$;\n",[11,1440,1441],{},"This is especially useful when 
cleaning up legacy drift across multiple environments that have diverged over time.",[56,1443,1445],{"id":1444},"the-production-change-was-intentional","The production change was intentional",[11,1447,1448],{},"A DBA added an index to fix a slow query. The change was deliberate but never made it back into versioned migrations. Create a migration that documents it:",[245,1450,1452],{"className":247,"code":1451,"language":249,"meta":250,"style":250},"-- V43__document_existing_index.sql\nCREATE INDEX IF NOT EXISTS idx_orders_created_at\n  ON orders (created_at);\n",[18,1453,1454,1459,1464],{"__ignoreMap":250},[254,1455,1456],{"class":256,"line":257},[254,1457,1458],{},"-- V43__document_existing_index.sql\n",[254,1460,1461],{"class":256,"line":263},[254,1462,1463],{},"CREATE INDEX IF NOT EXISTS idx_orders_created_at\n",[254,1465,1466],{"class":256,"line":269},[254,1467,1468],{},"  ON orders (created_at);\n",[11,1470,1471],{},"Safe to run whether the object exists already or not. Migration history catches up with reality.",[56,1473,1475],{"id":1474},"the-production-change-was-accidental","The production change was accidental",[11,1477,1478],{},"If the drift came from an unintended manual change, revert it in production and restore alignment with your migration history.",[11,1480,1481],{},"Be careful here. Before removing anything, verify that nothing — no application code, no reporting job, no operational script — started depending on the accidental change.",[32,1483,1485],{"id":1484},"ignoring-known-drift","Ignoring known drift",[11,1487,1488],{},"Some drift is expected and permanent. Monitoring infrastructure, extension-managed objects, things your application does not own. 
Tell the action to skip them:",[245,1490,1492],{"className":866,"code":1491,"language":868,"meta":250,"style":250},"- name: Run Arcnull Schema Drift Scanner\n  uses: arcnull-hq\u002Fschema-drift-action@v1\n  with:\n    database-url: ${{ secrets.DRIFT_DATABASE_URL }}\n    migration-path: src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n    fail-on: breaking\n    ignore-patterns: \"pg_stat_%,pganalyze_%,idx_monitoring_%\"\n",[18,1493,1494,1505,1514,1521,1530,1539,1548],{"__ignoreMap":250},[254,1495,1496,1499,1501,1503],{"class":256,"line":257},[254,1497,1498],{"class":879},"- ",[254,1500,876],{"class":875},[254,1502,880],{"class":879},[254,1504,1031],{"class":883},[254,1506,1507,1510,1512],{"class":256,"line":263},[254,1508,1509],{"class":875},"  uses",[254,1511,880],{"class":879},[254,1513,1041],{"class":883},[254,1515,1516,1519],{"class":256,"line":269},[254,1517,1518],{"class":875},"  with",[254,1520,898],{"class":879},[254,1522,1523,1526,1528],{"class":256,"line":275},[254,1524,1525],{"class":875},"    database-url",[254,1527,880],{"class":879},[254,1529,1060],{"class":883},[254,1531,1532,1535,1537],{"class":256,"line":281},[254,1533,1534],{"class":875},"    migration-path",[254,1536,880],{"class":879},[254,1538,1071],{"class":883},[254,1540,1541,1544,1546],{"class":256,"line":287},[254,1542,1543],{"class":875},"    fail-on",[254,1545,880],{"class":879},[254,1547,1093],{"class":883},[254,1549,1550,1553,1555],{"class":256,"line":293},[254,1551,1552],{"class":875},"    ignore-patterns",[254,1554,880],{"class":879},[254,1556,1557],{"class":883},"\"pg_stat_%,pganalyze_%,idx_monitoring_%\"\n",[11,1559,1560,1561,1564,1565,1568,1569,224],{},"Note: patterns use SQL ",[18,1562,1563],{},"LIKE"," syntax, not glob syntax. Use ",[18,1566,1567],{},"%"," as the wildcard, not ",[18,1570,1571],{},"*",[11,1573,1574],{},"Use ignore lists sparingly. They grow. 
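Before committing a pattern, it is worth previewing exactly which objects it would cover. A hedged sketch against the catalog, run on the target database, using one of the patterns from the config above:

```sql
-- Preview what a candidate ignore pattern would match.
-- '%' is the SQL LIKE wildcard; '_' matches a single character.
SELECT c.relname AS object_name, c.relkind
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relname LIKE 'idx_monitoring_%';
```
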
Every pattern you add is one more place drift can hide undetected.",[32,1576,1578],{"id":1577},"common-issues","Common issues",[11,1580,1581,1584],{},[653,1582,1583],{},"Action times out connecting to the database","\nYour database firewall may be blocking GitHub Actions IP ranges. Add the GitHub Actions IP ranges to your database allowlist, or use a self-hosted runner inside your VPC.",[11,1586,1587,1590],{},[653,1588,1589],{},"Action reports drift that was just resolved","\nThe action scans the live database at PR time. If drift was fixed after the PR was opened, close and reopen the PR to trigger a fresh scan.",[11,1592,1593,1596,1597,1599,1600,1602,1603,1605,1606,1609,1610,1613],{},[653,1594,1595],{},"Patterns not matching in ignore-patterns","\nUse ",[18,1598,1567],{}," not ",[18,1601,1571],{},". SQL ",[18,1604,1563],{}," syntax, not glob syntax. ",[18,1607,1608],{},"pg_stat_%"," works. ",[18,1611,1612],{},"pg_stat_*"," does not.",[32,1615,1617],{"id":1616},"wrapping-up","Wrapping up",[11,1619,1620],{},"Schema drift checks feel optional right up until the day they save you from a bad production migration. Catching drift in a PR is a lot cheaper than discovering it mid-deploy, and considerably less stressful than debugging a migration failure at 2 AM.",[11,1622,1623,1624,1627,1628,1630],{},"The action handles the tedious work — reading the catalog, comparing expected versus actual state, reporting the differences where your team already works. 
A sensible way to roll it out is to start with ",[18,1625,1626],{},"fail-on: none",", clean up what you find, and then move to ",[18,1629,1214],{}," once the noise is under control.",[11,1632,1633],{},"That gives you a smoother adoption path and a much better chance of making schema checks something the team actually keeps enabled.",[11,1635,1636,1637,1639],{},"For continuous monitoring beyond CI — scheduled scans, Slack alerts, and historical drift tracking — ",[220,1638,449],{"href":448}," handles all of that.",[452,1641,1642],{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .s9eBZ, html code.shiki .s9eBZ{--shiki-default:#22863A;--shiki-dark:#85E89D}html pre.shiki code .sVt8B, html code.shiki .sVt8B{--shiki-default:#24292E;--shiki-dark:#E1E4E8}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .sj4cs, html code.shiki 
.sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}",{"title":250,"searchDepth":263,"depth":263,"links":1644},[1645,1646,1647,1648,1654,1659,1660,1665,1666,1667],{"id":775,"depth":263,"text":776},{"id":799,"depth":263,"text":800},{"id":855,"depth":263,"text":856},{"id":1102,"depth":263,"text":1103,"children":1649},[1650,1651,1652],{"id":1106,"depth":269,"text":1107},{"id":1122,"depth":269,"text":1123},{"id":1167,"depth":269,"text":1653},"Understanding fail-on",{"id":1200,"depth":263,"text":1201,"children":1655},[1656,1657,1658],{"id":1207,"depth":269,"text":1208},{"id":1218,"depth":269,"text":1219},{"id":1225,"depth":269,"text":1226},{"id":1334,"depth":263,"text":1335},{"id":1351,"depth":263,"text":1352,"children":1661},[1662,1663,1664],{"id":1355,"depth":269,"text":1356},{"id":1444,"depth":269,"text":1445},{"id":1474,"depth":269,"text":1475},{"id":1484,"depth":263,"text":1485},{"id":1577,"depth":263,"text":1578},{"id":1616,"depth":263,"text":1617},"A practical walkthrough for catching unapproved PostgreSQL schema changes in CI before they make it into production.",[1670,1671,1672],"postgresql schema github action","schema drift ci cd","database migration github action",{},"\u002Fblog\u002Fdetect-postgresql-schema-changes-github-action","2026-05-05","7",{"title":751,"description":1668},"detect-postgresql-schema-changes-github-action","blog\u002Fdetect-postgresql-schema-changes-github-action","KO3ZMEM6L4JyjIxvR_FRBjcPKGGcBR8aGwntIKJRSnw"]