[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"blog-articles":3},[4,269,1226,1863,3148],{"id":5,"title":6,"author":7,"body":8,"description":250,"extension":251,"keywords":252,"meta":259,"navigation":260,"path":261,"publishedAt":262,"readTime":263,"seo":264,"slug":265,"stem":266,"updatedAt":267,"__hash__":268},"blog\u002Fblog\u002Fmcp-server-security-governance-2026.md","MCP server security: why governance matters as agent tool use grows","Maxwell Kimaiyo",{"type":9,"value":10,"toc":232},"minimark",[11,15,18,21,24,27,30,35,38,41,44,47,50,54,57,60,63,82,85,89,94,97,100,104,107,110,114,117,120,124,127,130,134,137,140,144,147,150,153,156,160,163,170,176,182,188,194,198,201,204,207,211,214,217,220,223],[12,13,14],"p",{},"The Model Context Protocol makes it much easier for AI agents to use real tools. That is a big step forward. It means the same model can query a database, call an internal API, update a CRM record, or trigger part of a deployment workflow through a common interface.",[12,16,17],{},"That simplicity is exactly why MCP is getting attention.",[12,19,20],{},"It is also why teams need to think more carefully about governance.",[12,22,23],{},"In many early MCP deployments, the focus is naturally on getting tools connected and workflows running. The security model often comes later. That creates a gap: agents can suddenly reach more systems, but the organization still has limited visibility into who is calling what, what data is being accessed, and which actions are being taken.",[12,25,26],{},"This is where governance starts to matter. Not because MCP is broken, but because a protocol for tool use does not automatically solve authentication, authorization, auditability, or rate control. Those still need to be designed.",[12,28,29],{},"This article looks at where the risks show up, why they grow quickly once multiple teams adopt MCP, and why a governance proxy is becoming a practical pattern for production environments.",[31,32,34],"h2",{"id":33},"what-mcp-is-and-why-teams-are-adopting-it","What MCP is, and why teams are adopting it",[12,36,37],{},"MCP gives AI agents a standard way to discover and call tools. An MCP server exposes tools with defined schemas, and an agent can call those tools as part of a conversation or workflow.",[12,39,40],{},"That sounds simple, but it is powerful in practice.",[12,42,43],{},"Once tools are exposed through MCP, an agent can work across multiple systems without custom glue code for every integration. A support assistant might look up a customer, check an order, issue a refund, and send a follow-up email in one flow. A developer assistant might read logs, inspect a schema, and open a ticket.",[12,45,46],{},"That is the appeal. Tool use becomes much easier to standardize.",[12,48,49],{},"The catch is that standardizing tool access also makes it easier to scale access before governance has caught up.",[31,51,53],{"id":52},"where-the-risk-starts","Where the risk starts",[12,55,56],{},"The risk usually does not begin with one obviously dangerous deployment. It starts with something useful and local.",[12,58,59],{},"A team creates an MCP server for one internal system. It helps with debugging, support, or reporting. Then another team starts using it for a different workflow. Then a third team connects it to an internal assistant. Before long, the same server is being used in several contexts, by different people, for different kinds of actions.",[12,61,62],{},"At that point, the question is no longer just whether the server works. The question becomes:",[64,65,66,70,73,76,79],"ul",{},[67,68,69],"li",{},"Who is allowed to call which tools?",[67,71,72],{},"Which actions require approval?",[67,74,75],{},"What gets logged?",[67,77,78],{},"How do you trace a tool call back to a user, a session, or a business purpose?",[67,80,81],{},"What happens when an agent behaves unexpectedly?",[12,83,84],{},"Without a governance layer, those questions usually get answered inconsistently, or not at all.",[31,86,88],{"id":87},"five-practical-risks-of-ungoverned-mcp-servers","Five practical risks of ungoverned MCP servers",[90,91,93],"h3",{"id":92},"_1-prompt-injection-can-turn-tool-access-into-data-exposure","1. Prompt injection can turn tool access into data exposure",[12,95,96],{},"If an agent can read sensitive data and also take external actions, prompt injection becomes much more serious. A malicious instruction hidden in data can push the agent to retrieve information it should not expose, or send it somewhere it should not go.",[12,98,99],{},"What makes this hard is that the individual tool calls may look valid in isolation. The problem is the sequence and the intent behind it.",[90,101,103],{"id":102},"_2-tool-chaining-can-create-privilege-problems","2. Tool chaining can create privilege problems",[12,105,106],{},"One safe-looking tool call can become risky when combined with another. An agent may gather identifiers or context from one system, then use that context to make a higher-impact call somewhere else.",[12,108,109],{},"Traditional authorization checks are often request-by-request. Agent workflows are not always that simple. The surrounding chain matters.",[90,111,113],{"id":112},"_3-audit-trails-are-often-incomplete","3. Audit trails are often incomplete",[12,115,116],{},"Logging that \"tool X was called\" is not enough for most real-world governance needs. Teams usually need more context: who initiated the workflow, what data was touched, why the action happened, and whether a policy decision was involved.",[12,118,119],{},"Without that context, investigations get harder and compliance work gets weaker.",[90,121,123],{"id":122},"_4-runaway-agents-can-overwhelm-downstream-systems","4. Runaway agents can overwhelm downstream systems",[12,125,126],{},"Autonomous workflows can generate more volume than teams expect. Retries, loops, or poor workflow design can flood a server or the systems behind it.",[12,128,129],{},"MCP makes tool use easier. That also means mistakes can scale faster.",[90,131,133],{"id":132},"_5-sensitive-data-can-leak-through-responses-and-errors","5. Sensitive data can leak through responses and errors",[12,135,136],{},"Credentials, stack traces, or overly verbose error messages can escape through tool responses. An agent does not reliably understand that a token or secret is dangerous. It may repeat it, store it, or pass it along in another step.",[12,138,139],{},"That makes response filtering and redaction more important than many early implementations assume.",[31,141,143],{"id":142},"why-a-governance-proxy-helps","Why a governance proxy helps",[12,145,146],{},"A governance proxy sits between the agent and the MCP servers it uses.",[12,148,149],{},"Instead of every server implementing its own access model, logging conventions, and rate controls, the proxy becomes the place where those decisions are applied consistently. It can authenticate the caller, evaluate policy, log the request with context, limit abuse, and filter sensitive data before a response goes back to the agent.",[12,151,152],{},"That does not remove all risk, but it gives teams a much better control point.",[12,154,155],{},"It also matches how organizations usually want to manage production systems: one place for policy, one place for visibility, and one place to investigate what happened.",[31,157,159],{"id":158},"what-that-governance-layer-should-do","What that governance layer should do",[12,161,162],{},"At a minimum, a useful governance layer should handle a few things well.",[12,164,165,169],{},[166,167,168],"strong",{},"Authentication."," It should establish who is behind the request, whether that is a user, service, or agent session.",[12,171,172,175],{},[166,173,174],{},"Authorization."," It should evaluate whether a tool call is allowed based on identity, tool, parameters, and context.",[12,177,178,181],{},[166,179,180],{},"Audit logging."," It should record enough information to reconstruct what happened later, including the policy decision that was applied.",[12,183,184,187],{},[166,185,186],{},"Rate limiting."," It should keep one broken or badly behaved workflow from overwhelming shared systems.",[12,189,190,193],{},[166,191,192],{},"Data filtering."," It should be able to redact or block sensitive fields before they reach the model or the user.",[31,195,197],{"id":196},"why-this-matters-now","Why this matters now",[12,199,200],{},"MCP adoption is growing because it solves a real integration problem. That is a good thing. But once agents move from answering questions to taking actions, governance stops being a nice extra and starts becoming part of the production architecture.",[12,202,203],{},"The teams that handle this well will not necessarily be the ones with the most tools. They will be the ones with the clearest controls around how those tools are used.",[12,205,206],{},"Teams that delay governance will usually end up choosing between slower adoption and weaker controls. Neither is a good position once the workflows are already running in production.",[31,208,210],{"id":209},"conclusion","Conclusion",[12,212,213],{},"MCP makes agent tool use easier to standardize. Governance makes it safer to run at scale.",[12,215,216],{},"As more teams connect agents to databases, APIs, internal systems, and operational workflows, the main challenge is no longer just integration. It is visibility, control, and trust.",[12,218,219],{},"A governance proxy is one practical way to get there. It gives teams a central place to apply policy, capture audit context, and reduce the risk that comes with giving agents access to real systems.",[12,221,222],{},"If you are already experimenting with MCP in production, this is the point where governance starts to move from something to think about later to something worth designing for now.",[12,224,225,226,231],{},"If you are building this kind of control layer, ",[227,228,230],"a",{"href":229},"\u002Fproducts\u002Fmcp-vault","MCP Vault"," is the direction we are exploring at Arcnull.",{"title":233,"searchDepth":234,"depth":234,"links":235},"",2,[236,237,238,246,247,248,249],{"id":33,"depth":234,"text":34},{"id":52,"depth":234,"text":53},{"id":87,"depth":234,"text":88,"children":239},[240,242,243,244,245],{"id":92,"depth":241,"text":93},3,{"id":102,"depth":241,"text":103},{"id":112,"depth":241,"text":113},{"id":122,"depth":241,"text":123},{"id":132,"depth":241,"text":133},{"id":142,"depth":234,"text":143},{"id":158,"depth":234,"text":159},{"id":196,"depth":234,"text":197},{"id":209,"depth":234,"text":210},"As more teams connect AI agents to real tools through MCP, access control, auditability, and oversight become practical production concerns. Here is why a governance layer is starting to matter.","md",[253,254,255,256,257,258],"mcp server security","mcp governance","ai agent security","model context protocol","mcp proxy","ai governance 2026",{},true,"\u002Fblog\u002Fmcp-server-security-governance-2026","2026-05-12","10 min read",{"title":6,"description":250},"mcp-server-security-governance-2026","blog\u002Fmcp-server-security-governance-2026","2026-04-15","FM34I7GmFMb7DzrmfK2i88S8bqZEw5Kg5SWG9GAf-OE",{"id":270,"title":271,"author":7,"body":272,"description":1213,"extension":251,"keywords":1214,"meta":1218,"navigation":260,"path":1219,"publishedAt":1220,"readTime":1221,"seo":1222,"slug":1223,"stem":1224,"updatedAt":267,"__hash__":1225},"blog\u002Fblog\u002Fdetect-postgresql-schema-changes-github-action.md","Detecting PostgreSQL schema changes with a GitHub Action",{"type":9,"value":273,"toc":1188},[274,277,285,292,295,299,302,316,319,323,330,373,379,386,390,397,636,642,646,650,656,662,666,675,681,701,707,713,716,722,728,734,740,744,747,751,758,762,765,769,772,871,874,878,881,888,891,895,899,902,981,984,988,991,1011,1014,1018,1021,1024,1028,1031,1100,1114,1117,1121,1127,1133,1156,1160,1163,1173,1176,1184],[12,275,276],{},"Every team eventually gets burned by schema drift.",[12,278,279,280,284],{},"A migration passes in CI, looks fine in review, and then blows up in production because production is not actually in the state everyone thought it was. Maybe someone ran an ",[281,282,283],"code",{},"ALTER TABLE"," during an incident. Maybe a DBA added an index to calm down a slow query. Either way, your migration history says one thing, and the database says another.",[12,286,287,288,291],{},"The ",[281,289,290],{},"arcnull-hq\u002Fschema-drift-action"," is meant to catch that before a pull request gets merged. It compares the schema changes introduced by your PR against the real state of your target database and flags anything that could break or drift from what your migrations expect.",[12,293,294],{},"In this walkthrough I will show you how to set it up in GitHub Actions, how to configure it safely for PostgreSQL, and what to look for when it reports drift.",[31,296,298],{"id":297},"what-you-need-before-you-start","What you need before you start",[12,300,301],{},"A few basics need to be in place:",[64,303,304,307,310,313],{},[67,305,306],{},"A PostgreSQL database to compare against — usually production or staging",[67,308,309],{},"A read-only PostgreSQL user the action can connect with",[67,311,312],{},"That connection string stored as a GitHub Actions secret",[67,314,315],{},"Migration files in your repository — Flyway, Liquibase, Alembic, or plain SQL all work",[12,317,318],{},"The action only reads schema metadata from PostgreSQL system catalogs. It does not need write access to anything.",[31,320,322],{"id":321},"step-1-create-a-read-only-database-user","Step 1: Create a read-only database user",[12,324,325,326,329],{},"The action needs to inspect ",[281,327,328],{},"pg_catalog"," to understand the current schema state. Give it a dedicated user with the minimum access it actually needs:",[331,332,336],"pre",{"className":333,"code":334,"language":335,"meta":233,"style":233},"language-sql shiki shiki-themes github-light github-dark","CREATE ROLE schema_drift_reader\n  WITH LOGIN PASSWORD 'your-secure-password';\nGRANT CONNECT ON DATABASE your_database\n  TO schema_drift_reader;\nGRANT USAGE ON SCHEMA public\n  TO schema_drift_reader;\n","sql",[281,337,338,346,351,356,362,368],{"__ignoreMap":233},[339,340,343],"span",{"class":341,"line":342},"line",1,[339,344,345],{},"CREATE ROLE schema_drift_reader\n",[339,347,348],{"class":341,"line":234},[339,349,350],{},"  WITH LOGIN PASSWORD 'your-secure-password';\n",[339,352,353],{"class":341,"line":241},[339,354,355],{},"GRANT CONNECT ON DATABASE your_database\n",[339,357,359],{"class":341,"line":358},4,[339,360,361],{},"  TO schema_drift_reader;\n",[339,363,365],{"class":341,"line":364},5,[339,366,367],{},"GRANT USAGE ON SCHEMA public\n",[339,369,371],{"class":341,"line":370},6,[339,372,361],{},[12,374,375,376,378],{},"Note: ",[281,377,328],{}," is readable by all PostgreSQL users by default — no explicit GRANT is needed. The above three statements are sufficient.",[12,380,381,382,385],{},"Store the connection string as a GitHub Actions secret named ",[281,383,384],{},"DRIFT_DATABASE_URL",".",[31,387,389],{"id":388},"step-2-add-the-workflow-file","Step 2: Add the workflow file",[12,391,392,393,396],{},"Create ",[281,394,395],{},".github\u002Fworkflows\u002Fschema-drift.yml",":",[331,398,402],{"className":399,"code":400,"language":401,"meta":233,"style":233},"language-yaml shiki shiki-themes github-light github-dark","name: Schema Drift Check\n\non:\n  pull_request:\n    paths:\n      - 'src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\u002F**'\n      - 'migrations\u002F**'\n      - 'alembic\u002Fversions\u002F**'\n      - 'sql\u002F**'\n\njobs:\n  schema-drift-check:\n    name: Detect Schema Drift\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout repository\n        uses: actions\u002Fcheckout@v4\n\n      - name: Run Arcnull Schema Drift Scanner\n        uses: arcnull-hq\u002Fschema-drift-action@v1\n        with:\n          database-url: ${{ secrets.DRIFT_DATABASE_URL }}\n          migration-path: src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n          schema: public\n          fail-on: breaking\n","yaml",[281,403,404,418,423,432,439,446,454,462,470,478,483,491,499,510,521,526,534,546,557,562,574,584,592,603,614,625],{"__ignoreMap":233},[339,405,406,410,414],{"class":341,"line":342},[339,407,409],{"class":408},"s9eBZ","name",[339,411,413],{"class":412},"sVt8B",": ",[339,415,417],{"class":416},"sZZnC","Schema Drift Check\n",[339,419,420],{"class":341,"line":234},[339,421,422],{"emptyLinePlaceholder":260},"\n",[339,424,425,429],{"class":341,"line":241},[339,426,428],{"class":427},"sj4cs","on",[339,430,431],{"class":412},":\n",[339,433,434,437],{"class":341,"line":358},[339,435,436],{"class":408},"  pull_request",[339,438,431],{"class":412},[339,440,441,444],{"class":341,"line":364},[339,442,443],{"class":408},"    paths",[339,445,431],{"class":412},[339,447,448,451],{"class":341,"line":370},[339,449,450],{"class":412},"      - ",[339,452,453],{"class":416},"'src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\u002F**'\n",[339,455,457,459],{"class":341,"line":456},7,[339,458,450],{"class":412},[339,460,461],{"class":416},"'migrations\u002F**'\n",[339,463,465,467],{"class":341,"line":464},8,[339,466,450],{"class":412},[339,468,469],{"class":416},"'alembic\u002Fversions\u002F**'\n",[339,471,473,475],{"class":341,"line":472},9,[339,474,450],{"class":412},[339,476,477],{"class":416},"'sql\u002F**'\n",[339,479,481],{"class":341,"line":480},10,[339,482,422],{"emptyLinePlaceholder":260},[339,484,486,489],{"class":341,"line":485},11,[339,487,488],{"class":408},"jobs",[339,490,431],{"class":412},[339,492,494,497],{"class":341,"line":493},12,[339,495,496],{"class":408},"  schema-drift-check",[339,498,431],{"class":412},[339,500,502,505,507],{"class":341,"line":501},13,[339,503,504],{"class":408},"    name",[339,506,413],{"class":412},[339,508,509],{"class":416},"Detect Schema Drift\n",[339,511,513,516,518],{"class":341,"line":512},14,[339,514,515],{"class":408},"    runs-on",[339,517,413],{"class":412},[339,519,520],{"class":416},"ubuntu-latest\n",[339,522,524],{"class":341,"line":523},15,[339,525,422],{"emptyLinePlaceholder":260},[339,527,529,532],{"class":341,"line":528},16,[339,530,531],{"class":408},"    steps",[339,533,431],{"class":412},[339,535,537,539,541,543],{"class":341,"line":536},17,[339,538,450],{"class":412},[339,540,409],{"class":408},[339,542,413],{"class":412},[339,544,545],{"class":416},"Checkout repository\n",[339,547,549,552,554],{"class":341,"line":548},18,[339,550,551],{"class":408},"        uses",[339,553,413],{"class":412},[339,555,556],{"class":416},"actions\u002Fcheckout@v4\n",[339,558,560],{"class":341,"line":559},19,[339,561,422],{"emptyLinePlaceholder":260},[339,563,565,567,569,571],{"class":341,"line":564},20,[339,566,450],{"class":412},[339,568,409],{"class":408},[339,570,413],{"class":412},[339,572,573],{"class":416},"Run Arcnull Schema Drift Scanner\n",[339,575,577,579,581],{"class":341,"line":576},21,[339,578,551],{"class":408},[339,580,413],{"class":412},[339,582,583],{"class":416},"arcnull-hq\u002Fschema-drift-action@v1\n",[339,585,587,590],{"class":341,"line":586},22,[339,588,589],{"class":408},"        with",[339,591,431],{"class":412},[339,593,595,598,600],{"class":341,"line":594},23,[339,596,597],{"class":408},"          database-url",[339,599,413],{"class":412},[339,601,602],{"class":416},"${{ secrets.DRIFT_DATABASE_URL }}\n",[339,604,606,609,611],{"class":341,"line":605},24,[339,607,608],{"class":408},"          migration-path",[339,610,413],{"class":412},[339,612,613],{"class":416},"src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n",[339,615,617,620,622],{"class":341,"line":616},25,[339,618,619],{"class":408},"          schema",[339,621,413],{"class":412},[339,623,624],{"class":416},"public\n",[339,626,628,631,633],{"class":341,"line":627},26,[339,629,630],{"class":408},"          fail-on",[339,632,413],{"class":412},[339,634,635],{"class":416},"breaking\n",[12,637,287,638,641],{},[281,639,640],{},"paths"," filter matters more than people think. It keeps the workflow from running on every single PR and limits it to changes that actually touch migrations. That saves CI time and keeps the signal cleaner — you do not want drift alerts on a PR that only changed a README.",[31,643,645],{"id":644},"step-3-configure-the-inputs","Step 3: Configure the inputs",[90,647,649],{"id":648},"required","Required",[12,651,652,655],{},[281,653,654],{},"database-url"," — the PostgreSQL connection string for the database you want to compare against.",[12,657,658,661],{},[281,659,660],{},"migration-path"," — path to your migration files, relative to the repository root.",[90,663,665],{"id":664},"optional","Optional",[12,667,668,671,672,385],{},[281,669,670],{},"schema"," — PostgreSQL schema to scan. Defaults to ",[281,673,674],{},"public",[12,676,677,680],{},[281,678,679],{},"fail-on"," — controls how strict the check is.",[12,682,683,686,687,690,691,690,694,697,698,385],{},[281,684,685],{},"migration-tool"," — one of ",[281,688,689],{},"flyway",", ",[281,692,693],{},"liquibase",[281,695,696],{},"alembic",", or ",[281,699,700],{},"auto",[12,702,703,706],{},[281,704,705],{},"ignore-patterns"," — comma-separated object name patterns to exclude from the check.",[90,708,710,711],{"id":709},"understanding-fail-on","Understanding ",[281,712,679],{},[12,714,715],{},"This is the setting teams spend the most time thinking about, so it is worth being specific.",[12,717,718,721],{},[281,719,720],{},"any"," — fail the PR for any detected drift at all. This is the strictest option. It makes sense when your team wants every schema change to flow through migrations with no exceptions, period.",[12,723,724,727],{},[281,725,726],{},"breaking"," — fail only when the drift is likely to make the PR's migrations break. Missing tables, conflicting constraints, columns that already exist when the migration assumes they do not. Extra indexes or non-blocking columns still get reported but do not stop the merge. This is probably the right default for most teams.",[12,729,730,733],{},[281,731,732],{},"none"," — never fail the check, just report what it finds. A good rollout setting when you want visibility before you start enforcing anything.",[12,735,736,737,739],{},"If you are not sure where to start, use ",[281,738,732],{}," first. See what your environment actually looks like before deciding how strict to be.",[31,741,743],{"id":742},"step-4-read-the-output","Step 4: Read the output",[12,745,746],{},"The action produces three kinds of feedback.",[90,748,750],{"id":749},"pr-check-status","PR check status",[12,752,753,754,757],{},"The workflow passes or fails. With ",[281,755,756],{},"fail-on: breaking",", a breaking drift finding fails the check. If you have branch protection rules that require this check to pass, the PR cannot be merged until the issue is addressed.",[90,759,761],{"id":760},"pr-annotations","PR annotations",[12,763,764],{},"The action adds annotations directly to the migration files in the PR, pointing to the exact line where the migration assumes a schema state that no longer matches reality. Instead of a vague failure, you get a concrete pointer tied to the SQL in question.",[90,766,768],{"id":767},"drift-report-comment","Drift report comment",[12,770,771],{},"The action posts a summary comment on the PR:",[331,773,777],{"className":774,"code":775,"language":776,"meta":233,"style":233},"language-markdown shiki shiki-themes github-light github-dark","## Schema Drift Report\n\n**Database:** production (postgres:\u002F\u002F...@prod-db:5432\u002Fmyapp)\n**Schema:** public\n**Scan time:** 342ms\n\n### Breaking Changes (1)\n\n| Object | Expected | Actual | Impact |\n|--------|----------|--------|--------|\n| `users.email_verified` | NOT EXISTS | `boolean DEFAULT false` | Migration V42 assumes column does not exist and will fail on ADD COLUMN |\n\n### Warnings (2)\n\n| Object | Expected | Actual | Impact |\n|--------|----------|--------|--------|\n| `idx_orders_created_at` | NOT EXISTS | `btree (created_at)` | Untracked index, no migration impact |\n| `payments.processor_ref` | NOT EXISTS | `text` | Untracked column, no migration impact |\n\n**Recommendation:** Resolve the 1 breaking change before merging. Create a migration that accounts for the existing `users.email_verified` column, or remove it from production if it was added in error.\n","markdown",[281,778,779,784,788,793,798,803,807,812,816,821,826,831,835,840,844,848,852,857,862,866],{"__ignoreMap":233},[339,780,781],{"class":341,"line":342},[339,782,783],{},"## Schema Drift Report\n",[339,785,786],{"class":341,"line":234},[339,787,422],{"emptyLinePlaceholder":260},[339,789,790],{"class":341,"line":241},[339,791,792],{},"**Database:** production (postgres:\u002F\u002F...@prod-db:5432\u002Fmyapp)\n",[339,794,795],{"class":341,"line":358},[339,796,797],{},"**Schema:** public\n",[339,799,800],{"class":341,"line":364},[339,801,802],{},"**Scan time:** 342ms\n",[339,804,805],{"class":341,"line":370},[339,806,422],{"emptyLinePlaceholder":260},[339,808,809],{"class":341,"line":456},[339,810,811],{},"### Breaking Changes (1)\n",[339,813,814],{"class":341,"line":464},[339,815,422],{"emptyLinePlaceholder":260},[339,817,818],{"class":341,"line":472},[339,819,820],{},"| Object | Expected | Actual | Impact |\n",[339,822,823],{"class":341,"line":480},[339,824,825],{},"|--------|----------|--------|--------|\n",[339,827,828],{"class":341,"line":485},[339,829,830],{},"| `users.email_verified` | NOT EXISTS | `boolean DEFAULT false` | Migration V42 assumes column does not exist and will fail on ADD COLUMN |\n",[339,832,833],{"class":341,"line":493},[339,834,422],{"emptyLinePlaceholder":260},[339,836,837],{"class":341,"line":501},[339,838,839],{},"### Warnings (2)\n",[339,841,842],{"class":341,"line":512},[339,843,422],{"emptyLinePlaceholder":260},[339,845,846],{"class":341,"line":523},[339,847,820],{},[339,849,850],{"class":341,"line":528},[339,851,825],{},[339,853,854],{"class":341,"line":536},[339,855,856],{},"| `idx_orders_created_at` | NOT EXISTS | `btree (created_at)` | Untracked index, no migration impact |\n",[339,858,859],{"class":341,"line":548},[339,860,861],{},"| `payments.processor_ref` | NOT EXISTS | `text` | Untracked column, no migration impact |\n",[339,863,864],{"class":341,"line":559},[339,865,422],{"emptyLinePlaceholder":260},[339,867,868],{"class":341,"line":564},[339,869,870],{},"**Recommendation:** Resolve the 1 breaking change before merging. Create a migration that accounts for the existing `users.email_verified` column, or remove it from production if it was added in error.\n",[12,872,873],{},"The thing that makes this useful is the separation between \"this will actually break deployment\" and \"this is drift you should probably clean up.\" Not every mismatch needs to block a PR. The dangerous ones absolutely should.",[31,875,877],{"id":876},"step-5-make-it-required-with-branch-protection","Step 5: Make it required with branch protection",[12,879,880],{},"Once you trust the signal, make the check required.",[12,882,883,884,887],{},"Go to your repository Settings → Branches → edit the protection rule for ",[281,885,886],{},"main"," → enable Require status checks to pass before merging → add Detect Schema Drift as a required check.",[12,889,890],{},"After that, PRs with breaking drift cannot be merged until someone resolves it.",[31,892,894],{"id":893},"what-to-do-when-drift-is-detected","What to do when drift is detected",[90,896,898],{"id":897},"the-migration-is-wrong","The migration is wrong",[12,900,901],{},"Sometimes the problem is that the migration assumes a clean state that no longer exists. Make it more defensive:",[331,903,905],{"className":333,"code":904,"language":335,"meta":233,"style":233},"-- Instead of:\nALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;\n\n-- Use:\nDO $$\nBEGIN\n    IF NOT EXISTS (\n        SELECT 1 FROM information_schema.columns\n        WHERE table_name = 'users'\n        AND column_name = 'email_verified'\n    ) THEN\n        ALTER TABLE users\n          ADD COLUMN email_verified boolean DEFAULT false;\n    END IF;\nEND $$;\n",[281,906,907,912,917,921,926,931,936,941,946,951,956,961,966,971,976],{"__ignoreMap":233},[339,908,909],{"class":341,"line":342},[339,910,911],{},"-- Instead of:\n",[339,913,914],{"class":341,"line":234},[339,915,916],{},"ALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;\n",[339,918,919],{"class":341,"line":241},[339,920,422],{"emptyLinePlaceholder":260},[339,922,923],{"class":341,"line":358},[339,924,925],{},"-- Use:\n",[339,927,928],{"class":341,"line":364},[339,929,930],{},"DO $$\n",[339,932,933],{"class":341,"line":370},[339,934,935],{},"BEGIN\n",[339,937,938],{"class":341,"line":456},[339,939,940],{},"    IF NOT EXISTS (\n",[339,942,943],{"class":341,"line":464},[339,944,945],{},"        SELECT 1 FROM information_schema.columns\n",[339,947,948],{"class":341,"line":472},[339,949,950],{},"        WHERE table_name = 'users'\n",[339,952,953],{"class":341,"line":480},[339,954,955],{},"        AND column_name = 'email_verified'\n",[339,957,958],{"class":341,"line":485},[339,959,960],{},"    ) THEN\n",[339,962,963],{"class":341,"line":493},[339,964,965],{},"        ALTER TABLE users\n",[339,967,968],{"class":341,"line":501},[339,969,970],{},"          ADD COLUMN email_verified boolean DEFAULT false;\n",[339,972,973],{"class":341,"line":512},[339,974,975],{},"    END IF;\n",[339,977,978],{"class":341,"line":523},[339,979,980],{},"END $$;\n",[12,982,983],{},"This is especially useful when cleaning up legacy drift across multiple environments that have diverged over time.",[90,985,987],{"id":986},"the-production-change-was-intentional","The production change was intentional",[12,989,990],{},"A DBA added an index to fix a slow query. The change was deliberate but never made it back into versioned migrations. Create a migration that documents it:",[331,992,994],{"className":333,"code":993,"language":335,"meta":233,"style":233},"-- V43__document_existing_index.sql\nCREATE INDEX IF NOT EXISTS idx_orders_created_at\n  ON orders (created_at);\n",[281,995,996,1001,1006],{"__ignoreMap":233},[339,997,998],{"class":341,"line":342},[339,999,1000],{},"-- V43__document_existing_index.sql\n",[339,1002,1003],{"class":341,"line":234},[339,1004,1005],{},"CREATE INDEX IF NOT EXISTS idx_orders_created_at\n",[339,1007,1008],{"class":341,"line":241},[339,1009,1010],{},"  ON orders (created_at);\n",[12,1012,1013],{},"Safe to run whether the object exists already or not. Migration history catches up with reality.",[90,1015,1017],{"id":1016},"the-production-change-was-accidental","The production change was accidental",[12,1019,1020],{},"If the drift came from an unintended manual change, revert it in production and restore alignment with your migration history.",[12,1022,1023],{},"Be careful here. Before removing anything, verify that nothing — no application code, no reporting job, no operational script — started depending on the accidental change.",[31,1025,1027],{"id":1026},"ignoring-known-drift","Ignoring known drift",[12,1029,1030],{},"Some drift is expected and permanent. Monitoring infrastructure, extension-managed objects, things your application does not own. Tell the action to skip them:",[331,1032,1034],{"className":399,"code":1033,"language":401,"meta":233,"style":233},"- name: Run Arcnull Schema Drift Scanner\n  uses: arcnull-hq\u002Fschema-drift-action@v1\n  with:\n    database-url: ${{ secrets.DRIFT_DATABASE_URL }}\n    migration-path: src\u002Fmain\u002Fresources\u002Fdb\u002Fmigration\n    fail-on: breaking\n    ignore-patterns: \"pg_stat_%,pganalyze_%,idx_monitoring_%\"\n",[281,1035,1036,1047,1056,1063,1072,1081,1090],{"__ignoreMap":233},[339,1037,1038,1041,1043,1045],{"class":341,"line":342},[339,1039,1040],{"class":412},"- ",[339,1042,409],{"class":408},[339,1044,413],{"class":412},[339,1046,573],{"class":416},[339,1048,1049,1052,1054],{"class":341,"line":234},[339,1050,1051],{"class":408},"  uses",[339,1053,413],{"class":412},[339,1055,583],{"class":416},[339,1057,1058,1061],{"class":341,"line":241},[339,1059,1060],{"class":408},"  with",[339,1062,431],{"class":412},[339,1064,1065,1068,1070],{"class":341,"line":358},[339,1066,1067],{"class":408},"    database-url",[339,1069,413],{"class":412},[339,1071,602],{"class":416},[339,1073,1074,1077,1079],{"class":341,"line":364},[339,1075,1076],{"class":408},"    migration-path",[339,1078,413],{"class":412},[339,1080,613],{"class":416},[339,1082,1083,1086,1088],{"class":341,"line":370},[339,1084,1085],{"class":408},"    fail-on",[339,1087,413],{"class":412},[339,1089,635],{"class":416},[339,1091,1092,1095,1097],{"class":341,"line":456},[339,1093,1094],{"class":408},"    ignore-patterns",[339,1096,413],{"class":412},[339,1098,1099],{"class":416},"\"pg_stat_%,pganalyze_%,idx_monitoring_%\"\n",[12,1101,1102,1103,1106,1107,1110,1111,385],{},"Note: patterns use SQL ",[281,1104,1105],{},"LIKE"," syntax, not glob syntax. Use ",[281,1108,1109],{},"%"," as the wildcard, not ",[281,1112,1113],{},"*",[12,1115,1116],{},"Use ignore lists sparingly. They grow. Every pattern you add is one more place drift can hide undetected.",[31,1118,1120],{"id":1119},"common-issues","Common issues",[12,1122,1123,1126],{},[166,1124,1125],{},"Action times out connecting to the database","\nYour database firewall may be blocking GitHub Actions IP ranges. Add the GitHub Actions IP ranges to your database allowlist, or use a self-hosted runner inside your VPC.",[12,1128,1129,1132],{},[166,1130,1131],{},"Action reports drift that was just resolved","\nThe action scans the live database at PR time. If drift was fixed after the PR was opened, close and reopen the PR to trigger a fresh scan.",[12,1134,1135,1138,1139,1141,1142,1144,1145,1147,1148,1151,1152,1155],{},[166,1136,1137],{},"Patterns not matching in ignore-patterns","\nUse ",[281,1140,1109],{}," not ",[281,1143,1113],{},". SQL ",[281,1146,1105],{}," syntax, not glob syntax. ",[281,1149,1150],{},"pg_stat_%"," works. ",[281,1153,1154],{},"pg_stat_*"," does not.",[31,1157,1159],{"id":1158},"wrapping-up","Wrapping up",[12,1161,1162],{},"Schema drift checks feel optional right up until the day they save you from a bad production migration. Catching drift in a PR is a lot cheaper than discovering it mid-deploy, and considerably less stressful than debugging a migration failure at 2 AM.",[12,1164,1165,1166,1169,1170,1172],{},"The action handles the tedious work — reading the catalog, comparing expected versus actual state, reporting the differences where your team already works. A sensible way to roll it out is to start with ",[281,1167,1168],{},"fail-on: none",", clean up what you find, and then move to ",[281,1171,756],{}," once the noise is under control.",[12,1174,1175],{},"That gives you a smoother adoption path and a much better chance of making schema checks something the team actually keeps enabled.",[12,1177,1178,1179,1183],{},"For continuous monitoring beyond CI — scheduled scans, Slack alerts, and historical drift tracking — ",[227,1180,1182],{"href":1181},"\u002Fproducts\u002Fdrift-scanner","Drift Scanner"," handles all of that.",[1185,1186,1187],"style",{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .s9eBZ, html code.shiki .s9eBZ{--shiki-default:#22863A;--shiki-dark:#85E89D}html pre.shiki code .sVt8B, html code.shiki .sVt8B{--shiki-default:#24292E;--shiki-dark:#E1E4E8}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}",{"title":233,"searchDepth":234,"depth":234,"links":1189},[1190,1191,1192,1193,1199,1204,1205,1210,1211,1212],{"id":297,"depth":234,"text":298},{"id":321,"depth":234,"text":322},{"id":388,"depth":234,"text":389},{"id":644,"depth":234,"text":645,"children":1194},[1195,1196,1197],{"id":648,"depth":241,"text":649},{"id":664,"depth":241,"text":665},{"id":709,"depth":241,"text":1198},"Understanding fail-on",{"id":742,"depth":234,"text":743,"children":1200},[1201,1202,1203],{"id":749,"depth":241,"text":750},{"id":760,"depth":241,"text":761},{"id":767,"depth":241,"text":768},{"id":876,"depth":234,"text":877},{"id":893,"depth":234,"text":894,"children":1206},[1207,1208,1209],{"id":897,"depth":241,"text":898},{"id":986,"depth":241,"text":987},{"id":1016,"depth":241,"text":1017},{"id":1026,"depth":234,"text":1027},{"id":1119,"depth":234,"text":1120},{"id":1158,"depth":234,"text":1159},"A practical walkthrough for catching unapproved PostgreSQL schema changes in CI before they make it into production.",[1215,1216,1217],"postgresql schema github action","schema drift ci cd","database migration github action",{},"\u002Fblog\u002Fdetect-postgresql-schema-changes-github-action","2026-05-05","7",{"title":271,"description":1213},"detect-postgresql-schema-changes-github-action","blog\u002Fdetect-postgresql-schema-changes-github-action","KO3ZMEM6L4JyjIxvR_FRBjcPKGGcBR8aGwntIKJRSnw",{"id":1227,"title":1228,"author":7,"body":1229,"description":1850,"extension":251,"keywords":1851,"meta":1855,"navigation":260,"path":1856,"publishedAt":1857,"readTime":1858,"seo":1859,"slug":1860,"stem":1861,"updatedAt":267,"__hash__":1862},"blog\u002Fblog\u002Feu-ai-act-2026-java-enterprise-teams.md","EU AI Act 2026: what Java teams should be preparing for now",{"type":9,"value":1230,"toc":1833},[1231,1238,1241,1244,1247,1249,1255,1258,1261,1265,1268,1271,1275,1278,1282,1285,1289,1292,1296,1299,1303,1306,1309,1313,1316,1322,1328,1334,1340,1346,1350,1353,1356,1359,1376,1379,1382,1386,1389,1392,1418,1421,1424,1601,1609,1737,1744,1748,1751,1754,1757,1760,1763,1769,1773,1780,1783,1786,1790,1796,1802,1808,1814,1818,1821,1824,1827,1830],[12,1232,1233,1234,1237],{},"The EU AI Act reaches full enforcement on ",[166,1235,1236],{},"August 2, 2026",". For Java enterprise teams, that is not a distant legal milestone. It is a near-term engineering deadline.",[12,1239,1240],{},"If your company builds AI features, plugs large language models into business workflows, or uses AI to support decisions that affect people, this law may already apply to you. And if you serve EU customers or process the data of EU residents, it can still apply even if your company is based elsewhere.",[12,1242,1243],{},"That means the next few months matter. Teams need more than legal awareness. They need working controls: clear system inventories, reliable audit logs, stronger access controls, risk tracking, and documentation that can stand up to review.",[12,1245,1246],{},"This is where many organizations are still behind.",[31,1248,197],{"id":196},[12,1250,1251,1252,1254],{},"The EU AI Act follows a phased rollout. Some bans on unacceptable-risk AI practices already took effect earlier, and obligations for general-purpose AI models have also started. But ",[166,1253,1236],{}," is the point when the full framework becomes enforceable for high-risk systems — including conformity assessments, documentation, governance, and penalties.",[12,1256,1257],{},"That is the date Java teams should be planning against.",[12,1259,1260],{},"Waiting until the summer to sort this out will be too late for most enterprise environments. AI is rarely isolated in one place. It is spread across services, internal tools, third-party APIs, data pipelines, dashboards, and decision flows. Pulling that together at the last minute is hard.",[31,1262,1264],{"id":1263},"what-counts-as-a-high-risk-ai-system","What counts as a high-risk AI system",[12,1266,1267],{},"A lot of teams assume \"high-risk\" only applies to very obvious cases. In reality, the threshold can be lower than people think.",[12,1269,1270],{},"For enterprise teams, the biggest exposure often shows up in systems that influence important human outcomes.",[90,1272,1274],{"id":1273},"employment-and-workforce-tools","Employment and workforce tools",[12,1276,1277],{},"If AI is used to screen CVs, rank candidates, support interview scoring, predict attrition, or influence promotion or termination decisions, it is likely in high-risk territory.",[90,1279,1281],{"id":1280},"financial-services-and-essential-access","Financial services and essential access",[12,1283,1284],{},"AI used for credit scoring, loan approval, insurance pricing, or benefit decisions can fall under high-risk obligations. If your Java backend calls a model and the result helps decide whether someone gets access to a financial product, that matters.",[90,1286,1288],{"id":1287},"critical-infrastructure","Critical infrastructure",[12,1290,1291],{},"Systems supporting energy, water, gas, heating, or digital infrastructure can also be covered. Predictive maintenance, operational balancing, and automated capacity decisions may create compliance exposure if AI is involved.",[90,1293,1295],{"id":1294},"education-and-training","Education and training",[12,1297,1298],{},"Admissions, grading, exam monitoring, and AI-driven learning decisions may also qualify.",[90,1300,1302],{"id":1301},"law-enforcement-and-immigration","Law enforcement and immigration",[12,1304,1305],{},"These areas carry even stricter scrutiny.",[12,1307,1308],{},"A good rule of thumb: if the AI system meaningfully affects a person's opportunities, services, rights, or treatment, do not assume it is low-risk.",[31,1310,1312],{"id":1311},"what-the-law-means-in-practical-engineering-terms","What the law means in practical engineering terms",[12,1314,1315],{},"The regulation is written in legal language, but the work it creates is technical.",[12,1317,1318,1321],{},[166,1319,1320],{},"Risk management cannot be a one-time exercise."," You need a process that continuously identifies, evaluates, and reduces risk throughout the life of the system. In practice, that means a living risk register tied to real AI components, not a static PDF forgotten after launch.",[12,1323,1324,1327],{},[166,1325,1326],{},"Data governance matters more than many teams expect."," It is not just about model training data. It can also include fine-tuning datasets, prompt templates, retrieval pipelines, and the knowledge bases feeding RAG systems. If those inputs are incomplete, biased, or poorly governed, the risk does not disappear just because the model came from a third-party provider.",[12,1329,1330,1333],{},[166,1331,1332],{},"Technical documentation must be real."," Authorities need to understand what the system does, what data it uses, how it was tested, and what controls are in place. That means documenting the full path from input to inference to action.",[12,1335,1336,1339],{},[166,1337,1338],{},"Logging is not optional."," If an AI-assisted decision causes harm, regulators will want to know what happened, what data was involved, which model responded, what tools were called, and whether a human reviewed the outcome.",[12,1341,1342,1345],{},[166,1343,1344],{},"Security now includes AI-specific threats."," For teams using LLMs, agent frameworks, or tool-use protocols, that means thinking about prompt injection, abusive tool calls, and unauthorized actions.",[31,1347,1349],{"id":1348},"why-java-teams-are-especially-exposed","Why Java teams are especially exposed",[12,1351,1352],{},"Java teams often work inside large, distributed enterprise environments. That creates a specific compliance problem: nobody has the full picture.",[12,1354,1355],{},"A recommendation service calls one model. A support workflow uses another. A fraud pipeline uses a separate inference endpoint. Each team built their own integration, with their own client, their own logging style, and their own assumptions.",[12,1357,1358],{},"The result is familiar:",[64,1360,1361,1364,1367,1370,1373],{},[67,1362,1363],{},"No central inventory of AI usage",[67,1365,1366],{},"No shared audit standard",[67,1368,1369],{},"No consistent model documentation",[67,1371,1372],{},"No unified access control for AI tools",[67,1374,1375],{},"No reliable record of which AI output influenced which business decision",[12,1377,1378],{},"This is especially common in Spring Boot environments where teams use standard HTTP clients to call model APIs. The request goes out, the response comes back, a decision is made, and only fragments of that journey end up in logs.",[12,1380,1381],{},"That might be enough for debugging. It is not enough for compliance.",[31,1383,1385],{"id":1384},"audit-logging-is-where-the-real-work-starts","Audit logging is where the real work starts",[12,1387,1388],{},"If you do one thing first, make it this.",[12,1390,1391],{},"For high-risk systems, your audit trail should be strong enough to answer these questions clearly:",[64,1393,1394,1397,1400,1403,1406,1409,1412,1415],{},[67,1395,1396],{},"What input was sent?",[67,1398,1399],{},"Which model and version responded?",[67,1401,1402],{},"What output came back?",[67,1404,1405],{},"Were tools called?",[67,1407,1408],{},"What business action followed?",[67,1410,1411],{},"Were any risk flags raised?",[67,1413,1414],{},"Did a human review the result?",[67,1416,1417],{},"Can you prove the log was not altered later?",[12,1419,1420],{},"This is why append-only logging matters. A proper audit log should not be easy to rewrite after the fact. A tamper-evident structure using hash chaining gives you much stronger integrity than ordinary application logs.",[12,1422,1423],{},"Here is a table schema that satisfies Article 12 requirements:",[1425,1426,1427,1443],"table",{},[1428,1429,1430],"thead",{},[1431,1432,1433,1437,1440],"tr",{},[1434,1435,1436],"th",{},"Field",[1434,1438,1439],{},"Type",[1434,1441,1442],{},"Description",[1444,1445,1446,1458,1469,1480,1490,1500,1511,1521,1531,1541,1551,1561,1571,1581,1591],"tbody",{},[1431,1447,1448,1452,1455],{},[1449,1450,1451],"td",{},"event_id",[1449,1453,1454],{},"UUID",[1449,1456,1457],{},"Unique identifier for this entry",[1431,1459,1460,1463,1466],{},[1449,1461,1462],{},"created_at",[1449,1464,1465],{},"TIMESTAMPTZ",[1449,1467,1468],{},"When the event occurred",[1431,1470,1471,1474,1477],{},[1449,1472,1473],{},"system_id",[1449,1475,1476],{},"TEXT",[1449,1478,1479],{},"Identifier for the AI system",[1431,1481,1482,1485,1487],{},[1449,1483,1484],{},"session_id",[1449,1486,1454],{},[1449,1488,1489],{},"Session or conversation ID",[1431,1491,1492,1495,1497],{},[1449,1493,1494],{},"event_type",[1449,1496,1476],{},[1449,1498,1499],{},"inference, tool_call, decision, review",[1431,1501,1502,1505,1508],{},[1449,1503,1504],{},"input_data",[1449,1506,1507],{},"JSONB",[1449,1509,1510],{},"Input sent to the AI system",[1431,1512,1513,1516,1518],{},[1449,1514,1515],{},"output_data",[1449,1517,1507],{},[1449,1519,1520],{},"Response from the AI system",[1431,1522,1523,1526,1528],{},[1449,1524,1525],{},"model_id",[1449,1527,1476],{},[1449,1529,1530],{},"Model identifier and version",[1431,1532,1533,1536,1538],{},[1449,1534,1535],{},"tools_called",[1449,1537,1507],{},[1449,1539,1540],{},"Array of tools invoked",[1431,1542,1543,1546,1548],{},[1449,1544,1545],{},"decision_made",[1449,1547,1507],{},[1449,1549,1550],{},"Business decision influenced by AI",[1431,1552,1553,1556,1558],{},[1449,1554,1555],{},"risk_flags",[1449,1557,1507],{},[1449,1559,1560],{},"Risk indicators detected",[1431,1562,1563,1566,1568],{},[1449,1564,1565],{},"human_reviewer",[1449,1567,1476],{},[1449,1569,1570],{},"Identity of reviewer if applicable",[1431,1572,1573,1576,1578],{},[1449,1574,1575],{},"review_outcome",[1449,1577,1476],{},[1449,1579,1580],{},"approved, rejected, or modified",[1431,1582,1583,1586,1588],{},[1449,1584,1585],{},"prev_hash",[1449,1587,1476],{},[1449,1589,1590],{},"SHA-256 hash of previous entry",[1431,1592,1593,1596,1598],{},[1449,1594,1595],{},"entry_hash",[1449,1597,1476],{},[1449,1599,1600],{},"SHA-256 hash of this entry",[12,1602,287,1603,1605,1606,1608],{},[281,1604,1585],{}," and ",[281,1607,1595],{}," fields create a hash chain. Each entry includes the hash of the previous one, so any modification breaks the chain forward.",[331,1610,1612],{"className":333,"code":1611,"language":335,"meta":233,"style":233},"CREATE TABLE ai_audit_log (\n    event_id      UUID PRIMARY KEY DEFAULT gen_random_uuid(),\n    created_at    TIMESTAMPTZ NOT NULL DEFAULT now(),\n    system_id     TEXT NOT NULL,\n    session_id    UUID NOT NULL,\n    event_type    TEXT NOT NULL,\n    input_data    JSONB NOT NULL,\n    output_data   JSONB,\n    model_id      TEXT NOT NULL,\n    tools_called  JSONB DEFAULT '[]'::jsonb,\n    decision_made JSONB,\n    risk_flags    JSONB DEFAULT '[]'::jsonb,\n    human_reviewer TEXT,\n    review_outcome TEXT,\n    prev_hash     TEXT NOT NULL,\n    entry_hash    TEXT NOT NULL\n);\n\n-- Append-only: application user cannot update or delete\nREVOKE UPDATE, DELETE ON ai_audit_log FROM app_user;\n\nCREATE INDEX idx_audit_system_time\n  ON ai_audit_log (system_id, created_at);\nCREATE INDEX idx_audit_session\n  ON ai_audit_log (session_id);\n",[281,1613,1614,1619,1624,1629,1634,1639,1644,1649,1654,1659,1664,1669,1674,1679,1684,1689,1694,1699,1703,1708,1713,1717,1722,1727,1732],{"__ignoreMap":233},[339,1615,1616],{"class":341,"line":342},[339,1617,1618],{},"CREATE TABLE ai_audit_log (\n",[339,1620,1621],{"class":341,"line":234},[339,1622,1623],{},"    event_id      UUID PRIMARY KEY DEFAULT gen_random_uuid(),\n",[339,1625,1626],{"class":341,"line":241},[339,1627,1628],{},"    created_at    TIMESTAMPTZ NOT NULL DEFAULT now(),\n",[339,1630,1631],{"class":341,"line":358},[339,1632,1633],{},"    system_id     TEXT NOT NULL,\n",[339,1635,1636],{"class":341,"line":364},[339,1637,1638],{},"    session_id    UUID NOT NULL,\n",[339,1640,1641],{"class":341,"line":370},[339,1642,1643],{},"    event_type    TEXT NOT NULL,\n",[339,1645,1646],{"class":341,"line":456},[339,1647,1648],{},"    input_data    JSONB NOT NULL,\n",[339,1650,1651],{"class":341,"line":464},[339,1652,1653],{},"    output_data   JSONB,\n",[339,1655,1656],{"class":341,"line":472},[339,1657,1658],{},"    model_id      TEXT NOT NULL,\n",[339,1660,1661],{"class":341,"line":480},[339,1662,1663],{},"    tools_called  JSONB DEFAULT '[]'::jsonb,\n",[339,1665,1666],{"class":341,"line":485},[339,1667,1668],{},"    decision_made JSONB,\n",[339,1670,1671],{"class":341,"line":493},[339,1672,1673],{},"    risk_flags    JSONB DEFAULT '[]'::jsonb,\n",[339,1675,1676],{"class":341,"line":501},[339,1677,1678],{},"    human_reviewer TEXT,\n",[339,1680,1681],{"class":341,"line":512},[339,1682,1683],{},"    review_outcome TEXT,\n",[339,1685,1686],{"class":341,"line":523},[339,1687,1688],{},"    prev_hash     TEXT NOT NULL,\n",[339,1690,1691],{"class":341,"line":528},[339,1692,1693],{},"    entry_hash    TEXT NOT NULL\n",[339,1695,1696],{"class":341,"line":536},[339,1697,1698],{},");\n",[339,1700,1701],{"class":341,"line":548},[339,1702,422],{"emptyLinePlaceholder":260},[339,1704,1705],{"class":341,"line":559},[339,1706,1707],{},"-- Append-only: application user cannot update or delete\n",[339,1709,1710],{"class":341,"line":564},[339,1711,1712],{},"REVOKE UPDATE, DELETE ON ai_audit_log FROM app_user;\n",[339,1714,1715],{"class":341,"line":576},[339,1716,422],{"emptyLinePlaceholder":260},[339,1718,1719],{"class":341,"line":586},[339,1720,1721],{},"CREATE INDEX idx_audit_system_time\n",[339,1723,1724],{"class":341,"line":594},[339,1725,1726],{},"  ON ai_audit_log (system_id, created_at);\n",[339,1728,1729],{"class":341,"line":605},[339,1730,1731],{},"CREATE INDEX idx_audit_session\n",[339,1733,1734],{"class":341,"line":616},[339,1735,1736],{},"  ON ai_audit_log (session_id);\n",[12,1738,1739,1740,1743],{},"For Java teams, this is achievable with standard tooling. A ",[281,1741,1742],{},"synchronized"," service method maintains the running hash chain, each new entry hashes the previous one, and a verification method can replay the chain to confirm integrity at any point.",[31,1745,1747],{"id":1746},"mcp-and-tool-use-raise-the-stakes","MCP and tool use raise the stakes",[12,1749,1750],{},"If your AI systems can call tools, query internal services, read data, or trigger actions, the risk increases quickly.",[12,1752,1753],{},"Tool calls often happen below the main application flow. Access checks are inconsistent. Some actions are barely visible. In the worst case, an AI agent can touch sensitive systems without a complete, centralized audit trail.",[12,1755,1756],{},"That is a legal problem, but also an operational and security problem.",[12,1758,1759],{},"Tool-use systems need a control layer that can authenticate the caller, authorize tool access, enforce policies, log every call, flag suspicious patterns, rate-limit risky behavior, and require human review for sensitive actions.",[12,1761,1762],{},"Without that layer, teams are depending on scattered controls in places never designed for regulatory-grade oversight.",[12,1764,1765,1766,1768],{},"If you are building this kind of governance layer, ",[227,1767,230],{"href":229}," is the direction we are exploring at Arcnull — a proxy architecture with the audit log and policy engine built in, so you do not have to build it from scratch.",[31,1770,1772],{"id":1771},"what-non-compliance-could-cost","What non-compliance could cost",[12,1774,1775,1776,1779],{},"The penalty structure is serious. The most severe tier can reach ",[166,1777,1778],{},"7% of global annual revenue",". High-risk non-compliance sits in a lower tier but is still significant.",[12,1781,1782],{},"For large companies, the revenue-based calculation is the real concern. And financial penalties are only part of the picture — remediation costs, legal fees, delayed launches, reputational harm, and pressure to withdraw non-compliant systems from the EU market all follow.",[12,1784,1785],{},"For SaaS businesses with EU customers, that is not a theoretical concern.",[31,1787,1789],{"id":1788},"a-realistic-four-month-plan","A realistic four-month plan",[12,1791,1792,1795],{},[166,1793,1794],{},"Month 1 — Find and classify your AI systems."," Create an inventory of every service, workflow, and product feature that uses AI or machine learning. Classify each against the high-risk categories.",[12,1797,1798,1801],{},[166,1799,1800],{},"Month 2 — Build the audit foundation."," Implement structured, append-only logging for high-risk systems. Capture inputs, outputs, model metadata, tool calls, decisions, and reviewer actions.",[12,1803,1804,1807],{},[166,1805,1806],{},"Month 3 — Tighten governance."," Put proper access control around AI tool use. Add human review where needed. Monitor for anomalies, misuse, and unsafe behavior.",[12,1809,1810,1813],{},[166,1811,1812],{},"Month 4 — Finish documentation and test it."," Complete technical documentation and risk assessments. Do not assume your records are enough — test whether someone outside engineering can follow the system and understand its controls.",[31,1815,1817],{"id":1816},"the-bigger-point","The bigger point",[12,1819,1820],{},"The EU AI Act is not just another legal checklist. For Java teams it is a systems design challenge.",[12,1822,1823],{},"The organizations that do well here will be the ones that treat AI governance as production infrastructure. They will know where AI is being used, what it is allowed to do, how its actions are logged, and how risky decisions are reviewed.",[12,1825,1826],{},"Start with visibility. Then lock down logging. Then add governance around tool use.",[12,1828,1829],{},"That work helps with compliance, but it also makes your AI stack safer, clearer, and easier to operate.",[1185,1831,1832],{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":233,"searchDepth":234,"depth":234,"links":1834},[1835,1836,1843,1844,1845,1846,1847,1848,1849],{"id":196,"depth":234,"text":197},{"id":1263,"depth":234,"text":1264,"children":1837},[1838,1839,1840,1841,1842],{"id":1273,"depth":241,"text":1274},{"id":1280,"depth":241,"text":1281},{"id":1287,"depth":241,"text":1288},{"id":1294,"depth":241,"text":1295},{"id":1301,"depth":241,"text":1302},{"id":1311,"depth":234,"text":1312},{"id":1348,"depth":234,"text":1349},{"id":1384,"depth":234,"text":1385},{"id":1746,"depth":234,"text":1747},{"id":1771,"depth":234,"text":1772},{"id":1788,"depth":234,"text":1789},{"id":1816,"depth":234,"text":1817},"A closer look at the controls, auditability, and operational safeguards teams may need as AI governance requirements get stricter.",[1852,1853,258,1854,254],"eu ai act compliance","eu ai act java","high risk ai systems",{},"\u002Fblog\u002Feu-ai-act-2026-java-enterprise-teams","2026-04-28","10",{"title":1228,"description":1850},"eu-ai-act-2026-java-enterprise-teams","blog\u002Feu-ai-act-2026-java-enterprise-teams","X0HzFhcWdNlbztEJ3M_AZ6dZH41ZlTs9OwDlZqFNK8Y",{"id":1864,"title":1865,"author":7,"body":1866,"description":3136,"extension":251,"keywords":3137,"meta":3140,"navigation":260,"path":3141,"publishedAt":3142,"readTime":3143,"seo":3144,"slug":3145,"stem":3146,"updatedAt":267,"__hash__":3147},"blog\u002Fblog\u002Fpg-catalog-vs-pg-dump-schema-snapshots.md","pg_catalog vs pg_dump for schema snapshots",{"type":9,"value":1867,"toc":3121},[1868,1875,1878,1887,1892,1898,1905,1911,1914,1920,1923,1976,1979,2027,2030,2033,2037,2040,2088,2101,2104,2107,2113,2118,2124,2127,2133,2137,2140,2244,2248,2251,2346,2349,2353,2356,2412,2415,2470,2474,2477,2480,2492,2495,2498,2501,2621,2625,2628,2631,2698,2703,2706,2710,2720,3032,3035,3039,3045,3051,3065,3070,3084,3090,3094,3100,3106,3109,3115,3118],[12,1869,1870,1871,1874],{},"If you have ever tried to detect PostgreSQL schema drift by diffing two ",[281,1872,1873],{},"pg_dump"," outputs, you have probably run into the same frustrating problem: the diff says the schema changed, but nothing actually did.",[12,1876,1877],{},"A column seems to have moved. A constraint looks deleted and re-added. An index appears different for no real reason. The output changes, even though the schema is logically the same.",[12,1879,1880,1881,1883,1884,1886],{},"That is not really a bug in ",[281,1882,1873],{},". It is a side effect of what ",[281,1885,1873],{}," was built to do.",[12,1888,1889,1891],{},[281,1890,1873],{}," is great for backup and restore. It is not great for deterministic comparison.",[12,1893,1894,1895,1897],{},"If your goal is schema drift detection, querying ",[281,1896,328],{}," directly is usually the more reliable approach. With explicit ordering, you can produce stable snapshots that are much easier to diff, hash, and compare over time.",[31,1899,1901,1902,1904],{"id":1900},"why-pg_dump-creates-noisy-diffs","Why ",[281,1903,1873],{}," creates noisy diffs",[12,1906,1907,1908,1910],{},"The key issue is that ",[281,1909,1873],{}," is designed to recreate a database, not to produce stable text output for comparison.",[12,1912,1913],{},"That distinction matters.",[12,1915,1916,1917,1919],{},"When ",[281,1918,1873],{}," generates schema-only SQL, its job is to emit valid DDL in an order that works for restore. It does not promise that objects will always appear in the same order across runs. In simple databases, you might get identical output twice in a row. But as schemas grow more complex, that becomes less reliable.",[12,1921,1922],{},"Take a small example:",[331,1924,1926],{"className":333,"code":1925,"language":335,"meta":233,"style":233},"CREATE TABLE orders (\n    id bigserial PRIMARY KEY,\n    customer_id bigint NOT NULL,\n    total_cents integer NOT NULL,\n    status text NOT NULL DEFAULT 'pending',\n    created_at timestamptz NOT NULL DEFAULT now()\n);\n\nCREATE INDEX idx_orders_customer ON orders (customer_id);\nCREATE INDEX idx_orders_status ON orders (status);\n",[281,1927,1928,1933,1938,1943,1948,1953,1958,1962,1966,1971],{"__ignoreMap":233},[339,1929,1930],{"class":341,"line":342},[339,1931,1932],{},"CREATE TABLE orders (\n",[339,1934,1935],{"class":341,"line":234},[339,1936,1937],{},"    id bigserial PRIMARY KEY,\n",[339,1939,1940],{"class":341,"line":241},[339,1941,1942],{},"    customer_id bigint NOT NULL,\n",[339,1944,1945],{"class":341,"line":358},[339,1946,1947],{},"    total_cents integer NOT NULL,\n",[339,1949,1950],{"class":341,"line":364},[339,1951,1952],{},"    status text NOT NULL DEFAULT 'pending',\n",[339,1954,1955],{"class":341,"line":370},[339,1956,1957],{},"    created_at timestamptz NOT NULL DEFAULT now()\n",[339,1959,1960],{"class":341,"line":456},[339,1961,1698],{},[339,1963,1964],{"class":341,"line":464},[339,1965,422],{"emptyLinePlaceholder":260},[339,1967,1968],{"class":341,"line":472},[339,1969,1970],{},"CREATE INDEX idx_orders_customer ON orders (customer_id);\n",[339,1972,1973],{"class":341,"line":480},[339,1974,1975],{},"CREATE INDEX idx_orders_status ON orders (status);\n",[12,1977,1978],{},"Now run:",[331,1980,1984],{"className":1981,"code":1982,"language":1983,"meta":233,"style":233},"language-bash shiki shiki-themes github-light github-dark","pg_dump --schema-only mydb > dump1.sql\npg_dump --schema-only mydb > dump2.sql\ndiff dump1.sql dump2.sql\n","bash",[281,1985,1986,2004,2017],{"__ignoreMap":233},[339,1987,1988,1991,1994,1997,2001],{"class":341,"line":342},[339,1989,1873],{"class":1990},"sScJk",[339,1992,1993],{"class":427}," --schema-only",[339,1995,1996],{"class":416}," mydb",[339,1998,2000],{"class":1999},"szBVR"," >",[339,2002,2003],{"class":416}," dump1.sql\n",[339,2005,2006,2008,2010,2012,2014],{"class":341,"line":234},[339,2007,1873],{"class":1990},[339,2009,1993],{"class":427},[339,2011,1996],{"class":416},[339,2013,2000],{"class":1999},[339,2015,2016],{"class":416}," dump2.sql\n",[339,2018,2019,2022,2025],{"class":341,"line":241},[339,2020,2021],{"class":1990},"diff",[339,2023,2024],{"class":416}," dump1.sql",[339,2026,2016],{"class":416},[12,2028,2029],{},"Sometimes you will get no diff. Sometimes you will. The more tables, foreign keys, indexes, and constraints you add, the more likely it is that ordering differences start to show up.",[12,2031,2032],{},"This gets worse when you compare environments. Production has years of history behind it. Staging may have been recreated last week. Even if the logical schema is the same, the catalog layout underneath may not be. That can be enough to produce different output ordering.",[31,2034,2036],{"id":2035},"a-common-failure-mode","A common failure mode",[12,2038,2039],{},"Imagine a table with multiple check constraints:",[331,2041,2043],{"className":333,"code":2042,"language":335,"meta":233,"style":233},"CREATE TABLE payments (\n    id bigserial PRIMARY KEY,\n    amount_cents integer NOT NULL,\n    currency char(3) NOT NULL,\n    status text NOT NULL,\n    CONSTRAINT chk_amount CHECK (amount_cents > 0),\n    CONSTRAINT chk_currency CHECK (currency IN ('USD', 'EUR', 'GBP')),\n    CONSTRAINT chk_status CHECK (status IN ('pending', 'processed', 'failed'))\n);\n",[281,2044,2045,2050,2054,2059,2064,2069,2074,2079,2084],{"__ignoreMap":233},[339,2046,2047],{"class":341,"line":342},[339,2048,2049],{},"CREATE TABLE payments (\n",[339,2051,2052],{"class":341,"line":234},[339,2053,1937],{},[339,2055,2056],{"class":341,"line":241},[339,2057,2058],{},"    amount_cents integer NOT NULL,\n",[339,2060,2061],{"class":341,"line":358},[339,2062,2063],{},"    currency char(3) NOT NULL,\n",[339,2065,2066],{"class":341,"line":364},[339,2067,2068],{},"    status text NOT NULL,\n",[339,2070,2071],{"class":341,"line":370},[339,2072,2073],{},"    CONSTRAINT chk_amount CHECK (amount_cents > 0),\n",[339,2075,2076],{"class":341,"line":456},[339,2077,2078],{},"    CONSTRAINT chk_currency CHECK (currency IN ('USD', 'EUR', 'GBP')),\n",[339,2080,2081],{"class":341,"line":464},[339,2082,2083],{},"    CONSTRAINT chk_status CHECK (status IN ('pending', 'processed', 'failed'))\n",[339,2085,2086],{"class":341,"line":472},[339,2087,1698],{},[12,2089,2090,2091,690,2094,690,2097,2100],{},"In one dump, the constraints may appear in this order: ",[281,2092,2093],{},"chk_amount",[281,2095,2096],{},"chk_currency",[281,2098,2099],{},"chk_status",". In another, they may appear differently.",[12,2102,2103],{},"A text diff will make that look like change, even though the schema has not actually changed at all.",[12,2105,2106],{},"Multiply that across hundreds of tables and constraints, and you end up with pages of noise. The real drift signal gets buried inside false positives.",[31,2108,1901,2110,2112],{"id":2109},"why-pg_catalog-works-better",[281,2111,328],{}," works better",[12,2114,2115,2117],{},[281,2116,328],{}," is PostgreSQL's system catalog. It stores metadata about tables, columns, constraints, indexes, functions, types, and more.",[12,2119,2120,2121,385],{},"The advantage is simple: you can query it directly and apply your own ",[281,2122,2123],{},"ORDER BY",[12,2125,2126],{},"That gives you deterministic output.",[12,2128,2129,2130,2132],{},"If the schema has not changed, the query result will come back in the same order every time. That makes it much better for drift detection than comparing raw ",[281,2131,1873],{}," output.",[31,2134,2136],{"id":2135},"core-catalog-tables-worth-querying","Core catalog tables worth querying",[12,2138,2139],{},"For schema snapshots, these are the most useful catalog tables:",[1425,2141,2142,2152],{},[1428,2143,2144],{},[1431,2145,2146,2149],{},[1434,2147,2148],{},"Catalog table",[1434,2150,2151],{},"What it contains",[1444,2153,2154,2164,2174,2184,2194,2204,2214,2224,2234],{},[1431,2155,2156,2161],{},[1449,2157,2158],{},[281,2159,2160],{},"pg_namespace",[1449,2162,2163],{},"Schemas",[1431,2165,2166,2171],{},[1449,2167,2168],{},[281,2169,2170],{},"pg_class",[1449,2172,2173],{},"Tables, views, indexes, sequences",[1431,2175,2176,2181],{},[1449,2177,2178],{},[281,2179,2180],{},"pg_attribute",[1449,2182,2183],{},"Columns",[1431,2185,2186,2191],{},[1449,2187,2188],{},[281,2189,2190],{},"pg_constraint",[1449,2192,2193],{},"Primary keys, foreign keys, unique and check constraints",[1431,2195,2196,2201],{},[1449,2197,2198],{},[281,2199,2200],{},"pg_index",[1449,2202,2203],{},"Index metadata",[1431,2205,2206,2211],{},[1449,2207,2208],{},[281,2209,2210],{},"pg_proc",[1449,2212,2213],{},"Functions and procedures",[1431,2215,2216,2221],{},[1449,2217,2218],{},[281,2219,2220],{},"pg_trigger",[1449,2222,2223],{},"Triggers",[1431,2225,2226,2231],{},[1449,2227,2228],{},[281,2229,2230],{},"pg_extension",[1449,2232,2233],{},"Installed extensions",[1431,2235,2236,2241],{},[1449,2237,2238],{},[281,2239,2240],{},"pg_type",[1449,2242,2243],{},"Custom types and enums",[31,2245,2247],{"id":2246},"deterministic-column-snapshot","Deterministic column snapshot",[12,2249,2250],{},"A column snapshot query looks like this:",[331,2252,2254],{"className":333,"code":2253,"language":335,"meta":233,"style":233},"SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    a.attname AS column_name,\n    a.attnum AS ordinal_position,\n    pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n    a.attnotnull AS is_not_null,\n    pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\nFROM pg_catalog.pg_attribute a\nJOIN pg_catalog.pg_class c ON a.attrelid = c.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nLEFT JOIN pg_catalog.pg_attrdef d\n    ON a.attrelid = d.adrelid AND a.attnum = d.adnum\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n  AND c.relkind IN ('r', 'p')\n  AND a.attnum > 0\n  AND NOT a.attisdropped\nORDER BY n.nspname, c.relname, a.attnum;\n",[281,2255,2256,2261,2266,2271,2276,2281,2286,2291,2296,2301,2306,2311,2316,2321,2326,2331,2336,2341],{"__ignoreMap":233},[339,2257,2258],{"class":341,"line":342},[339,2259,2260],{},"SELECT\n",[339,2262,2263],{"class":341,"line":234},[339,2264,2265],{},"    n.nspname AS schema_name,\n",[339,2267,2268],{"class":341,"line":241},[339,2269,2270],{},"    c.relname AS table_name,\n",[339,2272,2273],{"class":341,"line":358},[339,2274,2275],{},"    a.attname AS column_name,\n",[339,2277,2278],{"class":341,"line":364},[339,2279,2280],{},"    a.attnum AS ordinal_position,\n",[339,2282,2283],{"class":341,"line":370},[339,2284,2285],{},"    pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n",[339,2287,2288],{"class":341,"line":456},[339,2289,2290],{},"    a.attnotnull AS is_not_null,\n",[339,2292,2293],{"class":341,"line":464},[339,2294,2295],{},"    pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\n",[339,2297,2298],{"class":341,"line":472},[339,2299,2300],{},"FROM pg_catalog.pg_attribute a\n",[339,2302,2303],{"class":341,"line":480},[339,2304,2305],{},"JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n",[339,2307,2308],{"class":341,"line":485},[339,2309,2310],{},"JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n",[339,2312,2313],{"class":341,"line":493},[339,2314,2315],{},"LEFT JOIN pg_catalog.pg_attrdef d\n",[339,2317,2318],{"class":341,"line":501},[339,2319,2320],{},"    ON a.attrelid = d.adrelid AND a.attnum = d.adnum\n",[339,2322,2323],{"class":341,"line":512},[339,2324,2325],{},"WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n",[339,2327,2328],{"class":341,"line":523},[339,2329,2330],{},"  AND c.relkind IN ('r', 'p')\n",[339,2332,2333],{"class":341,"line":528},[339,2334,2335],{},"  AND a.attnum > 0\n",[339,2337,2338],{"class":341,"line":536},[339,2339,2340],{},"  AND NOT a.attisdropped\n",[339,2342,2343],{"class":341,"line":548},[339,2344,2345],{},"ORDER BY n.nspname, c.relname, a.attnum;\n",[12,2347,2348],{},"The important part is not just what you query, but how you order it. Once the ordering is explicit, the output becomes stable enough for reliable comparison.",[31,2350,2352],{"id":2351},"constraints-and-indexes-follow-the-same-pattern","Constraints and indexes follow the same pattern",[12,2354,2355],{},"Constraints:",[331,2357,2359],{"className":333,"code":2358,"language":335,"meta":233,"style":233},"SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    con.conname AS constraint_name,\n    con.contype AS constraint_type,\n    pg_catalog.pg_get_constraintdef(con.oid, true) AS definition\nFROM pg_catalog.pg_constraint con\nJOIN pg_catalog.pg_class c ON con.conrelid = c.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema')\nORDER BY n.nspname, c.relname, con.conname;\n",[281,2360,2361,2365,2369,2373,2378,2383,2388,2393,2398,2402,2407],{"__ignoreMap":233},[339,2362,2363],{"class":341,"line":342},[339,2364,2260],{},[339,2366,2367],{"class":341,"line":234},[339,2368,2265],{},[339,2370,2371],{"class":341,"line":241},[339,2372,2270],{},[339,2374,2375],{"class":341,"line":358},[339,2376,2377],{},"    con.conname AS constraint_name,\n",[339,2379,2380],{"class":341,"line":364},[339,2381,2382],{},"    con.contype AS constraint_type,\n",[339,2384,2385],{"class":341,"line":370},[339,2386,2387],{},"    pg_catalog.pg_get_constraintdef(con.oid, true) AS definition\n",[339,2389,2390],{"class":341,"line":456},[339,2391,2392],{},"FROM pg_catalog.pg_constraint con\n",[339,2394,2395],{"class":341,"line":464},[339,2396,2397],{},"JOIN pg_catalog.pg_class c ON con.conrelid = c.oid\n",[339,2399,2400],{"class":341,"line":472},[339,2401,2310],{},[339,2403,2404],{"class":341,"line":480},[339,2405,2406],{},"WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')\n",[339,2408,2409],{"class":341,"line":485},[339,2410,2411],{},"ORDER BY n.nspname, c.relname, con.conname;\n",[12,2413,2414],{},"Indexes:",[331,2416,2418],{"className":333,"code":2417,"language":335,"meta":233,"style":233},"SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    i.relname AS index_name,\n    pg_catalog.pg_get_indexdef(ix.indexrelid) AS index_definition\nFROM pg_catalog.pg_index ix\nJOIN pg_catalog.pg_class c ON ix.indrelid = c.oid\nJOIN pg_catalog.pg_class i ON ix.indexrelid = i.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\nORDER BY n.nspname, c.relname, i.relname;\n",[281,2419,2420,2424,2428,2432,2437,2442,2447,2452,2457,2461,2465],{"__ignoreMap":233},[339,2421,2422],{"class":341,"line":342},[339,2423,2260],{},[339,2425,2426],{"class":341,"line":234},[339,2427,2265],{},[339,2429,2430],{"class":341,"line":241},[339,2431,2270],{},[339,2433,2434],{"class":341,"line":358},[339,2435,2436],{},"    i.relname AS index_name,\n",[339,2438,2439],{"class":341,"line":364},[339,2440,2441],{},"    pg_catalog.pg_get_indexdef(ix.indexrelid) AS index_definition\n",[339,2443,2444],{"class":341,"line":370},[339,2445,2446],{},"FROM pg_catalog.pg_index ix\n",[339,2448,2449],{"class":341,"line":456},[339,2450,2451],{},"JOIN pg_catalog.pg_class c ON ix.indrelid = c.oid\n",[339,2453,2454],{"class":341,"line":464},[339,2455,2456],{},"JOIN pg_catalog.pg_class i ON ix.indexrelid = i.oid\n",[339,2458,2459],{"class":341,"line":472},[339,2460,2310],{},[339,2462,2463],{"class":341,"line":480},[339,2464,2325],{},[339,2466,2467],{"class":341,"line":485},[339,2468,2469],{},"ORDER BY n.nspname, c.relname, i.relname;\n",[31,2471,2473],{"id":2472},"a-fingerprint-is-even-better-than-a-raw-diff","A fingerprint is even better than a raw diff",[12,2475,2476],{},"Once your snapshot output is deterministic, you can go one step further and compute a schema fingerprint.",[12,2478,2479],{},"The idea is straightforward:",[2481,2482,2483,2486,2489],"ol",{},[67,2484,2485],{},"Capture the ordered metadata",[67,2487,2488],{},"Convert it into a canonical string",[67,2490,2491],{},"Hash it with SHA-256",[12,2493,2494],{},"If the fingerprint is unchanged, the schema is unchanged. If it differs, you know something moved and you can run a deeper diff.",[12,2496,2497],{},"That approach is much more efficient for continuous monitoring. Most of the time you only need to compare a short hash instead of full schema text.",[12,2499,2500],{},"You can compute the fingerprint directly in SQL:",[331,2502,2504],{"className":333,"code":2503,"language":335,"meta":233,"style":233},"SELECT encode(\n    sha256(\n        string_agg(\n            row_to_text,\n            E'\\n' ORDER BY row_to_text\n        )::bytea\n    ),\n    'hex'\n) AS schema_fingerprint\nFROM (\n    SELECT format('%s.%s.%s.%s.%s.%s',\n        n.nspname, c.relname, a.attname, a.attnum,\n        pg_catalog.format_type(a.atttypid, a.atttypmod),\n        a.attnotnull\n    ) AS row_to_text\n    FROM pg_catalog.pg_attribute a\n    JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n    JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n      AND c.relkind IN ('r', 'p')\n      AND a.attnum > 0\n      AND NOT a.attisdropped\n) sub;\n",[281,2505,2506,2511,2516,2521,2526,2531,2536,2541,2546,2551,2556,2561,2566,2571,2576,2581,2586,2591,2596,2601,2606,2611,2616],{"__ignoreMap":233},[339,2507,2508],{"class":341,"line":342},[339,2509,2510],{},"SELECT encode(\n",[339,2512,2513],{"class":341,"line":234},[339,2514,2515],{},"    sha256(\n",[339,2517,2518],{"class":341,"line":241},[339,2519,2520],{},"        string_agg(\n",[339,2522,2523],{"class":341,"line":358},[339,2524,2525],{},"            row_to_text,\n",[339,2527,2528],{"class":341,"line":364},[339,2529,2530],{},"            E'\\n' ORDER BY row_to_text\n",[339,2532,2533],{"class":341,"line":370},[339,2534,2535],{},"        )::bytea\n",[339,2537,2538],{"class":341,"line":456},[339,2539,2540],{},"    ),\n",[339,2542,2543],{"class":341,"line":464},[339,2544,2545],{},"    'hex'\n",[339,2547,2548],{"class":341,"line":472},[339,2549,2550],{},") AS schema_fingerprint\n",[339,2552,2553],{"class":341,"line":480},[339,2554,2555],{},"FROM (\n",[339,2557,2558],{"class":341,"line":485},[339,2559,2560],{},"    SELECT format('%s.%s.%s.%s.%s.%s',\n",[339,2562,2563],{"class":341,"line":493},[339,2564,2565],{},"        n.nspname, c.relname, a.attname, a.attnum,\n",[339,2567,2568],{"class":341,"line":501},[339,2569,2570],{},"        pg_catalog.format_type(a.atttypid, a.atttypmod),\n",[339,2572,2573],{"class":341,"line":512},[339,2574,2575],{},"        a.attnotnull\n",[339,2577,2578],{"class":341,"line":523},[339,2579,2580],{},"    ) AS row_to_text\n",[339,2582,2583],{"class":341,"line":528},[339,2584,2585],{},"    FROM pg_catalog.pg_attribute a\n",[339,2587,2588],{"class":341,"line":536},[339,2589,2590],{},"    JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n",[339,2592,2593],{"class":341,"line":548},[339,2594,2595],{},"    JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n",[339,2597,2598],{"class":341,"line":559},[339,2599,2600],{},"    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n",[339,2602,2603],{"class":341,"line":564},[339,2604,2605],{},"      AND c.relkind IN ('r', 'p')\n",[339,2607,2608],{"class":341,"line":576},[339,2609,2610],{},"      AND a.attnum > 0\n",[339,2612,2613],{"class":341,"line":586},[339,2614,2615],{},"      AND NOT a.attisdropped\n",[339,2617,2618],{"class":341,"line":594},[339,2619,2620],{},") sub;\n",[31,2622,2624],{"id":2623},"performance-is-another-advantage","Performance is another advantage",[12,2626,2627],{},"This approach is not just cleaner. It is often faster.",[12,2629,2630],{},"On a database with 350 tables, 1,200 columns, 400 constraints, and 500 indexes:",[1425,2632,2633,2649],{},[1428,2634,2635],{},[1431,2636,2637,2640,2643,2646],{},[1434,2638,2639],{},"Approach",[1434,2641,2642],{},"Time",[1434,2644,2645],{},"Output size",[1434,2647,2648],{},"Deterministic",[1444,2650,2651,2667,2683],{},[1431,2652,2653,2658,2661,2664],{},[1449,2654,2655],{},[281,2656,2657],{},"pg_dump --schema-only",[1449,2659,2660],{},"1.8s",[1449,2662,2663],{},"245 KB",[1449,2665,2666],{},"No",[1431,2668,2669,2674,2677,2680],{},[1449,2670,2671,2673],{},[281,2672,328],{}," queries",[1449,2675,2676],{},"0.3s",[1449,2678,2679],{},"82 KB",[1449,2681,2682],{},"Yes",[1431,2684,2685,2690,2693,2696],{},[1449,2686,2687,2689],{},[281,2688,328],{}," + SHA-256",[1449,2691,2692],{},"0.4s",[1449,2694,2695],{},"64 bytes",[1449,2697,2682],{},[12,2699,2700,2702],{},[281,2701,1873],{}," has to resolve dependencies, order DDL for restore, and format everything as valid SQL. Catalog queries skip that overhead and pull only the metadata you actually need.",[12,2704,2705],{},"The fingerprint comparison is the key insight for continuous monitoring at scale. You are comparing a 64-character string, not 245KB of schema text. If it matches, you are done in milliseconds. If it differs, you run the full queries to find what changed.",[31,2707,2709],{"id":2708},"a-practical-java-approach","A practical Java approach",[12,2711,2712,2713,2716,2717,2719],{},"At Arcnull, this pattern is implemented in Java through a ",[281,2714,2715],{},"CatalogService"," that queries ",[281,2718,328],{},", builds canonical snapshots, and computes a fingerprint:",[331,2721,2725],{"className":2722,"code":2723,"language":2724,"meta":233,"style":233},"language-java shiki shiki-themes github-light github-dark","@Service\npublic class CatalogService {\n\n    private final JdbcTemplate jdbc;\n\n    public List\u003CColumnSnapshot> captureColumns(String schema) {\n        return jdbc.query(\"\"\"\n            SELECT\n                n.nspname AS schema_name,\n                c.relname AS table_name,\n                a.attname AS column_name,\n                a.attnum AS ordinal_position,\n                pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n                a.attnotnull AS is_not_null,\n                pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\n            FROM pg_catalog.pg_attribute a\n            JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n            JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n            LEFT JOIN pg_catalog.pg_attrdef d\n                ON a.attrelid = d.adrelid AND a.attnum = d.adnum\n            WHERE n.nspname = ?\n              AND c.relkind IN ('r','p')\n              AND a.attnum > 0\n              AND NOT a.attisdropped\n            ORDER BY n.nspname, c.relname, a.attnum\n            \"\"\",\n            (rs, rowNum) -> new ColumnSnapshot(\n                rs.getString(\"schema_name\"),\n                rs.getString(\"table_name\"),\n                rs.getString(\"column_name\"),\n                rs.getInt(\"ordinal_position\"),\n                rs.getString(\"data_type\"),\n                rs.getBoolean(\"is_not_null\"),\n                rs.getString(\"column_default\")\n            ),\n            schema\n        );\n    }\n\n    public String computeFingerprint(String schema) {\n        String combined = Stream.of(\n                captureColumns(schema).stream()\n                    .map(ColumnSnapshot::toCanonicalString),\n                captureConstraints(schema).stream()\n                    .map(ConstraintSnapshot::toCanonicalString),\n                captureIndexes(schema).stream()\n                    .map(IndexSnapshot::toCanonicalString)\n            )\n            .flatMap(Function.identity())\n            .collect(Collectors.joining(\"\\n\"));\n\n        return Hashing.sha256()\n            .hashString(combined, StandardCharsets.UTF_8)\n            .toString();\n    }\n}\n","java",[281,2726,2727,2732,2737,2741,2746,2750,2755,2760,2765,2770,2775,2780,2785,2790,2795,2800,2805,2810,2815,2820,2825,2830,2835,2840,2845,2850,2855,2861,2867,2873,2879,2885,2891,2897,2903,2909,2915,2921,2927,2932,2938,2944,2950,2956,2962,2968,2974,2980,2986,2992,2998,3003,3009,3015,3021,3026],{"__ignoreMap":233},[339,2728,2729],{"class":341,"line":342},[339,2730,2731],{},"@Service\n",[339,2733,2734],{"class":341,"line":234},[339,2735,2736],{},"public class CatalogService {\n",[339,2738,2739],{"class":341,"line":241},[339,2740,422],{"emptyLinePlaceholder":260},[339,2742,2743],{"class":341,"line":358},[339,2744,2745],{},"    private final JdbcTemplate jdbc;\n",[339,2747,2748],{"class":341,"line":364},[339,2749,422],{"emptyLinePlaceholder":260},[339,2751,2752],{"class":341,"line":370},[339,2753,2754],{},"    public List\u003CColumnSnapshot> captureColumns(String schema) {\n",[339,2756,2757],{"class":341,"line":456},[339,2758,2759],{},"        return jdbc.query(\"\"\"\n",[339,2761,2762],{"class":341,"line":464},[339,2763,2764],{},"            SELECT\n",[339,2766,2767],{"class":341,"line":472},[339,2768,2769],{},"                n.nspname AS schema_name,\n",[339,2771,2772],{"class":341,"line":480},[339,2773,2774],{},"                c.relname AS table_name,\n",[339,2776,2777],{"class":341,"line":485},[339,2778,2779],{},"                a.attname AS column_name,\n",[339,2781,2782],{"class":341,"line":493},[339,2783,2784],{},"                a.attnum AS ordinal_position,\n",[339,2786,2787],{"class":341,"line":501},[339,2788,2789],{},"                pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n",[339,2791,2792],{"class":341,"line":512},[339,2793,2794],{},"                a.attnotnull AS is_not_null,\n",[339,2796,2797],{"class":341,"line":523},[339,2798,2799],{},"                pg_catalog.pg_get_expr(d.adbin, d.adrelid) AS column_default\n",[339,2801,2802],{"class":341,"line":528},[339,2803,2804],{},"            FROM pg_catalog.pg_attribute a\n",[339,2806,2807],{"class":341,"line":536},[339,2808,2809],{},"            JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n",[339,2811,2812],{"class":341,"line":548},[339,2813,2814],{},"            JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n",[339,2816,2817],{"class":341,"line":559},[339,2818,2819],{},"            LEFT JOIN pg_catalog.pg_attrdef d\n",[339,2821,2822],{"class":341,"line":564},[339,2823,2824],{},"                ON a.attrelid = d.adrelid AND a.attnum = d.adnum\n",[339,2826,2827],{"class":341,"line":576},[339,2828,2829],{},"            WHERE n.nspname = ?\n",[339,2831,2832],{"class":341,"line":586},[339,2833,2834],{},"              AND c.relkind IN ('r','p')\n",[339,2836,2837],{"class":341,"line":594},[339,2838,2839],{},"              AND a.attnum > 0\n",[339,2841,2842],{"class":341,"line":605},[339,2843,2844],{},"              AND NOT a.attisdropped\n",[339,2846,2847],{"class":341,"line":616},[339,2848,2849],{},"            ORDER BY n.nspname, c.relname, a.attnum\n",[339,2851,2852],{"class":341,"line":627},[339,2853,2854],{},"            \"\"\",\n",[339,2856,2858],{"class":341,"line":2857},27,[339,2859,2860],{},"            (rs, rowNum) -> new ColumnSnapshot(\n",[339,2862,2864],{"class":341,"line":2863},28,[339,2865,2866],{},"                rs.getString(\"schema_name\"),\n",[339,2868,2870],{"class":341,"line":2869},29,[339,2871,2872],{},"                rs.getString(\"table_name\"),\n",[339,2874,2876],{"class":341,"line":2875},30,[339,2877,2878],{},"                rs.getString(\"column_name\"),\n",[339,2880,2882],{"class":341,"line":2881},31,[339,2883,2884],{},"                rs.getInt(\"ordinal_position\"),\n",[339,2886,2888],{"class":341,"line":2887},32,[339,2889,2890],{},"                rs.getString(\"data_type\"),\n",[339,2892,2894],{"class":341,"line":2893},33,[339,2895,2896],{},"                rs.getBoolean(\"is_not_null\"),\n",[339,2898,2900],{"class":341,"line":2899},34,[339,2901,2902],{},"                rs.getString(\"column_default\")\n",[339,2904,2906],{"class":341,"line":2905},35,[339,2907,2908],{},"            ),\n",[339,2910,2912],{"class":341,"line":2911},36,[339,2913,2914],{},"            schema\n",[339,2916,2918],{"class":341,"line":2917},37,[339,2919,2920],{},"        );\n",[339,2922,2924],{"class":341,"line":2923},38,[339,2925,2926],{},"    }\n",[339,2928,2930],{"class":341,"line":2929},39,[339,2931,422],{"emptyLinePlaceholder":260},[339,2933,2935],{"class":341,"line":2934},40,[339,2936,2937],{},"    public String computeFingerprint(String schema) {\n",[339,2939,2941],{"class":341,"line":2940},41,[339,2942,2943],{},"        String combined = Stream.of(\n",[339,2945,2947],{"class":341,"line":2946},42,[339,2948,2949],{},"                captureColumns(schema).stream()\n",[339,2951,2953],{"class":341,"line":2952},43,[339,2954,2955],{},"                    .map(ColumnSnapshot::toCanonicalString),\n",[339,2957,2959],{"class":341,"line":2958},44,[339,2960,2961],{},"                captureConstraints(schema).stream()\n",[339,2963,2965],{"class":341,"line":2964},45,[339,2966,2967],{},"                    .map(ConstraintSnapshot::toCanonicalString),\n",[339,2969,2971],{"class":341,"line":2970},46,[339,2972,2973],{},"                captureIndexes(schema).stream()\n",[339,2975,2977],{"class":341,"line":2976},47,[339,2978,2979],{},"                    .map(IndexSnapshot::toCanonicalString)\n",[339,2981,2983],{"class":341,"line":2982},48,[339,2984,2985],{},"            )\n",[339,2987,2989],{"class":341,"line":2988},49,[339,2990,2991],{},"            .flatMap(Function.identity())\n",[339,2993,2995],{"class":341,"line":2994},50,[339,2996,2997],{},"            .collect(Collectors.joining(\"\\n\"));\n",[339,2999,3001],{"class":341,"line":3000},51,[339,3002,422],{"emptyLinePlaceholder":260},[339,3004,3006],{"class":341,"line":3005},52,[339,3007,3008],{},"        return Hashing.sha256()\n",[339,3010,3012],{"class":341,"line":3011},53,[339,3013,3014],{},"            .hashString(combined, StandardCharsets.UTF_8)\n",[339,3016,3018],{"class":341,"line":3017},54,[339,3019,3020],{},"            .toString();\n",[339,3022,3024],{"class":341,"line":3023},55,[339,3025,2926],{},[339,3027,3029],{"class":341,"line":3028},56,[339,3030,3031],{},"}\n",[12,3033,3034],{},"The design choice that matters: each snapshot object produces a stable string representation. Once you have that, the fingerprint is simple. Capture ordered metadata, serialize consistently, hash it, compare against the previous scan, and only run a full semantic diff when the fingerprint changes.",[31,3036,3038],{"id":3037},"when-to-use-each-tool","When to use each tool",[12,3040,3041,3042,3044],{},"The real takeaway is not that ",[281,3043,1873],{}," is bad. It solves a different problem.",[12,3046,3047,3048,3050],{},"Use ",[281,3049,1873],{}," when you need to:",[64,3052,3053,3056,3059,3062],{},[67,3054,3055],{},"Create backups for disaster recovery",[67,3057,3058],{},"Move schemas between PostgreSQL versions",[67,3060,3061],{},"Generate SQL for manual inspection",[67,3063,3064],{},"Clone or restore databases",[12,3066,3047,3067,3069],{},[281,3068,328],{}," queries when you need to:",[64,3071,3072,3075,3078,3081],{},[67,3073,3074],{},"Detect schema drift between environments",[67,3076,3077],{},"Compute deterministic fingerprints",[67,3079,3080],{},"Build automated schema comparison pipelines",[67,3082,3083],{},"Monitor production schemas for untracked changes",[12,3085,3086,3087,3089],{},"These tools are complementary. Problems start when ",[281,3088,1873],{}," is used for something it was never meant to do.",[31,3091,3093],{"id":3092},"final-takeaway","Final takeaway",[12,3095,3096,3097,3099],{},"The issue with ",[281,3098,1873],{}," is not something you can fully fix with better diff tooling. The noise comes from the nature of the output itself.",[12,3101,3102,3103,3105],{},"If you want reliable schema snapshots, start closer to the source. Query ",[281,3104,328],{},", order the results explicitly, and compare deterministic output instead of restore-oriented SQL.",[12,3107,3108],{},"That gives you cleaner diffs, fewer false alarms, and a much more dependable foundation for drift detection.",[12,3110,3111,3112,3114],{},"If you are building this yourself, start there. If you want it production-ready with scanning, diffing, CI integration, and alerts, that is what ",[227,3113,1182],{"href":1181}," is built to handle.",[12,3116,3117],{},"Your future self debugging a 3 AM deployment failure will thank you.",[1185,3119,3120],{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .sScJk, html code.shiki .sScJk{--shiki-default:#6F42C1;--shiki-dark:#B392F0}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .szBVR, html code.shiki .szBVR{--shiki-default:#D73A49;--shiki-dark:#F97583}",{"title":233,"searchDepth":234,"depth":234,"links":3122},[3123,3125,3126,3128,3129,3130,3131,3132,3133,3134,3135],{"id":1900,"depth":234,"text":3124},"Why pg_dump creates noisy diffs",{"id":2035,"depth":234,"text":2036},{"id":2109,"depth":234,"text":3127},"Why pg_catalog works better",{"id":2135,"depth":234,"text":2136},{"id":2246,"depth":234,"text":2247},{"id":2351,"depth":234,"text":2352},{"id":2472,"depth":234,"text":2473},{"id":2623,"depth":234,"text":2624},{"id":2708,"depth":234,"text":2709},{"id":3037,"depth":234,"text":3038},{"id":3092,"depth":234,"text":3093},"Why snapshot ordering matters for schema drift detection, and why querying PostgreSQL metadata directly is often the more reliable approach.",[328,1873,3138,3139],"postgresql schema snapshot","deterministic schema diff",{},"\u002Fblog\u002Fpg-catalog-vs-pg-dump-schema-snapshots","2026-04-21","6",{"title":1865,"description":3136},"pg-catalog-vs-pg-dump-schema-snapshots","blog\u002Fpg-catalog-vs-pg-dump-schema-snapshots","cHqUrY_M7AqfHT6TlFRC7V7wXKMebXxXZhviPxKd3bY",{"id":3149,"title":3150,"author":7,"body":3151,"description":3568,"extension":251,"keywords":3569,"meta":3574,"navigation":260,"path":3575,"publishedAt":3576,"readTime":3577,"seo":3578,"slug":3579,"stem":3580,"updatedAt":267,"__hash__":3581},"blog\u002Fblog\u002Fwhat-is-postgresql-schema-drift.md","What PostgreSQL schema drift looks like in production",{"type":9,"value":3152,"toc":3544},[3153,3156,3163,3166,3169,3172,3176,3179,3182,3185,3188,3192,3195,3199,3202,3206,3213,3217,3220,3224,3227,3231,3234,3238,3241,3245,3251,3255,3258,3262,3265,3269,3272,3276,3279,3282,3293,3296,3299,3313,3316,3322,3325,3328,3331,3336,3339,3350,3356,3360,3366,3369,3372,3439,3442,3445,3449,3452,3455,3469,3472,3476,3479,3482,3485,3488,3492,3495,3498,3514,3517,3521,3524,3527,3533,3536,3542],[12,3154,3155],{},"Schema drift usually does not announce itself clearly. It hides for weeks or months, then shows up at the worst possible time: during a deployment, in the middle of an incident, or right when a critical migration runs.",[12,3157,3158,3159,3162],{},"Your migration files say the ",[281,3160,3161],{},"users"," table has twelve columns. Production has fourteen. Nobody is fully sure when the extra two were added, who added them, or whether any service depends on them. The next release assumes the migration history is correct, runs against production, and fails.",[12,3164,3165],{},"Now the deployment is stuck, the application is unstable, and the team is scrambling to understand what changed.",[12,3167,3168],{},"That is schema drift.",[12,3170,3171],{},"It is one of the most common causes of PostgreSQL deployment failures, but it does not get talked about nearly as much as query performance, replication issues, or connection pooling.",[31,3173,3175],{"id":3174},"what-schema-drift-actually-is","What schema drift actually is",[12,3177,3178],{},"Schema drift happens when the database schema you think you have is no longer the schema you actually have.",[12,3180,3181],{},"The expected schema usually lives in migration files inside version control. The real schema lives in PostgreSQL itself. When those two stop matching, drift has already started.",[12,3183,3184],{},"A simple way to think about it: your migration files are the blueprint, and production is the building. If someone quietly adds a room, moves a wall, or changes the wiring without updating the blueprint, the next contractor is going to run into problems. The same thing happens with databases.",[12,3186,3187],{},"At first, nothing may look broken. The app still runs. Queries still work. But over time, that gap between expectation and reality grows. Eventually, a migration, rollback, or deployment trips over it.",[31,3189,3191],{"id":3190},"what-schema-drift-looks-like-in-practice","What schema drift looks like in practice",[12,3193,3194],{},"Schema drift is not just one thing. It shows up in a few common ways.",[90,3196,3198],{"id":3197},"column-drift","Column drift",[12,3200,3201],{},"A column gets added, removed, renamed, or changed directly in production, but no migration file is ever checked in. It often happens during incident response, when someone makes a quick fix under pressure.",[90,3203,3205],{"id":3204},"constraint-drift","Constraint drift",[12,3207,3208,3209,3212],{},"Constraints are added, removed, or changed outside the normal migration path. This is especially risky because it can affect data quality without being immediately visible. A dropped ",[281,3210,3211],{},"NOT NULL"," or foreign key constraint may not trigger an outage right away, but it can quietly let bad data into the system.",[90,3214,3216],{"id":3215},"index-drift","Index drift",[12,3218,3219],{},"Someone creates an index in production to fix a slow query, but the change never makes it back into source control. Everything seems fine until a future migration rebuilds the table and the index disappears.",[90,3221,3223],{"id":3222},"extension-and-function-drift","Extension and function drift",[12,3225,3226],{},"Extensions, stored procedures, and custom functions are sometimes changed directly in production. These changes are easy to miss and can make environments behave differently in subtle ways.",[90,3228,3230],{"id":3229},"permission-drift","Permission drift",[12,3232,3233],{},"Roles, grants, and access controls can drift too. That is not just an operational issue — it can also become a security and compliance problem.",[31,3235,3237],{"id":3236},"how-drift-usually-starts","How drift usually starts",[12,3239,3240],{},"Schema drift does not usually happen because a team is careless. It happens because production systems are busy, real environments, and sometimes the migration pipeline is not the only place where changes happen.",[90,3242,3244],{"id":3243},"the-late-night-hotfix","The late-night hotfix",[12,3246,3247,3248,3250],{},"A production issue needs an immediate fix. An engineer logs in, runs an ",[281,3249,283],{},", solves the problem, and plans to backfill the migration later. Later never quite comes.",[90,3252,3254],{"id":3253},"environment-mismatch","Environment mismatch",[12,3256,3257],{},"Staging and production look similar, but not identical. A migration passes in staging because staging is missing a manual change that production already has. The exact same migration then fails in production.",[90,3259,3261],{"id":3260},"automatic-schema-changes-from-tooling","Automatic schema changes from tooling",[12,3263,3264],{},"Some frameworks and tools make schema changes easier, but that convenience can create problems if multiple services or teams are involved. When several actors touch the same database, migration history alone may stop telling the full story.",[90,3266,3268],{"id":3267},"dba-or-ops-side-changes","DBA or ops-side changes",[12,3270,3271],{},"DBAs and platform teams sometimes make direct schema changes for perfectly valid reasons: performance tuning, partitioning, operational fixes, or emergency adjustments. If those changes are not reflected back into the main migration history, drift starts building.",[31,3273,3275],{"id":3274},"why-this-becomes-a-production-problem","Why this becomes a production problem",[12,3277,3278],{},"Schema drift is dangerous because it stays invisible until something depends on the schema being exactly what the migration history says it is.",[12,3280,3281],{},"That usually happens during one of three moments:",[64,3283,3284,3287,3290],{},[67,3285,3286],{},"A deployment",[67,3288,3289],{},"A rollback",[67,3291,3292],{},"A production incident",[12,3294,3295],{},"That is why it feels so disruptive. Drift often accumulates quietly, then surfaces when the team is already under pressure.",[12,3297,3298],{},"Instead of a straightforward release, you get a confusing failure:",[64,3300,3301,3304,3307,3310],{},[67,3302,3303],{},"A migration tries to create something that already exists",[67,3305,3306],{},"A column is missing in one environment but not another",[67,3308,3309],{},"A constraint behaves differently than expected",[67,3311,3312],{},"A rollback script assumes a state the database no longer has",[12,3314,3315],{},"The real problem is not just that the schema changed. It is that the team lost a reliable picture of reality.",[31,3317,1901,3319,3321],{"id":3318},"why-pg_dump-is-not-a-great-way-to-detect-drift",[281,3320,1873],{}," is not a great way to detect drift",[12,3323,3324],{},"A lot of teams start with what seems like the obvious approach: dump the schema from two environments, diff the files, and look for differences.",[12,3326,3327],{},"On paper, that sounds reasonable.",[12,3329,3330],{},"In practice, it creates noise.",[12,3332,3333,3335],{},[281,3334,1873],{}," is built for backup and restore. Its job is to generate SQL that can recreate the schema, not to produce a stable representation for comparison. That means ordering can vary even when nothing meaningful has changed.",[12,3337,3338],{},"When you diff two dumps, you may end up seeing changes that are not really changes at all:",[64,3340,3341,3344,3347],{},[67,3342,3343],{},"Constraints appearing in a different order",[67,3345,3346],{},"Index definitions grouped differently",[67,3348,3349],{},"Statements reordered in a way that makes the diff look louder than reality",[12,3351,3352,3353,385],{},"That makes it harder to spot the drift that actually matters. For a deeper look at why, see the companion article on ",[227,3354,3355],{"href":3141},"pg_catalog vs pg_dump",[31,3357,1901,3358,2112],{"id":2109},[281,3359,328],{},[12,3361,3362,3363,3365],{},"PostgreSQL already stores its schema metadata in ",[281,3364,328],{},". That gives you a much better foundation for drift detection because you can query the metadata directly and apply your own deterministic ordering.",[12,3367,3368],{},"If you query columns and explicitly sort by schema, table, and column position, the output stays stable for the same schema. That makes it much more reliable for comparison.",[12,3370,3371],{},"A simplified example:",[331,3373,3375],{"className":333,"code":3374,"language":335,"meta":233,"style":233},"SELECT\n    n.nspname AS schema_name,\n    c.relname AS table_name,\n    a.attname AS column_name,\n    pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,\n    a.attnotnull AS is_not_null,\n    a.attnum AS ordinal_position\nFROM pg_catalog.pg_attribute a\nJOIN pg_catalog.pg_class c ON a.attrelid = c.oid\nJOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\nWHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')\n  AND c.relkind = 'r'\n  AND a.attnum > 0\n  AND NOT a.attisdropped\nORDER BY n.nspname, c.relname, a.attnum;\n",[281,3376,3377,3381,3385,3389,3393,3397,3401,3406,3410,3414,3418,3422,3427,3431,3435],{"__ignoreMap":233},[339,3378,3379],{"class":341,"line":342},[339,3380,2260],{},[339,3382,3383],{"class":341,"line":234},[339,3384,2265],{},[339,3386,3387],{"class":341,"line":241},[339,3388,2270],{},[339,3390,3391],{"class":341,"line":358},[339,3392,2275],{},[339,3394,3395],{"class":341,"line":364},[339,3396,2285],{},[339,3398,3399],{"class":341,"line":370},[339,3400,2290],{},[339,3402,3403],{"class":341,"line":456},[339,3404,3405],{},"    a.attnum AS ordinal_position\n",[339,3407,3408],{"class":341,"line":464},[339,3409,2300],{},[339,3411,3412],{"class":341,"line":472},[339,3413,2305],{},[339,3415,3416],{"class":341,"line":480},[339,3417,2310],{},[339,3419,3420],{"class":341,"line":485},[339,3421,2325],{},[339,3423,3424],{"class":341,"line":493},[339,3425,3426],{},"  AND c.relkind = 'r'\n",[339,3428,3429],{"class":341,"line":501},[339,3430,2335],{},[339,3432,3433],{"class":341,"line":512},[339,3434,2340],{},[339,3436,3437],{"class":341,"line":523},[339,3438,2345],{},[12,3440,3441],{},"The same idea applies to constraints, indexes, triggers, functions, and extensions. Once you capture them in a consistent order, you can build a stable snapshot of the schema.",[12,3443,3444],{},"From there, you can hash the result and turn the entire schema into a single fingerprint. If the fingerprint changes, something changed. If it does not, the schema is the same. That removes a lot of false positives.",[31,3446,3448],{"id":3447},"what-good-drift-detection-should-do","What good drift detection should do",[12,3450,3451],{},"A useful drift detection system should do more than say \"something is different.\"",[12,3453,3454],{},"It should tell you:",[64,3456,3457,3460,3463,3466],{},[67,3458,3459],{},"What changed",[67,3461,3462],{},"Where it changed",[67,3464,3465],{},"Whether the change is likely to break a deployment",[67,3467,3468],{},"How serious it is",[12,3470,3471],{},"That is the real difference between noisy comparison and actionable detection.",[31,3473,3475],{"id":3474},"why-catching-drift-early-matters","Why catching drift early matters",[12,3477,3478],{},"The direct cost of schema drift is downtime, failed releases, and engineering time spent debugging under pressure.",[12,3480,3481],{},"But there is a bigger cost: trust.",[12,3483,3484],{},"When teams stop trusting that staging matches production, every deployment feels riskier. Every migration needs more caution. Every rollback becomes more stressful. The delivery pipeline slows down because the schema is no longer something people feel confident about.",[12,3486,3487],{},"That affects far more than the database team. It affects release speed across the whole company.",[31,3489,3491],{"id":3490},"a-simple-way-to-get-started","A simple way to get started",[12,3493,3494],{},"If you are not checking for drift today, you do not need a big platform project to begin.",[12,3496,3497],{},"Start by:",[2481,3499,3500,3505,3508,3511],{},[67,3501,3502,3503],{},"Capturing a baseline snapshot from production using ",[281,3504,328],{},[67,3506,3507],{},"Storing that output in a canonical form",[67,3509,3510],{},"Running the same snapshot regularly",[67,3512,3513],{},"Comparing the results for changes",[12,3515,3516],{},"Even that basic approach is far better than waiting for the next failed deployment to reveal the drift for you.",[31,3518,3520],{"id":3519},"final-thought","Final thought",[12,3522,3523],{},"Schema drift is not rare. It is just easy to ignore until it becomes expensive.",[12,3525,3526],{},"Every team running PostgreSQL in production will eventually deal with it. The real choice is whether you discover it during a calm review cycle or during a stressful outage.",[12,3528,3529,3530,3532],{},"If you build detection on deterministic ",[281,3531,328],{}," snapshots instead of noisy schema dumps, you can catch the real changes early and avoid a lot of painful surprises.",[12,3534,3535],{},"And in production, that kind of visibility is not just nice to have. It is part of staying deployable.",[12,3537,3538,3539,3541],{},"If you want this running automatically with CI gates, Slack alerts, and historical drift tracking, ",[227,3540,1182],{"href":1181}," is built around exactly this approach.",[1185,3543,1832],{},{"title":233,"searchDepth":234,"depth":234,"links":3545},[3546,3547,3554,3560,3561,3563,3564,3565,3566,3567],{"id":3174,"depth":234,"text":3175},{"id":3190,"depth":234,"text":3191,"children":3548},[3549,3550,3551,3552,3553],{"id":3197,"depth":241,"text":3198},{"id":3204,"depth":241,"text":3205},{"id":3215,"depth":241,"text":3216},{"id":3222,"depth":241,"text":3223},{"id":3229,"depth":241,"text":3230},{"id":3236,"depth":234,"text":3237,"children":3555},[3556,3557,3558,3559],{"id":3243,"depth":241,"text":3244},{"id":3253,"depth":241,"text":3254},{"id":3260,"depth":241,"text":3261},{"id":3267,"depth":241,"text":3268},{"id":3274,"depth":234,"text":3275},{"id":3318,"depth":234,"text":3562},"Why pg_dump is not a great way to detect drift",{"id":2109,"depth":234,"text":3127},{"id":3447,"depth":234,"text":3448},{"id":3474,"depth":234,"text":3475},{"id":3490,"depth":234,"text":3491},{"id":3519,"depth":234,"text":3520},"What schema drift actually is, how it happens, and why it tends to surface at the worst possible time.",[3570,3571,3572,3573],"postgresql schema drift","database schema monitoring","production outage prevention","postgresql migration",{},"\u002Fblog\u002Fwhat-is-postgresql-schema-drift","2026-04-14","8",{"title":3150,"description":3568},"what-is-postgresql-schema-drift","blog\u002Fwhat-is-postgresql-schema-drift","GHhOfTrbFIkN8eDWk1yURvj7tzY_Xb8nW9rgeUpuFJI"]