# Alerts

Configure Slack, webhook, and email alerts for drift events. This page covers the exact webhook payload shape, the Slack message format, and resilience behaviour.
Drift Scanner can push alerts to three channels per environment. All three are optional — you can enable any combination, and each event is delivered once per enabled channel.
## Channels
### Slack
Paste an incoming webhook URL into the environment's alert configuration. Messages are posted as plain markdown with a coloured attachment bar matching severity:
- `BREAKING` → red
- `WARNING` → orange
- `INFO` → green
Example payload (the message body we render into the Slack webhook):
```
*Schema Drift Detected*
Environment: `production`
Severity: *BREAKING*
Changes: 1 breaking, 0 warning, 2 info
*Top Changes:*
* [BREAKING] COLUMN_DROPPED — users.email column dropped
* [INFO] INDEX_ADDED — orders.created_at_idx added
* [INFO] INDEX_ADDED — sessions.user_id_idx added
```
Only the first five changes are included in the message body to keep Slack digestible — the full diff is always available via the API or dashboard.
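The coloured bar itself comes from Slack's attachments format. A plausible shape for the JSON we POST to the incoming webhook, shown for orientation only (the exact field layout may differ):

```json
{
  "attachments": [
    {
      "color": "#d32f2f",
      "text": "*Schema Drift Detected*\nEnvironment: `production`\nSeverity: *BREAKING*\n..."
    }
  ]
}
```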
### Webhook
A generic JSON webhook we POST to on every drift event. The payload matches the shape returned by `GET /api/v1/drift/events/{id}`:

```json
{
  "id": "f6c3a3f0-7c5e-4a8c-b4b2-9f3b1f0c4a55",
  "envId": "18a2d7be-4f52-4a86-92b0-52df7d39c7a1",
  "baselineId": "0ce3b9a2-3a8b-4f40-a6a2-4a1b92b8d2c6",
  "currentId": "b7d1a3c4-2a3e-4b7e-9a4f-1c2e9b0d3a77",
  "severity": "BREAKING",
  "breakingCount": 1,
  "warningCount": 0,
  "infoCount": 2,
  "environmentName": "production",
  "acknowledged": false,
  "detectedAt": "2026-04-20T15:10:22.187Z",
  "items": [
    {
      "severity": "BREAKING",
      "table": "users",
      "column": "email",
      "changeType": "COLUMN_DROPPED",
      "description": "users.email column dropped",
      "recommendation": "Restore the column or update callers before rolling forward.",
      "estimatedImpact": "High — authentication flows depend on this column."
    }
  ]
}
```
The request is a standard POST with `Content-Type: application/json`. An HTTP 2xx from your endpoint is treated as success; anything else triggers the retry policy below.
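If you are standing up a consumer, a minimal receiving endpoint only needs to parse this shape and answer with a 2xx. Here is a sketch in Spring Boot, assuming a hypothetical `/hooks/drift` route; the record fields mirror the payload above, and nothing in this snippet is part of Drift Scanner itself:

```java
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DriftAlertController {

    private static final Logger log = LoggerFactory.getLogger(DriftAlertController.class);

    // Field names mirror the documented JSON payload.
    record DriftItem(String severity, String table, String column, String changeType,
                     String description, String recommendation, String estimatedImpact) {}

    record DriftEvent(String id, String envId, String baselineId, String currentId,
                      String severity, int breakingCount, int warningCount, int infoCount,
                      String environmentName, boolean acknowledged, String detectedAt,
                      List<DriftItem> items) {}

    @PostMapping("/hooks/drift")
    public ResponseEntity<Void> onDrift(@RequestBody DriftEvent event) {
        // Respond quickly with any 2xx; do slow processing asynchronously so a
        // stalled consumer doesn't eat into the sender's retry budget.
        log.info("drift event {} severity={} breaking={}",
                event.id(), event.severity(), event.breakingCount());
        return ResponseEntity.accepted().build(); // 202 counts as success
    }
}
```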
### Email

Email alerts are delivered via Resend to the address on file for your tenant. There is no per-environment override today; the recipient lives on your tenant record. On the roadmap: per-environment recipients and digest mode.
## Delivery guarantees
Each sender is isolated as its own Spring bean with Resilience4j wrapped around it, so a flaky Slack webhook can't degrade email or the generic webhook.
| Channel | Retry policy | Circuit breaker |
|---|---|---|
| Webhook | 3 attempts, 2s base with exponential backoff (x2) | Opens at 70% failure rate across a 10-call window; stays open 30s |
| Slack | 3 attempts, 2s base with exponential backoff (x2) | Opens at 70% failure rate across a 10-call window; stays open 30s |
| Email | 3 attempts, 5s base | Opens at 50% failure rate across a 5-call window; stays open 120s |
All three channels are also bulkheaded at 5 concurrent calls each, so one slow endpoint can't block the others.
Circuit-broken alerts are logged but not queued; if your webhook is down long enough to trip the breaker, backfill from `GET /api/v1/drift/events` once it recovers.
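As a concrete reading of the table, here is how the webhook channel's numbers map onto Resilience4j's public API. This is a sketch of the policy values only, not Drift Scanner's actual bean wiring:

```java
import java.time.Duration;
import java.util.function.Supplier;

import io.github.resilience4j.bulkhead.Bulkhead;
import io.github.resilience4j.bulkhead.BulkheadConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.core.IntervalFunction;
import io.github.resilience4j.decorators.Decorators;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;

public class WebhookResilience {

    /** Wraps a send call in the retry, breaker, and bulkhead from the table above. */
    public static Supplier<Boolean> decorate(Supplier<Boolean> send) {
        Retry retry = Retry.of("webhook", RetryConfig.custom()
                .maxAttempts(3)                                  // 3 attempts total
                .intervalFunction(IntervalFunction
                        .ofExponentialBackoff(2000, 2.0))        // 2s base, x2 backoff
                .build());

        CircuitBreaker breaker = CircuitBreaker.of("webhook", CircuitBreakerConfig.custom()
                .failureRateThreshold(70)                        // opens at 70% failures...
                .slidingWindowSize(10)                           // ...across a 10-call window
                .waitDurationInOpenState(Duration.ofSeconds(30)) // stays open 30s
                .build());

        Bulkhead bulkhead = Bulkhead.of("webhook", BulkheadConfig.custom()
                .maxConcurrentCalls(5)                           // 5 concurrent calls per channel
                .build());

        return Decorators.ofSupplier(send)
                .withRetry(retry)
                .withCircuitBreaker(breaker)
                .withBulkhead(bulkhead)
                .decorate();
    }
}
```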
## Idempotency
Each dispatch uses a SHA-256 key derived from `envId` + `baselineId` + `currentId`, so duplicate events (e.g. a retried scan) are detectable in logs.
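A sketch of how such a key can be computed. The plain concatenation and hex encoding here are assumptions; the docs only guarantee SHA-256 over those three IDs:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class IdempotencyKey {

    /** SHA-256 over envId + baselineId + currentId (separator-free concat: an assumption). */
    public static String of(String envId, String baselineId, String currentId) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha256.digest(
                    (envId + baselineId + currentId).getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest); // hex string you can grep for in logs
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e); // never on stock JVMs
        }
    }
}
```

Two deliveries carrying the same key point at the same baseline/current snapshot pair, which is what makes a retried scan stand out in the logs.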
## Testing alerts
From the dashboard, each channel has a **Send test event** action that fires a synthetic INFO diff through the configured sender. Use this to verify your Slack channel or webhook endpoint is reachable before relying on it in production.
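If you would rather exercise your endpoint without the dashboard, you can POST a hand-built INFO payload yourself and check for a 2xx. The URL and body below are stand-ins, not a Drift Scanner API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WebhookSmokeTest {
    public static void main(String[] args) throws Exception {
        // A synthetic INFO event in the payload shape documented above.
        String body = """
                {"id": "00000000-0000-0000-0000-000000000000",
                 "severity": "INFO", "breakingCount": 0, "warningCount": 0,
                 "infoCount": 1, "environmentName": "staging", "items": []}""";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://example.com/hooks/drift"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        System.out.println("status=" + response.statusCode()); // expect a 2xx
    }
}
```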
## Roadmap
- Per-environment email recipients
- Digest mode (batched alerts every N minutes)
- PagerDuty, Opsgenie, and Microsoft Teams channels
- Signed webhook payloads (HMAC-SHA256 header)