Email, Slack, Discord, Webhooks, smart cooldowns, retries, and full audit history. Alerts that matter, without the noise.
Built for makers, agencies, and SaaS teams
Getting bombarded with notifications is as bad as missing critical alerts. You need fast signal with zero noise. Notifications should reach you where you work, not where they're convenient for the tool.
Most monitoring tools either spam you with every minor event or force you to check a dashboard manually. You get duplicate alerts, notification fatigue, and missed incidents. When everything is urgent, nothing is.
You need flexibility: different channels for different teams, smart filtering, and guaranteed delivery. Not one-size-fits-all email blasts.
PerkyDash sends alerts via Email, Slack, Discord, and Custom Webhooks. Each channel is configurable: set cooldowns, choose which events to receive, route to specific monitors, and validate with test messages.
Anti-spam cooldowns prevent duplicate alerts. Retry logic guarantees delivery. Audit logs show what was sent and why. You stay informed without the chaos.
Four powerful channels working together
Responsive HTML templates with severity colors, full incident details, and dashboard links.
Real-time Block Kit messages with interactive buttons and structured fields.
Rich embed messages with severity colors, structured fields, and direct dashboard links.
Send incidents to any system: PagerDuty, Opsgenie, Zapier, n8n, or custom integrations.
Modern HTML templates powered by Brevo for high deliverability. Color-coded by severity: red for DOWN, green for recovery, yellow for warnings. Each email includes monitor name, URL, type, response time, status code, and a clear CTA button linking to your incident dashboard.
Multi-recipient support means you can notify entire teams or stakeholders. Perfect for management visibility, backup alerting, and non-technical stakeholders who need status updates.
Example subject lines:
[PerkyDash] 🔴 api.yoursite.com is DOWN
[PerkyDash] ✅ api.yoursite.com is back UP
Use cases:
Management visibility, backup channel, stakeholder updates, audit trails
Example DOWN alert:
Monitor: Production API
URL: api.yoursite.com
Type: HTTP
Status: 503 Service Unavailable
Error: Connection timeout after 30s

Example recovery alert:
Monitor: Production API
Downtime: 6 minutes
Status: 200 OK
Response Time: 142ms
🔴 Monitor Down: Production API
Monitor: Production API
Type: HTTP
URL: api.yoursite.com
Severity: Critical
Started: 2 minutes ago
Error: Connection timeout after 30s
Channel targeting available
Send alerts to #incidents, #on-call, or any channel
Slack Block Kit formatting with severity color strips, structured fields, and interactive buttons. Target specific channels for different teams: send critical alerts to #on-call, degradation warnings to #engineering, and recovery notifications to #status.
Uses Slack's incoming webhook URL. Setup takes 2 minutes. Test validation ensures your configuration works before production alerts arrive.
Perfect for:
DevOps teams, SRE, on-call engineers, fast-moving startups
Instant incident response:
Get notified in seconds, not minutes. Act fast.
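To make the Block Kit formatting above concrete, here is roughly what an incoming-webhook payload for that alert could look like. This is a hedged sketch against Slack's public Block Kit format; the webhook URL and incident link are placeholders, and PerkyDash's actual message layout may differ.

```python
import requests  # pip install requests

# Sketch of a Slack incoming-webhook payload resembling the alert above.
# The attachment "color" renders the severity strip; blocks carry the fields.
payload = {
    "attachments": [{
        "color": "#d32f2f",  # red strip for DOWN
        "blocks": [
            {"type": "header",
             "text": {"type": "plain_text", "text": "🔴 Monitor Down: Production API"}},
            {"type": "section", "fields": [
                {"type": "mrkdwn", "text": "*Type:*\nHTTP"},
                {"type": "mrkdwn", "text": "*Severity:*\nCritical"},
                {"type": "mrkdwn", "text": "*URL:*\napi.yoursite.com"},
                {"type": "mrkdwn", "text": "*Started:*\n2 minutes ago"},
            ]},
            {"type": "actions", "elements": [
                {"type": "button",
                 "text": {"type": "plain_text", "text": "View Incident"},
                 "url": "https://example.com/incidents/123"},  # placeholder link
            ]},
        ],
    }]
}
# Placeholder webhook URL; use the one Slack generates for your channel.
requests.post("https://hooks.slack.com/services/T000/B000/XXXX", json=payload)
```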
Discord Embed messages with severity colors, structured fields, timestamps, and direct dashboard links. Footer branding ensures clarity. Simple webhook configuration, no OAuth complexity.
Perfect for indie developers, gaming teams, open source maintainers, and communities running their operations on Discord. Get incident alerts without switching tools.
Use cases:
Indie devs, gaming teams, communities, open source projects
Simple setup:
Just add your Discord webhook URL and test
⚠️ Monitor Degraded: API Response Slow
PerkyDash Monitoring
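For comparison, a Discord webhook payload producing an embed like the one above might look as follows. A minimal sketch against Discord's public webhook API; the URL is a placeholder and PerkyDash's real embeds may differ.

```python
import requests  # pip install requests

# Sketch of a Discord webhook payload resembling the embed above.
payload = {
    "embeds": [{
        "title": "⚠️ Monitor Degraded: API Response Slow",
        "color": 0xFFC107,  # yellow for warnings
        "fields": [
            {"name": "Monitor", "value": "Production API", "inline": True},
        ],
        "footer": {"text": "PerkyDash Monitoring"},
    }]
}
# Placeholder webhook URL; paste the one Discord generates for your channel.
requests.post("https://discord.com/api/webhooks/<id>/<token>", json=payload)
```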
JSON Payload Example
{
  "event": "incident.created",
  "timestamp": "2025-12-11T14:32:00Z",
  "severity": "critical",
  "monitor": {
    "name": "Production API",
    "type": "http",
    "url": "api.yoursite.com"
  },
  "incident": {
    "status_code": 503,
    "error": "Connection timeout",
    "response_time": null
  },
  "probe": {
    "location": "US-East",
    "ip": "192.0.2.1"
  }
}
HMAC-SHA256 Signature Verification
Header: X-PerkyDash-Signature: sha256=...
Optional custom headers supported
Authorization, API-Key, or any custom header
Send incident events to PagerDuty, Opsgenie, Telegram bots, internal dashboards, n8n, Zapier, Make, or any HTTPS endpoint. Full incident context in every payload: event type, timestamp, severity, monitor data, probe details, and error messages.
HMAC-SHA256 signature verification ensures webhook authenticity. Custom headers support authorization tokens. Retry system with exponential backoff guarantees delivery even when your endpoint is temporarily unavailable.
Integrate with anything:
PagerDuty, Opsgenie, Telegram, Zapier, n8n, Make, custom dashboards
Secure by default:
HMAC-SHA256 signatures + custom auth headers
Guaranteed delivery:
3 retries with exponential backoff (1min → 5min → 15min)
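On the receiving side, verifying the X-PerkyDash-Signature header takes a few lines. A minimal sketch, assuming the signature is HMAC-SHA256 over the raw request body with your channel secret, hex-encoded with the "sha256=" prefix shown above; confirm the exact signing scheme in the docs.

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, header_value: str, secret: str) -> bool:
    # Assumption: signature = hex(HMAC-SHA256(secret, raw_body)),
    # delivered as "sha256=<hex>" in the X-PerkyDash-Signature header.
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    received = header_value.removeprefix("sha256=")
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, received)
```

Reject any request whose signature fails this check before processing the payload.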
Cooldowns apply only to consecutive events of the SAME type. State changes always break through. This means you'll never miss a critical DOWN alert or recovery notification, even during cooldown periods.
Default cooldown is 5 minutes, configurable per channel. If a monitor flaps between UP and DOWN repeatedly, you get notified on each state change, but won't receive duplicate DOWN alerts within the cooldown window.
State changes always notify:
DOWN → UP, UP → DOWN, DEGRADED → DOWN always send alerts
Configurable per channel:
Set different cooldowns for email, Slack, Discord, webhooks
Cooldown Example Timeline (5-minute cooldown):
1. DOWN → Alert sent ✓
2. DOWN → Blocked (cooldown active)
3. UP → Alert sent ✓ (state change)
4. DOWN → Alert sent ✓ (state change)
Result: You get notified on every state change, but duplicate consecutive events are blocked. Signal without noise.
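In code, that decision rule boils down to something like this illustrative sketch (the function name and storage are hypothetical, not PerkyDash's internals):

```python
from datetime import datetime, timedelta, timezone

def should_notify(event_type: str,
                  last_event_type: str | None,
                  last_sent_at: datetime | None,
                  cooldown: timedelta = timedelta(minutes=5)) -> bool:
    # State changes always break through the cooldown.
    if event_type != last_event_type:
        return True
    # First event for this channel: nothing to suppress.
    if last_sent_at is None:
        return True
    # Same event type: suppress until the cooldown window has elapsed.
    return datetime.now(timezone.utc) - last_sent_at >= cooldown
```

Replaying the timeline above: the second consecutive DOWN returns False (blocked), while the UP and the DOWN after it both return True.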
Retry Timeline Example:
Initial attempt: Failed (Connection timeout)
Retry 1, after 1 min: Failed (502 Bad Gateway)
Retry 2, after 5 min: Success (200 OK)
Result: Critical alert delivered despite temporary failures. Guaranteed reliability.
If a notification fails (Slack is down, webhook endpoint unavailable, email server hiccup), PerkyDash automatically retries with exponential backoff: 1 minute → 5 minutes → 15 minutes.
Three spaced retries give your systems time to recover from temporary issues. All failures are logged in notification history with detailed error messages. You never lose critical alerts due to transient network problems.
3 automatic retries:
Exponential backoff handles temporary outages
Full error logging:
Every failure tracked in notification history
Never miss critical alerts:
Resilient delivery despite transient issues
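The schedule itself is simple. An illustrative sketch of the documented backoff (the `send` callable and synchronous `sleep` are stand-ins; a real system would queue and persist retries):

```python
import time

BACKOFF_SECONDS = [60, 300, 900]  # 1 min -> 5 min -> 15 min, as documented

def deliver_with_retries(send, payload) -> bool:
    # Initial attempt, then one retry per backoff delay.
    for attempt, delay in enumerate([0] + BACKOFF_SECONDS, start=1):
        time.sleep(delay)
        try:
            send(payload)
            return True  # delivered
        except Exception as err:
            # Each failure would be recorded in notification history.
            print(f"Attempt {attempt} failed: {err}")
    return False  # all attempts exhausted; logged as failed
```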
Each notification channel is independently configurable. Route specific events to specific channels. Control what gets sent and when (a sample configuration sketch follows the settings below).
Recipients / Endpoints
Email addresses, Slack channel URL, Discord webhook, custom HTTPS endpoint
Event Filtering
Choose which events to receive: DOWN, UP, DEGRADED
Scope Control
Apply to all monitors OR target a single monitor for fine-grained routing
Cooldown Setting
Default 5 minutes, adjustable per channel (0-60 minutes)
Send Test Button
Validate configuration before production alerts arrive
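Putting those settings together, a channel configuration could look roughly like this. The field names are hypothetical, for illustration only, not PerkyDash's actual API.

```python
# Hypothetical shape of a notification channel configuration.
slack_on_call = {
    "type": "slack",
    "webhook_url": "https://hooks.slack.com/services/T000/B000/XXXX",  # placeholder
    "events": ["DOWN", "UP"],  # event filtering
    "scope": "all",            # or a single monitor ID for fine-grained routing
    "cooldown_minutes": 5,     # 0-60, default 5
}
```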
Example 1: DevOps Team
Send all critical DOWN alerts to Slack #on-call channel. Send recovery notifications to Slack #incidents. Email weekly summary to management. Webhook to PagerDuty for after-hours escalation.
Example 2: Agency
Email client on DOWN events only (no spam). Slack internal team on all events. Discord webhook to #client-status for transparency. Different channels per client project.
Example 3: Solo Maker
Email for all events (backup channel). Discord #monitoring-bot for real-time alerts (where you already are). Webhook to Telegram bot for mobile push notifications.
Example 4: SaaS Team
Slack #incidents for all monitors. Separate webhook to Opsgenie for critical production API only. Email to engineering leads for critical incidents.
Choose which events trigger notifications for each channel
Incident Created (DOWN)
Monitor becomes unavailable or fails health check
Incident Resolved (UP)
Monitor recovers and returns to healthy state
Severity Escalated
Incident severity increases (e.g., warning → critical)
Monitor Degraded
Performance below threshold but not fully down
Every notification attempt is logged with status, timestamp, delivery channel, recipient, event type, and error messages (if failed). Retention: 90 days (Agency), 30 days (Pro), 7 days (Trial).
Notification history is essential for debugging (why didn't I receive that alert?), compliance (prove notification was sent), and reliability analysis (how often do retries succeed?).
Status tracking:
Sent, Failed, Skipped (cooldown), Retrying
Debugging made easy:
Filter by channel, status, event type, date range
Compliance & reporting:
Prove notifications were sent for audits and SLAs
Notification History Example:

[Sent] Slack: #on-call
Event: Incident Created (DOWN)
Monitor: Production API

[Retrying] Webhook: PagerDuty
Event: Incident Created (DOWN)
Error: Connection timeout after 30s
Will retry in 1 minute

[Skipped] Email: team@yoursite.com
Event: Incident Created (DOWN)
Reason: Cooldown active (3 min remaining)

[Sent] Webhook: PagerDuty
Event: Incident Created (DOWN)
Retry attempt 2 succeeded
Every notification channel includes a "Send Test" button. Validate your configuration before real incidents arrive. No surprises.
Test Email delivery
Verify email addresses are correct and receiving alerts
Test Slack channels
Confirm webhook URL is valid and messages appear in the right channel
Test Discord webhooks
Validate embed formatting and webhook permissions
Test custom webhooks
Verify HTTPS endpoint accepts payloads and auth headers work
Why test messages matter:
Best Practice: Always send a test message after configuring a new notification channel. It takes 5 seconds and prevents missed alerts.
Flexible channels that scale with your needs
| Feature | Trial | Pro | Agency |
|---|---|---|---|
| Notification Channels | 3 | 10 | Unlimited |
| Email Notifications | ✓ | ✓ | ✓ |
| Slack Integration | ✓ | ✓ | ✓ |
| Discord Integration | ✓ | ✓ | ✓ |
| Custom Webhooks | ✓ | ✓ | ✓ |
| Anti-Spam Cooldowns | ✓ | ✓ | ✓ |
| Retry with Exponential Backoff | ✓ | ✓ | ✓ |
| Granular Event Routing | ✓ | ✓ | ✓ |
| Send Test Validation | ✓ | ✓ | ✓ |
| Notification History | 7 days | 30 days | 90 days |
Different builders, same need for reliable notifications
You're building solo or with a tiny team. You need alerts where you already work, not another inbox to check.
How you use alerts:
Email for all events as a backup channel, Discord for real-time alerts where you already are, and a webhook to a Telegram bot for mobile push.
Why it matters:
Stay informed without context switching. Alerts that fit your workflow.
You manage dozens of client sites. Different clients need different notification strategies.
How you use alerts:
Email clients on DOWN events only, keep the internal team on Slack for all events, mirror status to a Discord #client-status channel, and run separate channels per client project.
Why it matters:
Proactive client communication. Internal team coordination. Professional reputation.
You have users depending on your service. Fast incident response is critical. Different teams need different alerts.
How you use alerts:
Slack #incidents for all monitors, a dedicated Opsgenie webhook for the critical production API, and email to engineering leads for critical incidents.
Why it matters:
Fast incident response. On-call coverage. Uptime = revenue.
Everything you need to know about alerts and notifications
Stay in control with smart, flexible, multi-channel notifications. Signal without noise. Reliability without complexity.
Free account. No credit card. Setup takes less than 5 minutes.