Cloudflare’s Record-Breaking 14.1 Bpps DDoS Attack Is a Warning Shot for the Whole Internet
Cloudflare just disclosed that it mitigated a DDoS attack from the Aisuru botnet that peaked at an absurd 14.1 billion packets per second (Bpps), smashing the previous record.[1][2] This wasn’t just trivia for security nerds: the same botnet has also been hammering Microsoft Azure and AI companies, and it’s a live-fire test of how fragile our web apps and APIs really are.[1][2]
The Aisuru botnet behind the attack is powered largely by armies of compromised home routers and IP cameras—classic “set it and forget it” devices that never see firmware updates and ship with trash defaults.[1][2] Cloudflare reports that this latest wave didn’t just push bandwidth; it pushed packet rates to the point where it could easily choke edge infrastructure, load balancers, and even inline firewalls that aren’t built for that kind of flood.[1][2]
Packet-per-second floods at this scale are especially nasty for app stacks because they aim to overwhelm connection tracking and CPU, not just raw bandwidth, meaning even services with fat pipes can still fall over if their edge isn’t tuned.[2] Aisuru has already been tied to a separate record-breaking attack against Microsoft Azure, which means this isn’t a “one vendor” story—it’s the new baseline of what real attackers can throw at cloud apps.[2]
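One concrete place this bites on Linux hosts is the kernel’s connection tracking and SYN backlog. As a rough sketch of the kind of kernel tuning involved, here is a hypothetical sysctl fragment—the file path and every value are illustrative, not recommendations, and need to be sized for your hardware and traffic:

```
# /etc/sysctl.d/99-ddos.conf — illustrative values only, tune per host
# SYN cookies keep half-open SYN floods from exhausting the accept queue
net.ipv4.tcp_syncookies = 1
# Larger SYN backlog and accept queue for bursty connection rates
net.ipv4.tcp_max_syn_backlog = 65536
net.core.somaxconn = 65536
# Raise conntrack table capacity if stateful tracking must stay on
net.netfilter.nf_conntrack_max = 1048576
# Expire stale established entries faster so the table drains under pressure
net.netfilter.nf_conntrack_tcp_timeout_established = 1800
```

At extreme packet rates even a well-tuned table can fill, which is why many operators disable conntrack entirely on dedicated edge boxes rather than tune it.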
On top of that, Cloudflare’s telemetry shows a noticeable shift in targeting: AI companies and API-heavy SaaS platforms are becoming prime DDoS victims, likely because their outages are high-visibility and high-impact.[2] Combine that with cheap-for-attackers, expensive-for-defenders volumetric traffic, and you get a perfect storm where scrappy botnets can pressure even hyperscalers if defenses aren’t automated and close to the edge.[1][2]
If you build or run anything on the internet—from a side project on a VPS to a multi-region microservice zoo—this matters directly to you. The days when DDoS was only a problem for banks and big gaming networks are gone; API gateways, login pages, GraphQL endpoints, and even status pages are all soft choke points that can be taken out if you’re not rate-limiting and fronting them with something smarter than a vanilla reverse proxy.[2]
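Before you can set sane rate limits on those choke points, you need to know what normal per-client volume looks like. A minimal sketch of that analysis, assuming nginx’s default “combined” access log format where the client IP is the first field (the function name and sample lines are illustrative):

```python
from collections import Counter

def top_talkers(log_lines, n=5):
    """Count requests per client IP from combined-format access log lines.

    Assumes the client IP is the first whitespace-separated field,
    as in nginx's default 'combined' log format.
    """
    counts = Counter(line.split(" ", 1)[0] for line in log_lines if line.strip())
    return counts.most_common(n)

# Toy sample: three requests from one IP, one from another
sample = [
    '203.0.113.7 - - [18/Nov/2025:10:00:01 +0000] "GET /v1/users HTTP/1.1" 200 512',
    '203.0.113.7 - - [18/Nov/2025:10:00:01 +0000] "GET /v1/users HTTP/1.1" 200 512',
    '198.51.100.2 - - [18/Nov/2025:10:00:02 +0000] "POST /v1/login HTTP/1.1" 401 64',
    '203.0.113.7 - - [18/Nov/2025:10:00:02 +0000] "GET /v1/users HTTP/1.1" 200 512',
]
print(top_talkers(sample))  # [('203.0.113.7', 3), ('198.51.100.2', 1)]
```

Run something like this over a day of logs and your rate-limit thresholds become data-driven instead of guesses.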
The bigger takeaway: “we’re behind Cloudflare/Azure, we’re fine” is not a strategy. You still need basic hygiene—separate critical control planes from public traffic, enforce strict per-IP and per-token limits, and make sure your infra fails gracefully instead of cascading when your edge is under stress.[2] Also, if your app depends on IoT or consumer gear in any way, assume some of it is already part of someone else’s botnet and design accordingly.[1][2]
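The per-IP and per-token limits mentioned above are usually implemented as token buckets. A minimal in-process sketch, not tied to any real framework—class names, rates, and the injected clock are all illustrative, and a production setup would typically back this with Redis or the gateway’s built-in limiter:

```python
import time

class TokenBucket:
    """Refills `rate` tokens/sec up to `capacity` (the burst size)."""
    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

class PerKeyLimiter:
    """One bucket per key (client IP or API token), created lazily."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.buckets = {}

    def allow(self, key, now=None):
        t = time.monotonic() if now is None else now
        bucket = self.buckets.get(key)
        if bucket is None:
            bucket = self.buckets[key] = TokenBucket(self.rate, self.capacity, now=t)
        return bucket.allow(now=t)

# 10 req/s steady rate with a burst of 20 per client
limiter = PerKeyLimiter(rate=10, capacity=20)
t0 = 1000.0  # injected clock for a deterministic demo
burst = [limiter.allow("203.0.113.7", now=t0) for _ in range(25)]
print(sum(burst))  # 20 requests pass, 5 are shed
```

Rejecting the shed requests with a cheap 429 instead of queuing them is part of the “fail gracefully” story: the limiter sheds load at the door rather than letting it cascade into the backend.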
Here’s a minimal example of how a team might harden a public API endpoint behind Nginx when they know volumetric and HTTP-flood attacks are getting nastier:
# nginx.conf snippet: basic rate limiting & DDoS hardening
http {
    # Rate-limit zone keyed on client IP (10 MB shared memory, 10 req/s)
    limit_req_zone $binary_remote_addr zone=api_rate:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=ip_conn:10m;

    # Return 429 instead of the default 503 when limits trip
    limit_req_status 429;
    limit_conn_status 429;

    # Upstream the API proxies to (placeholder addresses)
    upstream backend_pool {
        server 10.0.0.10:8443;
        server 10.0.0.11:8443;
    }

    server {
        listen 443 ssl http2;
        server_name api.example.com;
        # ssl_certificate / ssl_certificate_key omitted for brevity

        # Tight timeouts so slowloris-style attacks hurt less
        # (client_header_timeout is only valid at http/server level)
        client_header_timeout 5s;
        client_body_timeout 5s;
        keepalive_timeout 30s;

        # Limit concurrent connections per IP
        limit_conn ip_conn 20;

        location /v1/ {
            # Apply request rate limit with burst tolerance
            limit_req zone=api_rate burst=40 nodelay;

            # Drop obvious garbage fast
            if ($request_method !~ ^(GET|POST|PUT|DELETE|PATCH|OPTIONS)$) {
                return 405;
            }

            proxy_pass https://backend_pool;
            proxy_read_timeout 15s;
        }
    }
}
My take: this is another reminder that DDoS is evolving faster than a lot of teams’ threat models. If your platform still treats DDoS as a “network team problem” instead of an explicit part of app and API design, you’re basically shipping feature-rich targets and hoping someone else’s edge will save you when the next Aisuru-level botnet comes knocking.[1][2]

