Restricted Internet Connectivity

Minor incident · General, Core Network Infrastructure
2020-09-01 14:30 CEST · 1 hour, 13 minutes

Updates

Post-mortem

Timeline

On 2020-09-01 at 14:33 CEST, our monitoring systems started to alert us to high levels of incoming traffic. By 14:35 CEST, the majority of our uplink connections were fully utilized, which impacted legitimate traffic using these paths, as announced on our status page.
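For illustration, the sketch below shows the kind of utilization check behind such an alert. It is a minimal, hypothetical example; the interface names, link capacity, threshold, and counter values are made up and do not describe our actual monitoring stack.

```python
# Hypothetical uplink-saturation check: derive inbound bit rates from two
# SNMP ifHCInOctets counter samples and flag uplinks above a threshold.
# All names and numbers here are illustrative.

UPLINK_CAPACITY_BPS = 10e9   # assumed 10 Gbit/s per uplink
ALERT_THRESHOLD = 0.90       # assumed alert level: 90% utilization

def rx_bps(octets_before: int, octets_now: int, interval_s: float) -> float:
    """Inbound bit rate derived from two octet-counter samples."""
    return (octets_now - octets_before) * 8 / interval_s

def saturated_uplinks(samples: dict[str, tuple[int, int]], interval_s: float) -> list[str]:
    """Return the uplinks whose inbound utilization meets the alert threshold."""
    return [
        ifname
        for ifname, (before, now) in samples.items()
        if rx_bps(before, now, interval_s) / UPLINK_CAPACITY_BPS >= ALERT_THRESHOLD
    ]

# Fabricated 30-second counter deltas:
samples = {
    "uplink-a": (0, 36_000_000_000),   # ~9.6 Gbit/s -> saturated
    "uplink-b": (0, 10_000_000_000),   # ~2.7 Gbit/s -> fine
}
print(saturated_uplinks(samples, interval_s=30.0))   # ['uplink-a']
```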

At 14:37 CEST, we identified the target IP address of this unusual traffic, which clearly had to be classified as a DDoS attack. We immediately started our procedure to blackhole the target IP address, i.e. to drop traffic destined for that IP address at our upstream providers in order to free up capacity for legitimate traffic. Unfortunately, the blackholing was initially ineffective due to a mistake, which we discovered and corrected a few minutes later.
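To illustrate the mechanism (not our exact tooling): remotely triggered blackholing typically works by announcing the victim's /32 to the upstream providers, tagged with a blackhole BGP community, so that they drop the traffic at their own edge. The sketch below formats such an announcement in the style of ExaBGP's text API, using the well-known BLACKHOLE community from RFC 7999; the addresses are documentation prefixes, and in practice many upstreams require provider-specific communities instead.

```python
# Illustrative RTBH (remotely triggered blackhole) announcement, formatted
# for ExaBGP's text API. Prefixes and the next-hop are documentation
# addresses; real deployments often use provider-specific communities.

BLACKHOLE_COMMUNITY = "65535:666"   # well-known BLACKHOLE community (RFC 7999)

def blackhole_command(victim_ip: str, next_hop: str = "192.0.2.1") -> str:
    """Build a command announcing a host route (/32) for the victim,
    tagged with the blackhole community."""
    return (
        f"announce route {victim_ip}/32 "
        f"next-hop {next_hop} community [{BLACKHOLE_COMMUNITY}]"
    )

# Handed to a running ExaBGP process, e.g. via its stdin pipe:
print(blackhole_command("203.0.113.10"))
# -> announce route 203.0.113.10/32 next-hop 192.0.2.1 community [65535:666]
```

A subtle error in such an announcement, such as a wrong prefix or community, is enough to leave the blackholing without effect until it is spotted and corrected.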

By 14:57 CEST, blackholing was active, and traffic levels returned to normal within the following two minutes. Legitimate traffic was flowing again as usual for all servers and systems, except for the identified target address, which remained blackholed while still under attack.

At 16:36 CEST, we noticed that traffic to the blackholed IP address had decreased, and removed the blackholing. While Internet access to/from this IP address was restored for most connections, connections going through one of our upstream providers remained blocked. After we intervened at 17:20 CEST, this block was lifted as well by around 17:50 CEST, restoring full connectivity.
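Removing a blackhole mirrors the announcement: the host route is withdrawn and the upstreams converge back to the regular path. Continuing the hypothetical ExaBGP-style sketch above; note that an upstream which has installed its own filter on top of the announcement may need to be contacted separately, as happened here.

```python
# Illustrative counterpart to the announcement above: withdraw the
# blackhole host route so traffic to the victim flows normally again.

def withdraw_command(victim_ip: str, next_hop: str = "192.0.2.1") -> str:
    """Build the matching command that withdraws the blackhole route."""
    return f"withdraw route {victim_ip}/32 next-hop {next_hop}"

print(withdraw_command("203.0.113.10"))
# -> withdraw route 203.0.113.10/32 next-hop 192.0.2.1
```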

Root Cause

While the observed traffic pattern was already a clear indication of a DDoS attack, this suspicion was confirmed by a blackmail message that had been delivered to our inbox just moments before the attack started.

The attack targeted an IP address of one of our customers and fully utilized some of our upstream links, thereby impacting other customers as a side effect. This IP address was subsequently blackholed to fend off the attack traffic and protect our other customers, in accordance with our Terms of Service and internal procedures.

Measures Taken

Once connectivity was fully restored, we started analyzing the attack and our response. We reviewed our processes and improved our internal documentation so that in similar cases we will be able to react even faster and avoid misunderstandings.

We are also in contact with our upstream providers to find ways to improve communication and to make sure we can fully leverage their capabilities in cases like this.

While most of the pieces have been in place for a long time, we are confident that with the additional improvements we are even better prepared to quickly and effectively handle potential attacks in the future.

Please accept our apologies for the inconvenience this incident may have caused you and your customers.

September 3, 2020 · 11:55 CEST
Resolved

Internet connectivity is stable again. We will keep monitoring the situation and update this incident if necessary.
Please accept our apologies for the inconvenience this issue may have caused you and your customers.

September 1, 2020 · 15:43 CEST
Issue

We are seeing inbound traffic with unusual patterns and volumes.

Traffic to/from certain external targets might be affected by degraded performance (reduced throughput, increased latency, packet loss) to varying degrees. This includes traffic of virtual servers, DNS lookups using our resolvers, and requests to our object storage from external sources, as well as access to our website, Cloud Control Panel, and API.

Our engineers are investigating the issue and are working to fully restore our services. We will keep you posted.

September 1, 2020 · 14:30 CEST
