Last week the website Code Spaces (link) was hit by a distributed denial-of-service (DDoS) attack. On its own that is a fairly routine occurrence: defenses kick in, the traffic is absorbed, and normal access returns before long. What makes the Code Spaces attack interesting is that the attacker had also gained access to the company's EC2 control panel and demanded a ransom to stop the attack.
The link above has the full details of what happened next.
What can be learned from an attack like this?
DDoS attacks are still active and happen frequently. Evernote was hit earlier this month, with the attack causing at least four hours of outages. A video game company's website was hit this week as well, with traffic peaking at 110 gigabits per second. Estimates are that DDoS attacks will reach terabit scale in the near future.
Many organizations believe that everything is safe in the cloud. Basic functions such as backups, restores, and disaster recovery are entrusted to a cloud vendor, who must prioritize across many clients. Best practices dictate that your organization's business continuity plans take these risks and assumptions into consideration. Any time you give up those controls, you add risk to the equation.
Another risk in moving mission-critical functions to the cloud is Internet connectivity: if your Internet connection is down, you lose access to your production systems. With that in mind, a few practical takeaways:
• Test backups to ensure restores work and expectations are met.
• Implement business continuity planning and determine how cloud providers fit into those plans – test your disaster scenarios and be prepared.
• Determine how often connectivity issues occur and build contingency plans for reaching the cloud during outages.
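The backup-testing point above is easy to automate. As a minimal sketch (assuming your backups restore to plain files, and using a hypothetical `verify_restore` helper), one way to prove a restore actually worked is to compare checksums of the original and restored data rather than trusting the backup job's exit code:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A restore only counts if the restored bytes match the original."""
    return sha256_of(original) == sha256_of(restored)

if __name__ == "__main__":
    # Simulate one backup/restore cycle with temporary files.
    with tempfile.TemporaryDirectory() as d:
        src = Path(d) / "data.db"
        dst = Path(d) / "data.db.restored"
        src.write_bytes(b"production data")
        dst.write_bytes(src.read_bytes())  # stand-in for a real restore step
        print("restore verified:", verify_restore(src, dst))
```

In practice you would run a check like this on a schedule against a real restore into an isolated environment, so a silent backup failure surfaces long before you need the data in an emergency.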