Account Takeover Goes Blue and Takes Out University of Michigan
Everyone's favorite attack at the beginning of 2015 was the social media account takeover, though these attacks seem to have died down in recent months. Most of them came in the spring and hit the likes of CENTCOM, Chipotle and T-Swizzle (Taylor Swift for all you Tay-Tay haters). As summer came to its final months, my “time since last social network account takeover” counter was giving me hope that the world was doing OK. But that changed a few weeks ago with the Wolverines out in Michigan.
On Wednesday, August 12, the University of Michigan's official Facebook pages began sending malicious content to fans of their football, basketball and athletics programs. A fantastic blog post by the college's Social Media Director was published about a week later, outlining the attack. The malicious content came from a compromised administrative account; not a huge surprise considering social account takeover has become popular as a form of infosec-style vandalism. But, unlike the previous Twitter takeovers, there were a few elements in the timeline of this attack that were alarming:
- A member of the University of Michigan's Social Media Department was phished, giving the attackers full administrative access to the page.
- Their ITS and social media teams were locked out of the page for hours because the attack happened in the early morning (~3 a.m.), while support staff were asleep or not yet at work.
- The only way they found out about the attack was through user-submitted reports to the campus IT department.
Although the University eventually recovered the accounts, it took roughly five hours after the compromise to do so. Imagine losing control of your company’s website, domain controller or a database for five hours while an attacker runs amok. Imagine the attackers being able to directly interact with your customers and partners. In those cases, you would at least be able to pull the plug on a system or block outgoing traffic to keep data from leaving the network. Sadly, those last-ditch efforts do not exist for social networks. As a security practitioner, you could ban all social media use at your company, but it’s 2015. You won’t, and you can’t. Here’s why:
- Social media has a 100% higher lead-to-close rate than outbound marketing
- 80% of US social network users prefer to connect to brands through Facebook
- 53% of Americans who follow brands on social media are more loyal to those brands
For a plethora of statistics supporting social media, HubSpot has plenty to sink your teeth into. With so much emphasis placed on social media as a marketing and outreach tool, it’s critical to keep the security perspective in mind. It’s easy for security teams to pretend social media is not their jurisdiction, but it’s a domain that needs to be protected just like anything else. Security teams often fail to recognize the vulnerabilities introduced through social networks, and sites like Twitter and Facebook need to be part of a security engineer’s risk management plan.
Social networks offer a (mostly) free platform that removes the complexities of running a high-traffic website. Network infrastructure, complex service software, database management, SLAs, patching, development teams and deployments are all abstracted away, and those responsibilities shift to the social networks themselves. This reduction of infrastructure burden means the attack surface shrinks significantly, but it is wrong to assume that the risk of an attack shrinks with it. The accompanying lack of control over the infrastructure leaves organizations helpless if one of these accounts is compromised, and a takeover is easily extended to further assets such as email accounts, internal corporate credentials, smartphones and other devices. The proverbial power plug won’t be there to pull once you lose control. So raising the cost of attacking an account, a page, a tweet or even a hashtag is a problem that security teams need to start worrying about. The University of Michigan learned the implications of these risks the hard way and quickly realized that it needed to ramp up its security practices to encompass social.
When building an infosec protection plan or policy around social media, security teams should focus on securing two things: accounts and content. Account security is where U of M first failed: they did not have two-factor authentication (2FA) enabled for the administrative accounts of their pages (they enabled it afterward), and the account that was phished appears to have been a staff member’s personal account. If that is the case, even the best on-campus anti-phishing protections wouldn’t have stopped this person from being phished at home on a personal computer or on their phone while jogging.
Here’s a short but effective checklist that could have helped U of M’s situation:
- Reduce the number of people with access to official accounts as much as possible.
- All social logins should be routed through a burner email address with a robust password and 2FA.
- For networks like LinkedIn and Facebook, which associate a company’s page with a personal account, that admin’s personal account should have equally strong security controls, and access to it should be limited to a trusted, on-campus device.
- All authentication should come through a single securely managed device. This, of course, applies to publishing platforms as well.
The key word here is ‘managed’. The manager should be a security engineer within the organization who can provision access appropriate to current operational needs. If a managed account does get compromised, it should not have more privileges than the manager, so that the team can respond to the compromise accordingly.
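To put ‘managed’ into practice, it helps to keep a simple inventory of official accounts and audit it against the checklist above. The sketch below is a minimal, hypothetical example of such an audit; the inventory format, field names and thresholds are assumptions, not any platform’s API.

```python
# Hypothetical audit of an inventory of official social accounts against the
# checklist above. The inventory format, field names and thresholds are
# assumptions, not any platform's API.

MANAGED_EMAIL_DOMAIN = "social.example.edu"  # assumed burner-mail domain
MAX_ADMINS = 3                               # assumed cap on page admins

accounts = [
    {"network": "Facebook", "page": "Athletics",
     "admins": ["alice", "bob"],
     "login_email": "athletics@social.example.edu",
     "two_factor": True, "trusted_device_only": True},
    {"network": "Facebook", "page": "Football",
     "admins": ["alice", "bob", "carol", "dave"],
     "login_email": "coach.personal@example.com",
     "two_factor": False, "trusted_device_only": False},
]

def audit(account):
    """Return a list of checklist violations for a single account."""
    findings = []
    if len(account["admins"]) > MAX_ADMINS:
        findings.append(f"too many admins ({len(account['admins'])})")
    if not account["login_email"].endswith("@" + MANAGED_EMAIL_DOMAIN):
        findings.append("login not routed through a managed burner address")
    if not account["two_factor"]:
        findings.append("2FA is disabled")
    if not account["trusted_device_only"]:
        findings.append("access is not limited to a trusted, managed device")
    return findings

for acct in accounts:
    for finding in audit(acct):
        print(f"[{acct['network']}/{acct['page']}] {finding}")
```

In practice the inventory would be fed from an identity management system or the networks’ own admin tooling, but even a spreadsheet-backed version of this check would have flagged a page admin logging in with a personal account and no 2FA.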
Once account security is addressed, the much harder problem is content security. U of M appeared not to have proper content inspection in place to ensure that published material adhered to basic security and policy requirements (i.e., no malicious content, no unsanctioned materials, etc.) and instead found out via user reporting.
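A basic outbound check along these lines doesn’t need to be elaborate. The sketch below illustrates the idea against Facebook’s Graph API: periodically pull a page’s recent posts and flag anything that links outside an approved set of domains. The API version, fields, page identifier, token and allowlist shown are illustrative assumptions, not a production monitor.

```python
# Minimal sketch of an outbound content check: periodically pull a page's
# recent posts and flag anything linking outside an approved set of domains.
# The Graph API version, fields, page identifier, token and allowlist are
# illustrative assumptions.
import re
import requests

PAGE_ID = "UMichFootball"                        # hypothetical page identifier
ACCESS_TOKEN = "REPLACE_WITH_PAGE_ACCESS_TOKEN"  # a page token you manage
ALLOWED_DOMAINS = {"umich.edu", "mgoblue.com"}   # assumed allowlist of link targets

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def fetch_recent_posts():
    """Pull the page's most recent posts from the Graph API."""
    resp = requests.get(
        f"https://graph.facebook.com/v2.4/{PAGE_ID}/posts",
        params={"fields": "id,message,created_time", "access_token": ACCESS_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

def flag_suspicious(posts):
    """Yield (post id, domain) for any post linking to an unapproved domain."""
    for post in posts:
        for host in URL_RE.findall(post.get("message") or ""):
            host = host.lower()
            if host.startswith("www."):
                host = host[4:]
            if not any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
                yield post["id"], host

if __name__ == "__main__":
    for post_id, host in flag_suspicious(fetch_recent_posts()):
        print(f"ALERT: post {post_id} links to unapproved domain {host}")
```

A real deployment would also watch comments, mentions and direct messages, and would alert the social media and security teams the moment something unapproved goes out rather than waiting for users to report it.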
Content security involves the continuous safeguarding of inbound and outbound content. Once the Wolverines’ administrative account was compromised, it began posting malicious content to their almost 1.5 million followers. These pages reach 50,000 current students, hundreds of thousands of alumni and the fans of 27 varsity teams. For comparison, my beloved Buffalo Bills (yes everyone, they are a professional sports team) have only 700k fans on Facebook. With Michigan’s potential reach and the lewdness of the content the attackers posted, this compromise was extremely effective at spreading malicious content to the largest number of users in the shortest amount of time. The image below shows the reach of the Michigan Football page.
Earlier this year, I gave a presentation at ShmooCon with Johns Hopkins University. We presented the findings of a study that measured the potential impact an individual malicious user can have on fans of specific colleges. The students, broken into teams, were each assigned a ‘target’ university and instructed to use an emulated attack link to measure engagement and prove the effectiveness of their social media red team attack strategies. Interestingly enough, the University of Michigan was one of the target universities assigned to a team.
The team managed to get into closed, student-run Facebook groups and began posting a link to a fake UMich Jobs website. They designed this website to phish user information (e-mail, phone number, etc.) as well as to load the emulated attack link we gave them. Instead of a tracker, they could easily have bought infrastructure for an exploit kit and replaced the hidden link with something far more evil...
From fewer than a dozen posts, over 1,000 unique visitors came to the fake site, and roughly 1 in 8 of them (more than 125 people) filled out the survey. The site collected more than enough information to further pretext victims. The students helped gather valuable data that confirmed our hypothesis: protecting content is just as important as account security for social media. These networks offer attackers a way to target particular demographics of people, and for someone who wants to target college campuses, this vector is extremely effective. Additionally, one attack vector that went unused could have prompted the students to “authorize” the website to access their social media profiles, essentially siphoning additional personal information from their accounts.
Social media attacks against content and accounts will continue indefinitely until fans and customers use a different medium to get news and interact with these organizations, and that won’t be any time soon. News outlets report an account takeover almost immediately, and the speed of social media only amplifies the fallout. This rapid public reach makes account takeover an attractive option for hacktivists and groups pushing a political agenda, while content-based attacks appeal to those who want to compromise a specific userbase for monetary gain or spread malware to as many victims as possible. The lesson learned from both the U of M hack and the Hopkins research is that security teams must include social assets as part of their security posture.
Tags: Breaches