Mar 03 2019

Interview with a large healthcare provider’s VP of IT

Keith Wiley, Director of Advisory Services for Layer 8 Security, recently sat down with the VP of IT at a large healthcare provider in the aftermath of a major breach. The perspectives and lessons learned that follow are shared for the benefit of all.

From a leadership perspective, risk management and cybersecurity can sometimes be pushed aside in favor of other business objectives. While it may be easy for those who have spent their entire careers in risk management and cybersecurity to understand how critical these functions are to the business, leadership may choose to focus on revenue-generating areas. Yet when an attack happens and a breach occurs, leadership demands of the IT and risk management departments, "Why were we not ready?!" because the end result is a financial impact.

‘Emotet’ – the name may sound somewhat innocuous, but as a large healthcare provider found out, the malware is anything but. The Emotet banking trojan was first identified by security researchers in 2014. Emotet was originally designed as banking malware that attempted to sneak onto your computer and steal sensitive and private information. Later versions of the software added spamming and malware delivery services, including delivery of other banking trojans.

I recently sat down with the VP of IT at a large healthcare provider to discuss the experience, lessons learned, and what should have been done differently, post-Emotet infection. It’s important to note that the situation existed prior to this person’s appointment, and that the actions taken utilized every internal and external resource available at the time.

How did the infection happen?

Phishing was the attack vector. An employee clicked an email link, which triggered a download containing Emotet. The malware quickly harvested the employee’s credentials because everyone had local admin rights on their machines. Plus, there was no network segmentation, and external ports were open, which allowed Emotet to spread like wildfire, creating new versions of itself and adding Trickbot, another banking trojan, to the mix.
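
One of the enabling factors above was universal local admin rights. As a minimal illustration of the kind of endpoint audit that can flag this condition (a sketch under assumptions, not the provider’s actual tooling), the following Python checks whether the current Windows user holds local administrator rights:

```python
import ctypes
import sys

def current_user_is_local_admin() -> bool:
    """Return True if the current Windows user has local admin rights.

    Uses the Win32 shell API via ctypes; on non-Windows platforms the
    check does not apply, so we conservatively return False.
    """
    if sys.platform != "win32":
        return False
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except Exception:
        return False

if __name__ == "__main__":
    if current_user_is_local_admin():
        print("WARNING: current user has local administrator rights")
    else:
        print("OK: current user is not a local administrator")
```

Run across a fleet, a check like this helps quantify how many workstations would hand an attacker elevated privileges on the very first click.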

What was the state of your infrastructure leading up to the infection?

We had antivirus software, but it was not configured to protect against Emotet or other malware. We had no SIEM or endpoint security, and as mentioned, most external ports were open and there was no network segmentation. Additionally, few systems were on the latest patches, and we had limited password policies. We didn’t have an incident response policy or plan in place. PCs were open to the Internet, freely accessible over Remote Desktop Protocol (RDP). It was an ideal environment for Emotet to spread quickly and easily.
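
Internet-exposed RDP, as described above, is straightforward to spot from the outside. As a hedged sketch (the addresses below are documentation placeholders, not the provider’s network), a basic reachability check against the default RDP port might look like this:

```python
import socket

RDP_PORT = 3389  # default Remote Desktop Protocol port

def rdp_reachable(host: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder addresses (RFC 5737 TEST-NET); substitute your own
    # externally visible hosts when auditing.
    for host in ["203.0.113.10", "203.0.113.11"]:
        status = "EXPOSED" if rdp_reachable(host) else "closed/filtered"
        print(f"{host}: RDP {status}")
```

Any host reporting EXPOSED here is reachable by the same commodity scanners attackers use to find RDP targets.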

Did leadership understand information security and compliance risks? And what would you do differently in the future?

No, there was a complete lack of understanding. In retrospect, they have been surprised by the number of attempted attacks now that we have active security monitoring in place. They didn’t understand why hackers do what they do, or why a complex password or multi-factor authentication is so important. Accountability for decision-making sat with decentralized owners who chose to ignore issues they were not equipped to understand.

In the future, I will work to change the operating model to support clear roles and responsibilities and to embed a culture of accountability, with an understanding of how each action impacts the business. IT will work with the business to apply the right solutions and stop putting ad-hoc Band-Aids in place.

What efforts have now been made to address information security and regulatory compliance requirements in a post-infection business environment?

Security and infrastructure are now centrally owned. We’re creating an awareness program and consistently educating stakeholders on the risks our business faces and the exploits that are out in the wild. We are making proactive decisions to protect the company now and in the future. We’re making sure that partners have SOC 2 Type 2 (or similar) attestations in place. We’ve deployed CrowdStrike for endpoint protection, and we’re adding network segmentation, Infoblox to manage IP addresses, and SCCM to manage endpoints and patching. We are putting strict password policies in place, and we’ve closed our external ports.
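
To make the "strict password policies" point concrete, here is a minimal sketch of the kind of complexity rule such a policy might enforce. The length and character-class thresholds below are illustrative assumptions, not the provider’s actual policy:

```python
import re

# Illustrative threshold only; a real policy sets its own minimum.
MIN_LENGTH = 14

def meets_policy(password: str) -> bool:
    """Check a password against a simple complexity policy:
    minimum length plus uppercase, lowercase, digit, and symbol classes."""
    if len(password) < MIN_LENGTH:
        return False
    required_classes = [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(pattern, password) for pattern in required_classes)

if __name__ == "__main__":
    for candidate in ["password123", "C0rrect-Horse-Battery!"]:
        verdict = "OK" if meets_policy(candidate) else "rejected"
        print(f"{candidate} -> {verdict}")
```

In practice these rules are enforced centrally (for example through Active Directory group policy rather than application code), but the logic of length plus character variety is the same.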

What was the total impact in terms of cost?

The total is still under review, but a conservative estimate is in the millions.

Any last words?

As people are the biggest asset and the biggest risk, taking the time to truly show appreciation for the work required to mitigate and close out this issue is critical. In our case, it took over five weeks of non-stop work, and I think the lack of recognition was a big miss on the company’s part.

Note from the interviewer: The person interviewed was, in my opinion, the sole reason the company did not suffer more from this attack, which was a potentially business-ending event. Post-event, the interviewee was made VP of Cybersecurity, with the directive to develop and implement business-aligned IT security.

This person is available to have open discussions if requested.