How I Responded to a Cybersecurity Incident as a Junior Sysadmin

Screenshot of the phishing email: “Please find attached an EFT Remittance advice from __ for your attention. This link will not work for recipient.” followed by a big [Open] button.
Suspicious?

A quick note: I do not yet have formal training in incident response, and I am just the most junior member of a team of three. That being said, feel free to tear my response apart, but let me know so I can learn.

The Incident

[Friday, 13 Nov 2020]

[8:30 AM] Two emails in my inbox stand out:

  • An email titled “…Wire/EFT…” with a big “Open” button, sent at 4:30 AM. An obvious scam, and nothing new at first glance: we have had emails from “@gmail.com” addresses with the display name set to the CEO’s to try to impersonate them. I click on the sender’s name anyway to check the domain it came from, just to be sure… and it is our domain. I feel uneasy.
  • A mass email at 8:15 AM from an administrator saying Microsoft Online had notified us of suspicious activity and warning everyone not to click on any links. My unease is allayed for the time being.

Step 1: Assess damage potential

[9:30 AM] I (unwisely, as I would later find out) assume that passwords have been reset because of the second email and decide to assess how serious a threat the phishing email itself is. After driving to my site for the day, I disconnect my laptop from the organisation network and connect it to my personal hotspot, start & snapshot a VM, and click on the phishing link.
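
For the curious, the snapshot dance boils down to a couple of commands. A minimal sketch, assuming VirtualBox and a throwaway VM called phish-sandbox (both the hypervisor and the VM name are placeholders here):

```python
import subprocess

VM = "phish-sandbox"  # a disposable VM; the name is just a placeholder

def vbox(*args: str) -> None:
    """Thin wrapper around the VBoxManage CLI."""
    subprocess.run(["VBoxManage", *args], check=True)

vbox("snapshot", VM, "take", "pre-phish")     # snapshot the known-clean state
vbox("startvm", VM, "--type", "gui")          # boot it and do the clicking inside

# ...after poking at the link...
vbox("controlvm", VM, "poweroff")             # hard stop
vbox("snapshot", VM, "restore", "pre-phish")  # discard anything the link did
```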

Honestly, hovering over the link already shows that it will bring me to a Canva-hosted design, but better safe than sorry. The link does bring me to a Canva design made to look like a Microsoft Online sign-in page and containing a disguised link. Hovering over this second link shows the expected suspicious-sounding domain which, when checked on an SEO site, is indeed a redirect.
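
If you do not trust your hovering, the redirect chain can also be traced from inside the isolated VM. Something along these lines, with a made-up URL standing in for the real one:

```python
# Run this from the isolated VM only - it actually contacts the suspicious host.
import requests  # third-party: pip install requests

url = "https://suspicious-sounding.example/redirect"  # placeholder for the real link

resp = requests.head(url, allow_redirects=True, timeout=10)

for hop in resp.history:                       # every redirect along the way
    print(hop.status_code, hop.headers.get("Location"))
print("lands on:", resp.url)                   # the final destination
```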

Knowing that users are safe if they only clicked the first link is enough for me, so I restore my snapshot, write up my findings, and email them to the team.

Step 2: Contain the spread

[10:00 AM] But I just want to be sure, because I am still uneasy that we do not know how a phishing email was sent using our domain. I make a call to ask if passwords have been reset. Apparently not! I explain my reasoning then call the user to explain that I am going to reset their password, and do so.

A number of users have emailed the helpdesk by this time in response to the second email, so I reset their passwords as well.
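
If more than a couple of accounts need resetting, this is worth scripting. A rough sketch against the Microsoft Graph REST API, assuming an admin-consented access token is already in hand (the token and the addresses below are placeholders):

```python
import requests  # third-party: pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {
    "Authorization": "Bearer <admin-consented-access-token>",  # placeholder
    "Content-Type": "application/json",
}

def reset_password(upn: str, new_password: str) -> None:
    """Set a new password and force a change at next sign-in."""
    body = {"passwordProfile": {"password": new_password,
                                "forceChangePasswordNextSignIn": True}}
    requests.patch(f"{GRAPH}/users/{upn}", json=body, headers=HEADERS).raise_for_status()

def revoke_sessions(upn: str) -> None:
    """Invalidate refresh tokens so any hijacked session stops working."""
    requests.post(f"{GRAPH}/users/{upn}/revokeSignInSessions",
                  headers=HEADERS).raise_for_status()

for upn in ["breached.user@example.org"]:  # plus anyone who clicked through
    reset_password(upn, "a-long-randomly-generated-password")
    revoke_sessions(upn)
```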

Step 3: Determine breach method

[11:00 AM] Now that I can rest easy, I have an early lunch and move on to the fun part.

First, I want to be sure that the phishing email truly did come from within our domain. I check the headers in my copy:

[in macOS Mail] View => Message => Raw Source

But it looks like it came from the provider that handles our helpdesk ticketing. Did they get breached? Why did only certain users get the email? It does not make sense. I ask around for another copy of the email and check the headers in that one. This copy came directly from the breached email address, at exactly 4:30 AM as well. Finally, I check the alert in Microsoft Online, and everything falls into place.
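
If you would rather do the header archaeology outside of Mail, Python’s standard email module reads the raw source just fine. A small sketch, assuming the message is saved as phish.eml:

```python
import email
from email import policy

# Raw source saved from Mail (View => Message => Raw Source) as phish.eml.
with open("phish.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

print("From:", msg["From"])
print("Date:", msg["Date"])

# Each hop prepends a Received header, so the last one is closest to the origin.
for i, hop in enumerate(msg.get_all("Received", []), start=1):
    print(f"hop {i}:", " ".join(str(hop).split()))

# SPF/DKIM/DMARC verdicts recorded by the receiving server, if present.
for verdict in msg.get_all("Authentication-Results", []):
    print("auth:", verdict)
```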

[12:30 PM] I conclude that the user’s Office 365 account was indeed breached and that the perpetrator emailed the entire address book all at once at 4:30 AM. Exchange intercepted such an obvious mass of spam, so only a few messages got through. The helpdesk address was one of the recipients, which is why I got a copy, being on that mailing list.

Because the organisation policy allows it, I log in to the user’s Office 365 account (with permission) to confirm once and for all:

Screenshot of Office 365 account sign-ins showing Unusual Activity from New York on 01 Nov 2020 and Lagos on 02 Nov 2020.
Apparently the user signed in from New York on 01 Nov 2020, then Lagos a day later.

Especially with COVID-19, I am pretty confident that the user was not in New York and Lagos on those dates.
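
One way to sanity-check sign-ins like these is the classic “impossible travel” calculation: how fast would the account have had to move between the two locations? A back-of-the-envelope sketch, with rounded coordinates and day-level timestamps:

```python
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

new_york = (40.71, -74.01)  # rounded coordinates
lagos = (6.52, 3.38)

km = haversine_km(*new_york, *lagos)  # roughly 8,400 km
hours = (datetime(2020, 11, 2) - datetime(2020, 11, 1)).total_seconds() / 3600
print(f"{km:.0f} km in {hours:.0f} h -> about {km / hours:.0f} km/h on average")
```

New York to Lagos in a day is physically doable by plane, so the speed alone is not damning; combined with COVID-era travel restrictions and knowing where the user actually was, though, it settles the question.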

Step 4: Patch holes

  • Check the Palo Alto logs for the two IP addresses to see if they ever hit our network (a scripted version of this check is sketched after this list).
  • Run Malwarebytes on the user’s Mac to make sure that nothing is harvesting passwords on the local computer (I would prefer a clean install, but business needs…)
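
For the first item, the same check can also be scripted over an exported traffic log instead of clicking through the UI. A minimal sketch, assuming a CSV export named traffic.csv with “Source address” and “Destination address” columns (the column names and the IPs are placeholders and vary by export):

```python
import csv

SUSPECT_IPS = {"203.0.113.7", "198.51.100.42"}  # placeholders for the two real IPs

hits = []
with open("traffic.csv", newline="") as f:      # hypothetical exported traffic log
    for row in csv.DictReader(f):
        if row.get("Source address") in SUSPECT_IPS or row.get("Destination address") in SUSPECT_IPS:
            hits.append(row)

print(f"{len(hits)} log entries involve the suspect IPs")
for row in hits[:20]:
    print(row.get("Receive Time"), row.get("Source address"), "->", row.get("Destination address"))
```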

Aaand, that’s it! I wish there were more to this list, including and especially enabling MFA, but “business needs” are a reality. And I don’t say this in a negative way, because the question is real: how much time and money do you spend because of one breached account? Is it a big enough risk to justify the cost? Should a business spend all its revenue on cybersecurity? It’s a balance in the end.

Ex-petroleum geologist. Fell in love with Linux and the CLI. Became a sysadmin. Fell in love with information security. Tech keeps me curious, humble, learning.