Oversharing is not caring: What’s at stake if your employees post too much online

From LinkedIn to X, GitHub to Instagram, there are plenty of opportunities to share work-related information. But posting could also get your company into trouble.

Employee advocacy has been around as a concept for over a decade. But what started out as a well-intentioned way to raise corporate profile, build thought leadership and support marketing also has some unintended consequences. When professionals post about their work, their company and their role, they’re hoping to reach like-minded professionals, as well as prospects and partners. But threat actors are also paying attention.

Once that information is in the public domain, it is often used to help build convincing spearphishing or business email compromise (BEC)-style attacks. The more information, the more opportunity for nefarious activity that could end up hitting your organization hard.

Where are your employees sharing?

The main platforms for sharing such information are the usual suspects. LinkedIn is perhaps the most obvious. It could fairly be described as the largest open database of corporate information in the world: a veritable treasure trove of job titles, roles, responsibilities and internal relationships. It’s also where recruiters post job listings, which may overshare technical details that can be leveraged later in spearphishing attacks.

GitHub is perhaps better known in a cybersecurity context as a place where absent-minded developers post hardcoded secrets, IP and customer details. But developers might also share seemingly innocuous details such as project names, CI/CD pipeline names, and the tech stacks and open source libraries they’re using. They might also expose corporate email addresses in Git commit metadata, which is populated from each developer’s Git configuration.
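To illustrate how easily that last category of information can be harvested, here is a minimal Python sketch. The repository path and corporate domain are hypothetical placeholders; it simply lists corporate email addresses recorded in a cloned public repository’s commit history and runs a very rough check for secret-like strings:

```python
import subprocess

# Hypothetical values: point these at a locally cloned public repository
# and at your own corporate email domain.
REPO_PATH = "/tmp/example-public-repo"
CORPORATE_DOMAIN = "@example-corp.com"

# Author emails recorded in commit metadata (set via `git config user.email`)
# are visible to anyone who clones the repository.
log = subprocess.run(
    ["git", "-C", REPO_PATH, "log", "--format=%ae"],
    capture_output=True, text=True, check=True,
)
exposed = sorted({e for e in log.stdout.splitlines() if e.endswith(CORPORATE_DOMAIN)})
print("Corporate emails visible in commit history:", exposed or "none")

# Very rough check for secret-like strings in the current tree; dedicated
# scanners such as gitleaks or trufflehog do this far more thoroughly.
hits = subprocess.run(
    ["git", "-C", REPO_PATH, "grep", "-I", "-l", "-E",
     r"(api[_-]?key|secret|password|token)\s*[:=]"],
    capture_output=True, text=True,
)
print("Files containing secret-like strings:", hits.stdout.splitlines() or "none")
```

Anything an attacker can pull out this quickly, a defender can audit just as quickly, which is the point of the exercise.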

Then there are the classic consumer-facing social platforms like Instagram and X. This is where employees are likely to share details of their travel plans for conferences and other events, which could be weaponized against them and their organization. Even information on your company website could be useful to a would-be fraudster or hacker. Think: details on technical platforms, vendors and partners, or major corporate announcements such as M&A activity. It could all provide a pretext for sophisticated phishing.

RELATED READING: Is your LinkedIn profile revealing too much?

Weaponizing information

The first stage of a typical social engineering attack is intelligence gathering. The next is weaponizing that intelligence in a spearphishing attack designed to trick the recipient into unwittingly installing malware on their device, or into sharing their corporate credentials for initial access. This could be achieved via an email, text or even a phone call. Alternatively, attackers might use the information to impersonate a C-level executive or supplier in an email, phone or video call requesting an urgent wire transfer.

These efforts usually require a blend of impersonation, urgency and relevance. Here are some hypothetical examples:

  • An adversary finds LinkedIn information on a new starter in an IT role at company A, including their core role and responsibilities. They impersonate a key tech vendor claiming that an urgent security update is required, referencing the target’s name, contact details and role. The update link is malicious.
  • A threat actor finds information on two colleagues in GitHub, including the project they’re working on. They impersonate one in an email asking the other to review an attached document, which is booby-trapped with malware.
  • A fraudster finds a video of an executive on LinkedIn, or a corporate website. They see on that target’s Instagram/X feed that they’re going to be presenting at a conference and will be away from the office. Knowing that the exec may be hard to contact, they launch a deepfake BEC attack using video or audio, tricking a finance team member into wiring urgent funds to a new vendor.

Cautionary tales

The above are only hypotheticals. But plenty of real examples exist of threat actors using “open source intelligence” (OSINT) techniques in the early stages of attacks. They include:

  • A BEC attack which cost Children’s Healthcare of Atlanta (CHOA) $3.6m: Threat actors likely scoured press releases about a newly announced campus to find out more details, including the hospital’s construction partner. They would then have used LinkedIn and/or the corporate website to identify key executives and finance team members of the construction firm involved (JE Dunn). Finally, they impersonated the CFO in an email to the CHOA finance team requesting they update their payment details for JE Dunn.
  • Russia-based SEABORGIUM and Iran-aligned TA453 groups use OSINT for reconnaissance ahead of spearphishing attacks on pre-selected targets. According to the UK NCSC, they use social media and professional networking platforms to “research their [targets’] interests and identify their real-world social or professional contacts.” Once trust and rapport have been established over email, they send a link to harvest victims’ credentials.

Stop the share? How to mitigate spearphishing risk

The risks of oversharing are real, but fortunately the remedies are straightforward. The most potent weapon in your armory is education. Update security awareness programs to ensure that all employees, from executives down, understand the importance of not oversharing on social media. In some cases, this will require a careful rebalancing of priorities, away from employee advocacy at all costs. Warn staff against sharing information in response to unsolicited DMs, even if they recognize the sender (their account may have been hijacked). And ensure they can spot phishing, BEC and deepfake attempts.

Back this up with a strict policy on social media use, defining red lines on what can and can’t be shared, and applying clear boundaries between personal and professional/official accounts. Corporate websites and accounts may also need to be reviewed and updated to remove any information that could be weaponized.

Multi-factor authentication (MFA) and strong passwords (stored in a password manager) should also be a given across all social media accounts, in case professional accounts are hijacked to target colleagues.

Finally, monitor publicly accessible accounts where possible for any information that could be leveraged for spearphishing and BEC. And run red team exercises against employees to test their awareness.
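To show what a basic starting point for that kind of monitoring could look like, here is a short, hypothetical Python sketch. It assumes you have already gathered the text of public posts (for example, from your own corporate accounts or from employees who have opted in) and simply flags posts that mention terms you consider sensitive, such as internal project names, key vendors or travel plans:

```python
import re

# Hypothetical watchlist: internal project codenames, key vendors,
# and phrases that often signal travel or out-of-office windows.
WATCHLIST = [
    r"\bproject\s+falcon\b",      # example internal codename
    r"\bacme\s+payments\b",       # example key vendor
    r"\b(heading|flying|off)\s+to\b.*\b(conference|summit|expo)\b",
    r"\bout\s+of\s+(the\s+)?office\b",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in WATCHLIST]

def flag_risky_posts(posts):
    """Return (post, matched_patterns) pairs for posts that hit the watchlist."""
    flagged = []
    for post in posts:
        hits = [p.pattern for p in PATTERNS if p.search(post)]
        if hits:
            flagged.append((post, hits))
    return flagged

# Example: post texts gathered from public profiles (placeholder data).
sample_posts = [
    "Excited to be flying to RSA Conference next week - DM me to meet up!",
    "Great team lunch today.",
    "Big milestone for Project Falcon going live with Acme Payments!",
]

for post, hits in flag_risky_posts(sample_posts):
    print(f"REVIEW: {post!r} matched {hits}")
```

Anything flagged this way is only a prompt for human review; the value lies in agreeing the watchlist with stakeholders, not in the script itself.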

Unfortunately, AI is making it faster and easier than ever for threat actors to profile targets, collect OSINT and then craft convincing emails and messages in perfect natural language. AI-powered deepfakes widen their options still further. The bottom line: if it’s in the public domain, assume a cybercriminal knows about it too … and will come knocking soon.


