Security-Portal.cz is a web portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

Fake Claude AI website delivers new 'Beagle' Windows malware

Bleeping Computer - 40 min 1 sek zpět
A fake version of the Claude AI website offers a malicious Claude-Pro Relay download that pushes a previously undocumented Windows backdoor named Beagle. [...]
Kategorie: Hacking & Security

vm2 Node.js Library Vulnerabilities Enable Sandbox Escape and Arbitrary Code Execution

The Hacker News - 6 hodin 27 min zpět
A dozen critical security vulnerabilities have been disclosed in the vm2 Node.js library that could be exploited by bad actors to break out of the sandbox and execute arbitrary code on susceptible systems. vm2 is an open-source library used to run untrusted JavaScript code inside a secure sandbox by intercepting and proxying JavaScript objects to prevent sandboxed code from accessing the host
Kategorie: Hacking & Security

US government agency to safety test frontier AI models before release

Computerworld.com [Hacking News] - 7 hodin 52 min zpět

The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that would give the agency the ability to vet AI models from these organizations and others prior to their being made publicly available.

According to a release from CAISI, which is part of the department’s National Institute of Standards and Technology (NIST), it will “conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.”

The three join Anthropic and OpenAI, which signed similar agreements almost two years ago during the Biden administration, when CAISI was known as the US Artificial Intelligence Safety Institute.

An August 2024 release about those agreements indicated that the institute planned to provide feedback to both companies on “potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute (AISI).”

Microsoft said Tuesday in a blog about the latest agreement that it, and others like it, are essential to building trust and confidence in advanced AI systems. As AI capabilities advance, it said, so too must the rigor of the testing and safeguards that underpin them.

A shift toward proactive security

Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, said the CAISI agreements signal a shift toward proactive security for agentic AI by enabling government-led testing of advanced models before and after deployment.

This should, he said, “help strengthen visibility into autonomous behaviors while accelerating the development of standards to mitigate risks. By combining early access, continuous evaluation, and cross-sector collaboration, the initiative pushes the industry toward security-by-design for increasingly autonomous AI systems.”  

However, added Jean-Louis, “there are a few potential hurdles to consider, for example: how would intellectual property be protected under this approach? Regardless, I believe this is a positive step for the industry.”

Executive order ‘taking shape’

Following the announcement from CAISI, a report published on Wednesday indicated that the White House is preparing an executive order that would create a vetting system for all new artificial intelligence models, key among them Anthropic’s Mythos.

Bloomberg reported, “the directive is taking shape weeks after Anthropic revealed that its breakthrough Mythos model was adept at finding network vulnerabilities and could pose a global cybersecurity risk.”

Significant change in policy direction

Carmi Levy, an independent technology analyst, said, “it is patently obvious that this week’s announcement that establishes the Center for AI Standards and Innovation as the testing ground for frontier AI models is directly linked to the potential executive order that would lead to a vetting system for AI models.”

It isn’t coincidental, he said, “that the announcements were made in rapid succession, and it reinforces the growing urgency for governments in the US and elsewhere to tighten partnerships with key AI vendors to maximize AI-related security and minimize the potential for systemic risk.”

This latest flurry of activity from Washington marks a significant shift in policy direction from an administration that up until recently had been following a more laissez-faire approach to regulation, Levy pointed out.

Concerns around Anthropic’s Claude Mythos model, and the relative ease with which it could discover and exploit vulnerabilities in digital systems, “might have helped shift the federal government’s position on AI-related regulation, particularly around the renewed push to enforce standards for AI-related deployments across government infrastructure,” he said.

AI vendors like Google, Microsoft, and xAI, Levy added, “must walk a political highwire of sorts as they balance the need to release models into the marketplace in a timely, cost-effective manner with increasingly defined rules around AI-related cybersecurity and safety. The industry can’t afford a scenario where vendors themselves make up the rules as they go along.”

At the same time, he said, the recent showdown between Anthropic and the Pentagon illustrates why the vendors might be forgiven for viewing the federal government’s growing interest in AI testing and regulation with at least a certain degree of caution.

According to Levy, “while the administration’s efforts to centralize testing and oversight should streamline the go-to-market process for vendors and accelerate the development of best practices around frontier model development, the political overtones of recent government-industry partnerships cannot be ignored.”

This article originally appeared on CIO.com.

Kategorie: Hacking & Security

Hackers abuse Google ads for GoDaddy ManageWP login phishing

Bleeping Computer - 6 Květen, 2026 - 23:36
A phishing campaign delivered through Google sponsored search results is targeting credentials for ManageWP, GoDaddy's platform for managing fleets of WordPress websites. [...]
Kategorie: Hacking & Security

Mirai-Based xlabs_v1 Botnet Exploits ADB to Hijack IoT Devices for DDoS Attacks

The Hacker News - 6 Květen, 2026 - 22:21
Cybersecurity researchers have exposed a new Mirai-derived botnet that self-identifies as xlabs_v1 and targets internet-exposed devices running Android Debug Bridge (ADB) to enlist them in a network capable of carrying out distributed denial-of-service (DDoS) attacks. Hunt.io, which detailed the malware, said it made the discovery after identifying an exposed directory on a Netherlands-hosted
Kategorie: Hacking & Security

Chrome’s AI features can take up to 4GB of space on your computer

Computerworld.com [Hacking News] - 6 Květen, 2026 - 21:11

Google Chrome can automatically download a local AI model that takes up to 4 gigabytes of hard drive space on a computer when certain AI features are enabled, according to The Verge.

The file, called weights.bin, is used by Google’s Gemini Nano AI model to provide writing assistance, autocomplete, and fraud protection directly on the device. (Nano has been around since Gemini was introduced in late 2023.)

Since the model runs locally, the AI data is stored on the computer instead of in the cloud, which can provide better privacy, but also takes up storage space. Users can check whether the file is present by looking for the OptGuideOnDeviceModel folder in Chrome’s system files.
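
For readers who want to measure how much space the local model actually occupies, a minimal stdlib sketch is below. The folder name OptGuideOnDeviceModel comes from the article; its exact parent path inside Chrome's profile directory varies by OS and is an assumption, so pass whatever path you find on your machine.

```python
import os

def dir_size_bytes(path: str) -> int:
    """Total size of all files under `path` (0 if the folder doesn't exist)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

# Hypothetical example path (Windows); locate the folder yourself first:
# size = dir_size_bytes(os.path.expandvars(
#     r"%LOCALAPPDATA%\Google\Chrome\User Data\OptGuideOnDeviceModel"))
```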

To free up the space, users need to disable the on-device feature in Chrome’s settings under Settings > System.

Kategorie: Hacking & Security

Critical vm2 sandbox bug lets attackers execute code on hosts

Bleeping Computer - 6 Květen, 2026 - 20:38
A critical vulnerability in the popular Node.js sandboxing library vm2 allows escaping the sandbox and executing arbitrary code on the host system. [...]
Kategorie: Hacking & Security

New Cisco DoS flaw requires manual reboot to revive devices

Bleeping Computer - 6 Květen, 2026 - 20:06
Cisco patched a Crosswork Network Controller and Network Services Orchestrator denial-of-service vulnerability that requires manually rebooting targeted systems for recovery. [...]
Kategorie: Hacking & Security

ServiceNow continues its AI transformation with an integrated experience

Computerworld.com [Hacking News] - 6 Květen, 2026 - 19:31

ServiceNow has unveiled updates to its workflow management platform at its Knowledge customer event this week, advancing its redefinition of itself as the “AI control tower for business reinvention.”

The AI Control Tower product itself, introduced at last year’s event, gets new integrations with Microsoft Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP) and other LLM providers to extend governance and observability of enterprise infrastructure, adding to its existing links with OpenAI and Anthropic. The integrations also span applications such as SAP, Oracle, and Workday. In addition, Control Tower can now discover non-human identities and connected devices to bring OT and IoT under the same governance as AI agents and cloud services.

All this ties in to the ServiceNow Action Fabric, which opens the platform to any AI agent, whether built on ServiceNow or from another source, via a Model Context Protocol (MCP) server, the company said.

And thanks to the recent acquisition of Traceloop, Control Tower now provides more extensive observability into agent behavior at runtime. Five new risk frameworks aligned with NIST and EU AI Act standards offer compliance controls.

Autonomous workforce

To expand the reach of what ServiceNow calls the Autonomous Workforce (a group of specialist AI agents announced in February, which began with a single L1 IT service desk agent), the company has added “AI teammates” that work alongside humans in CRM, IT, employee services, and security and risk management.

The autonomous IT cohort includes an AIOps agent that detects anomalies, correlates events, and triggers remediation, and a specialist for site reliability engineering (SRE) that performs incident triage and postmortem documentation. Other new agents assist with asset lifecycle management and portfolio planning.

Autonomous CRM offers specialist agents for sales qualification and quoting, order fulfillment, managing invoice disputes, and service and renewal, and in the world of employee services, AI specialists act as digital employees with role-specific skills in HR, workplace services, legal, finance, procurement, supplier management, and health and safety.

To round out the offerings, ServiceNow announced Autonomous Security & Risk, designed to span the entire threat landscape, from finding and remediating vulnerabilities to examining third-party vendor risk.

Employee experience

ServiceNow EmployeeWorks, the previously announced “conversational front door for the enterprise”, is now generally available. In addition, ServiceNow announced Otto, an AI assistant that unifies Now Assist, Moveworks, and AI Experience, and operates across the enterprise.

“Rather than living inside a single application, ServiceNow Otto sits across the entire enterprise, understanding intent, routing work to the right agent, and executing it to completion,” the company said. “Employees, customers, and support teams talk, chat, search, browse, analyze, and build. ServiceNow Otto is designed to handle the rest, adapting to each employee’s role and location without requiring them to know which system handles their request. Actions are governed by AI Control Tower, which can log each AI interaction, enforce enterprise policies, and provide explainability for every decision.”

Otto is already available in EmployeeWorks and the AI Control Tower, and will be rolled out in all other products “in the year ahead.”

According to Nenshad Bardoliwalla, ServiceNow’s group VP of AI products, all this means that “together with a new commercial model that bundles everything customers need to deploy AI quickly, we’ve made it clear the era of sidecar AI is over.”

What technology analyst Carmi Levy finds most interesting in these announcements is how quickly we’re seeing AI-enabled workflows extend beyond their initial entry point in IT.

“What was once the exclusive domain of senior IT leaders and planners is now filtering across all operational areas of the typical organization, including CRM, HR, IT operations, security and risk,” he said. “AI is also deeply embedded in the average worker’s desktop and is rewriting their work experiences in the process. Likewise, it puts highly autonomous tools in the hands of organizations intent on improving productivity, sharpening customer responsiveness, and driving operational efficiencies.”

Stephen Elliot, group VP at IDC, added, “The agentic focus is critical as the company continues to expand its specialist agent library. Customers can adopt these across core workflows to realize business value and increase productivity. The recent commercial pricing model complements the agentic capabilities. It meets customers where they are in their AI maturity journey enabling a pragmatic approach to adoption.”

But, he added, “Customers should consider the combination of workflows, AI, data, governance, and security as they deploy AI capabilities. No one model can do it all.”

Indeed, he said, “We are hearing from some CIOs that they are pausing some AI use cases because of the security and governance risks.”

Charles Betz, VP principal analyst at Forrester, said that ServiceNow is on the right track, especially with its continued focus on data. “The data governance, provenance, and currency issues are not trivial. Agents reasoning at machine speed over a stale graph are going to produce wrong outputs, and it’ll be data-quality-based hallucination,” he said. In addition, “documenting decision traces within the AI domain is super important.”

Levy agreed. “ServiceNow’s offerings reflect a keen understanding of where AI can drive optimal benefit throughout all areas of the business, what those workflows might look like, and how the tools and supports need to evolve,” he said.

This story originally appeared on CIO.com.

Kategorie: Hacking & Security

DAEMON Tools devs confirm breach, release malware-free version

Bleeping Computer - 6 Květen, 2026 - 18:43
Disc Soft Limited, the maker of DAEMON Tools Lite, confirmed that the software had been trojanized in a supply chain attack and released a new, malware-free version. [...]
Kategorie: Hacking & Security

Why Linux Supply Chain Attacks Are Becoming a Nightmare for DevOps Teams

LinuxSecurity.com - 6 Květen, 2026 - 18:26
Linux has long carried a reputation for resilience, bolstered by open-source reviews, hardened kernels, and transparent development pipelines. While that trust is well-founded, attackers have shifted their focus to a more vulnerable target: the surrounding software supply chain.
Kategorie: Hacking & Security

Linux Systems Running Wireshark May Be Exposed to Remote Attacks

LinuxSecurity.com - 6 Květen, 2026 - 16:04
Wireshark is one of those tools Linux teams quietly depend on everywhere: SOC pipelines, packet capture nodes, incident response systems, and long-running forensic environments. That's what makes the newly disclosed vulnerabilities in Wireshark 4.6.5 more serious than a routine software update.
Kategorie: Hacking & Security

Why ransomware attacks succeed even when backups exist

Bleeping Computer - 6 Květen, 2026 - 16:04
Backups don't fail because they're missing, they fail because attackers destroy them first. Acronis explains how ransomware targets backup systems before encryption, leaving no path to recovery. [...]
Kategorie: Hacking & Security

Apple Intelligence hype cost the company $250M

Computerworld.com [Hacking News] - 6 Květen, 2026 - 15:55

The mishaps around Apple Intelligence have gone beyond denting Apple’s reputation – they have also cost the company $250 million in damages over smarter Siri delays.

Think back to the original introduction of Apple Intelligence and you might recall a promotional video that explained how a new and smarter Siri would act as your contextually-smart AI companion, helping you get things done. Almost two years later, that smarter Siri still hasn’t shipped — and while Apple has made major changes in management, AI strategy, and approach, this contextual companion isn’t now expected until later this year.

Hopefully.

Apple Intelligence can be seen as a Maps-launch-style debacle for the company. (Apple even had to deny that the video presentation of those features shown at WWDC 2024, no longer officially available, was made up.)

Apple Intelligence’s $250M punishment

The entire affair left some iPhone users unhappy, so they launched a class action lawsuit against the company for delaying introduction of the “more personalized Siri.” Apple agreed to pay $250 million to settle the case last December – a figure that works out to between $25 and $95 per device, depending on how many iPhone customers submit claims. 

(Compensation is available to US customers who purchased an iPhone 15 Pro, 15 Pro Max or the iPhone 16 family of devices between June 2024 and March 2025.)

The case against the company claimed it “Promoted AI capabilities that did not exist at the time, do not exist, and will not exist for two or more years.” 

To make matters worse, Apple pushed these new features after their introduction at WWDC — even linking a later iPhone update to its AI. Looking back now, this wasn’t a great idea since it made things much more embarrassing once Apple failed to deliver. That’s why the class action succeeded.

Don’t promise too much

The lesson here is that, in general, even a snake oil salesman needs to kill a couple of snakes before putting the essence in a bottle; in this case, the snake hadn’t yet been located. Apple has not admitted any wrongdoing as part of the settlement, saying it acted in “good faith” and “reasonably” thought it had complied with all applicable rules and regulations.

Apple now appears committed to a new partner-based strategy in which it builds the very best hardware on which to run AI, allowing users to choose whichever brand of AI they want to use on-device. At the same time, Apple is focused on building Apple Intelligence as a viable alternative. This will take time, but Apple will no doubt press ahead until it gets Apple Intelligence right.

In a statement, the company told 9to5Mac: “Since the launch of Apple Intelligence, we have introduced dozens of features across many languages that are integrated across Apple’s platforms, relevant to what users do every day, and built with privacy protections at every step.”

The plot thickens (intelligently)

That statement also stressed Apple’s continued focus on building those Apple Intelligence features. You could see that claim as an inevitable reaction to the criticism the company faces. I prefer to see it as confirmation that Apple has adopted the AI+ strategy, (best hardware and a choice of AI, including its own increasingly competitive Apple Intelligence brand). During last week’s earnings call, Apple CEO Tim Cook described the sheer importance of AI to its ecosystem.

“What truly sets Apple apart is how Apple Intelligence is woven into the core of our platforms, powered by Apple Silicon, and designed from the ground up to deliver intelligence that is fast, personal, and private,” he said. “This is not AI as a standalone feature, but AI as an essential intuitive part of the experience across our devices.”

More importantly, in the long run, Apple believes that by providing the best development platform it can also attract the AI developers and services it needs on which to build its future. I think this approach will succeed.

Still, as the class action settlement shows, the original introduction of Apple Intelligence may enter the history books as a classic case of hype over substance. Under internal and external pressure to regain the initiative in AI development, the company abandoned its usual conservative approach to making big claims, in which it tends to under-promise then over-deliver. Customers like nice surprises more than they enjoy empty promises; Apple usually knows that.

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.

Kategorie: Hacking & Security

Ars Asks: Share your shell and show us your tricked-out terminals!

Ars Technica - 6 Květen, 2026 - 15:32

I spend more time today than ever before interacting with terminal windows, which is something I don't think Past Me would have believed in the early '90s. Back then, poor MS-DOS was the staid whipping boy of the industry, and at least on the consumer side, graphical environments like Windows (and maybe even odder creatures like AmigaOS) seemed poised to stamp the command line into oblivion, leaving text interfaces behind as we all blasted into the ooey-GUI future.

As it turns out, though, the command line is still the best tool for some jobs—many jobs, in fact. I read a wise post some years ago (probably on Slashdot) arguing that a mouse-driven point-and-click interface essentially reduces the user to pointing at something on the screen and grunting, "DO! DO THAT!" at the computer. (The rise of right-click context menus adds the ability for the user to also grunt "MORE THINGS!" but doesn't otherwise add vocabulary.)

The command line, by contrast, gives the user the opportunity to precisely tell the computer what they want done, using words instead of one or two gestalts that the computer must interpret based on context.

MuddyWater hackers use Chaos ransomware as a decoy in attacks

Bleeping Computer - 6 Květen, 2026 - 15:02
The Iranian MuddyWater hackers disguised their operations as a Chaos ransomware attack, relying on Microsoft Teams social engineering to gain access and establish persistence. [...]
Kategorie: Hacking & Security

OceanLotus suspected of using PyPI to deliver ZiChatBot malware

Kaspersky Securelist - 6 Květen, 2026 - 15:00

Introduction

Through our daily threat hunting, we noticed that, beginning in July 2025, a series of malicious wheel packages were uploaded to PyPI (the Python Package Index). We shared this information with the public security community, and the malware was removed from the repository. We submitted the samples to Kaspersky Threat Attribution Engine (KTAE) for analysis. Based on the results, we believe the packages may be linked to malware discussed in a Threat Intelligence report on OceanLotus.

While these wheel packages do implement the features described on their PyPI web pages, their true purpose is to covertly deliver malicious files. These files can be either .DLL or .SO (Linux shared library), indicating the packages’ ability to target both Windows and Linux platforms. They function as droppers, delivering the final payload – a previously unknown malware family that we have named ZiChatBot. Unlike traditional malware, ZiChatBot does not communicate with a dedicated command and control (C2) server, but instead uses a series of REST APIs from the public team chat app Zulip as its C2 infrastructure.

To conceal the malicious package containing ZiChatBot, the attacker created another benign-looking package that included the malicious package as a dependency. Based on these facts, we confirm that this campaign is a carefully planned and executed PyPI supply chain attack.

Technical details

Spreading

The attacker created three projects on PyPI and uploaded malicious wheel packages designed to imitate popular libraries, tricking users into downloading them. This is a clear example of a supply chain attack via PyPI. See below for detailed information about the fake libraries and their corresponding wheel packages.

Malicious wheel packages

The packages added by the attacker and listed on PyPI’s download pages are:

  • uuid32-utils library for generating a 32-character random string as a UUID
  • colorinal library for implementing cross-platform color terminal text
  • termncolor library for ANSI color format for terminal output

The key metadata for these packages are as follows:

Pip install command      | File name                                     | First upload date | Author / Email
pip install uuid32-utils | uuid32_utils-1.x.x-py3-none-[OS platform].whl | 2025-07-16        | laz**** / laz****@tutamail.com
pip install colorinal    | colorinal-0.1.7-py3-none-[OS platform].whl    | 2025-07-22        | sym**** / sym****@proton.me
pip install termncolor   | termncolor-3.1.0-py3-none-any.whl             | 2025-07-22        | sym**** / sym****@proton.me

Based on the distribution information on the PyPI web page, we can see that the attacker offers x86 and x64 versions for Windows, as well as an x86_64 version for Linux. The colorinal project, for example, provides the following download options:

Distribution information of the colorinal project

Initial infection

The uuid32-utils and colorinal libraries employ similar infection chains and malicious payloads. As a result, this analysis will focus on the colorinal library as a representative example.

A quick look at the code of the third library, termncolor, reveals no apparent malicious content. However, it imports the malicious colorinal library as a dependency. This method allows attackers to deeply conceal malware, making the termncolor library appear harmless when distributing it or luring targets.
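
To illustrate the concealment trick, here is a hypothetical packaging fragment: only the package and version names are taken from the report; the metadata layout is a generic pyproject.toml sketch, not the attacker's actual build files.

```toml
# Hypothetical pyproject.toml for the decoy package: termncolor itself
# contains no malicious code, but declares the dropper-carrying package
# as an ordinary dependency, so pip installs it automatically.
[project]
name = "termncolor"     # benign-looking front package
version = "3.1.0"
dependencies = [
    "colorinal",        # malicious package pulled in transitively
]
```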

The termncolor library imports the malicious colorinal library

During the initial infection stage, the Python code is nearly identical across both Windows and Linux platforms. Here, we analyze the Windows version as an example.

Windows version

Once a Python user downloads and installs the colorinal-0.1.7-py3-none-win_amd64.whl wheel package file, or installs it using the pip tool, the ZiChatBot’s dropper (a file named terminate.dll) will be extracted from the wheel package and placed on the victim’s hard drive.

After that, if the colorinal library is imported into the victim’s project, the Python script file at [Python library installation path]\colorinal-0.1.7-py3-none-win_amd64\colorinal\__init__.py will be executed first.

The __init__.py script imports the malicious file unicode.py

This Python script imports and executes another script located at [Python library installation path]\colorinal-0.1.7-py3-none-win_amd64\colorinal\unicode.py. The is_color_supported() function in unicode.py is called immediately.

The code loads the dropper into the host Python process

The comment in the is_color_supported() function claims that the highlighted code checks whether the user’s terminal environment supports color. In reality, the code loads the terminate.dll file into the Python process and then invokes the DLL’s exported function envir, passing the UTF-8-encoded string xterminalunicod as a parameter. The DLL acts as a dropper, delivering the final payload, ZiChatBot, and then self-deleting. At the end of the is_color_supported() function, the unicode.py script file is also removed. These steps eliminate all malicious files in the library and deploy ZiChatBot.

For the Linux platform, the wheel package and the unicode.py Python script are nearly identical to the Windows version. The only difference is that the dropper file is named “terminate.so”.
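
The loader step described above (load a native library into the running Python process, then call an exported function with a UTF-8 byte string) uses plain ctypes machinery. A harmless sketch of the same calling pattern, using libc's strlen as a stand-in for the dropper's envir export:

```python
import ctypes
import ctypes.util

# Load a native library into the current Python process. The malware does
# the equivalent with terminate.dll / terminate.so; libc is a safe stand-in.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Invoke an exported function, passing a UTF-8-encoded byte string --
# the same pattern unicode.py uses to call the DLL's `envir` export.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
length = libc.strlen("xterminalunicod".encode("utf-8"))
```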

Dropper for ZiChatBot

From the previous analysis, we learned that the dropper is loaded into the host Python process by a Python script and then activated. The main logic of the dropper is implemented in the envir export function to achieve three objectives:

  1. Deploy ZiChatBot.
  2. Establish an auto-run mechanism.
  3. Execute shellcode to remove the dropper file (terminate.dll) and the malicious script file from the installed library folder.

The dropper first decrypts sensitive strings using AES in CBC mode. The key is the string-type parameter “xterminalunicode” of the exported function. The decrypted strings are “libcef.dll”, “vcpacket”, “pkt-update”, and “vcpktsvr.exe”.

Next, the malware uses the same algorithm to decrypt the embedded data related to ZiChatBot. It then decompresses the decrypted data with LZMA to retrieve the files vcpktsvr.exe and libcef.dll associated with ZiChatBot. The malware creates a folder named vcpacket in the system directory %LOCALAPPDATA%, and places these files into it.
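
The second half of that unpack pipeline can be sketched with the standard library. Python has LZMA built in but no AES, so the AES-CBC stage is represented here only by a comment (a library such as pycryptodome would fill it); the round-trip below shows what the dropper's final decompression step operates on.

```python
import lzma

# Stand-in for an unpacked PE payload (vcpktsvr.exe / libcef.dll in the
# real attack); "MZ" is the DOS-header magic of a Windows executable.
payload = b"MZ\x90\x00" + b"\x00" * 64

# In the real dropper, the embedded blob is first AES-CBC-decrypted;
# the resulting plaintext is LZMA-compressed data like this:
blob_after_aes = lzma.compress(payload)

# Final unpack step: LZMA-decompress to recover the files to drop.
recovered = lzma.decompress(blob_after_aes)
```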

To establish persistence for ZiChatBot, the dropper creates the following auto-run entry in the registry:

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run] "pkt-update"="C:\Users\[User name]\AppData\Local\vcpacket\vcpktsvr.exe"

Once preparations are complete, the malware uses the XOR algorithm to decrypt the embedded shellcode with the three-byte key 3a7. It then searches the decrypted shellcode’s memory for the string Policy.dllcppage.dll and replaces it with its own file name, terminate.dll, and redirects execution to the shellcode’s memory space.
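
A repeating-key XOR like the one described is symmetric: applying it twice restores the original bytes. A minimal sketch, assuming the key "3a7" is used as its three ASCII bytes (the report does not spell out the byte interpretation):

```python
KEY = b"3a7"  # three-byte XOR key named in the report (ASCII assumed)

def xor_cycle(data: bytes, key: bytes = KEY) -> bytes:
    """XOR `data` against a repeating key; running it twice round-trips."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_shellcode = b"\x90\x90\xc3"          # placeholder bytes, not real shellcode
encrypted = xor_cycle(toy_shellcode)     # what sits embedded in the dropper
decrypted = xor_cycle(encrypted)         # the dropper's decryption step
```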

The shellcode employs a djb2-like hash to resolve the names of certain APIs and locate their addresses. Using these APIs, it finds the dropper file terminate.dll, whose name was patched in by the DLL beforehand, then unloads and deletes it.
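
Hash-based API resolution works by comparing precomputed hashes against hashes of export names walked from a module's export table, so no API name strings appear in the shellcode. Since the report only says "djb2-like", the sketch below uses the classic djb2 constants (seed 5381, multiplier 33) as an assumption:

```python
def djb2(name: str) -> int:
    """Classic djb2 string hash (h = h*33 + c), truncated to 32 bits."""
    h = 5381
    for ch in name.encode("ascii"):
        h = (h * 33 + ch) & 0xFFFFFFFF
    return h

# The shellcode would carry values like this instead of the name itself,
# comparing them against hashes of each export-table entry:
target = djb2("GetProcAddress")
```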

Linux version

The Linux version of the dropper places ZiChatBot in the path /tmp/obsHub/obs-check-update and then creates an auto-run job using crontab. Unlike the Windows version, the Linux version of ZiChatBot only consists of one ELF executable file.

system("chmod +x /tmp/obsHub/obs-check-update");
system("echo \"5 * * * * /tmp/obsHub/obs-check-update\" | crontab -");
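
That crontab entry suggests a simple host check: scan the current user's crontab for the persistence path. A minimal detection sketch (the path is the IoC from the report; everything else is illustrative):

```python
import subprocess

IOC_PATH = "/tmp/obsHub/obs-check-update"  # persistence path from the report

def crontab_hits(crontab_text: str, ioc: str = IOC_PATH) -> list:
    """Return non-comment crontab lines that reference the IoC path."""
    return [line for line in crontab_text.splitlines()
            if ioc in line and not line.lstrip().startswith("#")]

def check_current_user() -> list:
    """Scan the current user's crontab; an empty crontab yields no hits."""
    out = subprocess.run(["crontab", "-l"], capture_output=True, text=True)
    return crontab_hits(out.stdout)
```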

ZiChatBot

The Windows version of ZiChatBot is a DLL file (libcef.dll) that is loaded by the legitimate executable vcpktsvr.exe (hash: 48be833b0b0ca1ad3cf99c66dc89c3f4). The DLL contains several export functions, with the malicious code implemented in the cef_api_mash export. Once the DLL is loaded, this function is invoked by the EXE file. ZiChatBot uses the REST APIs from Zulip, a public team chat application, as its command and control server.

ZiChatBot is capable of executing shellcode received from the server and only supports this one control command. Once it runs, it initiates a series of sequential HTTP requests to the Zulip REST API.

In each HTTP request, an API authentication token is included as an HTTP header for server-side authentication, as shown below.

// Auth token:
// TW9yaWFuLWJvdEBoZWxwZXIuenVsaXBjaGF0LmNvbTpVOFJFWGxJNktmOHFYQjlyUXpPUEJpSUE0YnJKNThxRw==
// Decoded auth token:
// [email protected]:U8REXlI6Kf8qXB9rQzOPBiIA4brJ58qG

ZiChatBot utilizes two separate channel-topic pairs for its operations. One pair transmits current system information, and the other retrieves a message containing shellcode. Once the shellcode is received, a new thread is created to execute it. After executing the command, a heart emoji is sent in response to the original message to indicate the execution was successful.
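The Zulip REST calls involved are ordinary, documented endpoints: `GET /api/v1/messages` with a JSON `narrow` filter to read a channel/topic pair, and `POST /api/v1/messages/{id}/reactions` to leave the emoji reaction. A sketch that only constructs such requests (placeholder organization URL; the Basic auth header and the actual `urlopen` call are omitted):

```python
import json
import urllib.parse
import urllib.request

SITE = "https://example.zulipchat.com"  # placeholder org, not the attacker's

def build_fetch(channel: str, topic: str) -> urllib.request.Request:
    # Newest message in the tasking channel/topic pair
    narrow = json.dumps([{"operator": "stream", "operand": channel},
                         {"operator": "topic", "operand": topic}])
    qs = urllib.parse.urlencode({"anchor": "newest", "num_before": 1,
                                 "num_after": 0, "narrow": narrow})
    return urllib.request.Request(f"{SITE}/api/v1/messages?{qs}")

def build_react(message_id: int) -> urllib.request.Request:
    # Acknowledge execution with a "heart" reaction on the tasking message
    body = urllib.parse.urlencode({"emoji_name": "heart"}).encode()
    return urllib.request.Request(
        f"{SITE}/api/v1/messages/{message_id}/reactions",
        data=body, method="POST")
```

Riding a legitimate SaaS chat API this way gives the operator TLS transport and a reputable domain for free, which is precisely why the traffic blends in until the hostname itself is denylisted.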

Infrastructure

We did not find any traditional infrastructure, such as compromised servers or commercial VPS services and their associated IPs and domains. Instead, the malicious wheel packages were uploaded to the Python Package Index (PyPI), the public Python package repository, and the malware, ZiChatBot, uses Zulip's public team chat REST API as its command and control channel.

The "helper" organization that the attacker registered on the Zulip service has since been officially deactivated by Zulip. However, infected devices may still attempt to connect to the service, so to help locate and remediate them, we recommend adding the hostname helper.zulipchat.com to your denylist.

Victims

The malware was uploaded in July 2025. Upon discovering these attacks, we quickly released an update for our product to detect the relevant files and shared the necessary information with the public security community. As a result, the malicious software was swiftly removed from PyPI, and the organization registered on the Zulip service was officially deactivated. To date, we have not observed any infections based on our telemetry or public reports.

Zulip has officially deactivated the “helper” organization

Attribution

Based on the results from our KTAE system, the dropper used by ZiChatBot shows a 64% similarity to another dropper we analyzed in a TI report, which was linked to OceanLotus. Reverse engineering shows that both droppers use nearly identical algorithms and logic to decrypt and decompress their embedded payloads.

Analysis results of dropper using KTAE system

Conclusions

As an active APT organization, OceanLotus primarily targets victims in the Asia-Pacific region. However, our previous reports have highlighted a growing trend of the group expanding its activities into the Middle East. Moreover, the attacks described in this report – executed through PyPI – target Python users worldwide. This demonstrates OceanLotus’s ongoing effort to broaden its attack scope.

In the first half of 2025, a public report revealed that the group launched a phishing campaign using GitHub. The recent PyPI-based supply chain attack likely continues this strategy. Although phishing emails are still a common initial infection method for OceanLotus, the group is also actively exploring new ways to compromise victims through diverse supply chain attacks.

Indicators of compromise

Additional information about this activity, including indicators of compromise, is available to customers of the Kaspersky Intelligence Reporting Service. If you are interested, please contact [email protected].

Malicious wheel packages
termncolor-3.1.0-py3-none-any.whl
5152410aeef667ffaf42d40746af4d84

uuid32_utils-1.x.x-py3-none-xxxx.whl
0a5a06fa2e74a57fd5ed8e85f04a483a
e4a0ad38fd18a0e11199d1c52751908b
5598baa59c716590d8841c6312d8349e
968782b4feb4236858e3253f77ecf4b0
b55b6e364be44f27e3fecdce5ad69eca
02f4701559fc40067e69bb426776a54f
e200f2f6a2120286f9056743bc94a49d
22538214a3c917ff3b13a9e2035ca521

colorinal-0.1.7-py3-none-xxxx.whl
ba2f1868f2af9e191ebf47a5fab5cbab

Dropper for ZiChatBot
Backward.dll
c33782c94c29dd268a42cbe03542bca5
454b85dc32dc8023cd2be04e4501f16a

Backward.so
fce65c540d8186d9506e2f84c38a57c4
652f4da6c467838957de19eed40d39da

terminate.dll
1995682d600e329b7833003a01609252

terminate.so
38b75af6cbdb60127decd59140d10640

ZiChatBot
libcef.dll
a26019b68ef060e593b8651262cbd0f6
