RSS aggregator
Google Links China, Iran, Russia, North Korea to Coordinated Defense Sector Cyber Operations
Four new reasons why Windows LNK files cannot be trusted
The number of ways that Windows shortcut (.LNK) files can be abused just keeps growing: A cybersecurity researcher has documented four new techniques to trick Windows users into running malicious actions through innocent-looking shortcuts.
Wietze Beukema demonstrated how to spoof the visible LNK destination, hide command-line arguments, and execute a different program than the one shown to the user, potentially offering attackers new vectors for phishing, USB-borne attacks, or initial access operations.
The disclosure adds to longstanding concerns about a flaw in LNK handling that has been repeatedly exploited by threat actors yet has proven difficult to fully eliminate.
Although Microsoft did not immediately respond to a request for comment on the disclosure, it has previously acknowledged risks in this area through security guidance, including a November 2025 advisory.
Microsoft has so far stopped short of classifying Windows’ handling of LNK files as a conventional “vulnerability,” but the sheer number of exploits Beukema has demonstrated makes the company’s position that this is merely a UI issue harder to defend.
Bait and switch
Windows shortcuts serve as pointers to programs or documents, but they can store more than simple file paths. LNK files can specify command-line arguments, working directories, icons, and other execution parameters, effectively acting as launchers.
Beukema identified multiple previously undisclosed ways to create mismatches between what a Windows shortcut appears to target and what it actually launches. Because the LNK format allows the target path to be stored in several structures, including the “TargetIDList”, “EnvironmentVariableDataBlock”, and “LinkInfo” fields, Windows must choose which value to trust. That decision process can be manipulated.
According to Beukema, under normal conditions, Windows Explorer prioritizes the EnvironmentVariableDataBlock entry when both it and the TargetIDList are present, displaying and executing that path. However, if the EnvironmentVariableDataBlock path is syntactically invalid as a Windows file path, Explorer still displays it in the Properties dialog but silently falls back to the hidden TargetIDList path at runtime.
This allows a shortcut to present a harmless-looking destination while executing a different program entirely.
Other flaws Beukema disclosed exploit further fallback behaviors arising from conflicting metadata. If an EnvironmentVariableDataBlock is present but the LinkTargetIDList does not match, Windows instead runs the executable from the LinkInfo structure while continuing to display the EnvironmentVariableDataBlock path.
In a variant on this exploit, supplying only the ANSI target value while leaving the paired Unicode field empty causes Explorer to treat the data as inconsistent. It displays a different path from the LinkTargetIDList, disables the editable Target field, and hides arguments. Yet the concealed ANSI path is executed.
Together, these behaviors can potentially enable attackers to spoof the visible target, conceal the real one, and mislead users into launching unintended programs.
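The precedence chain described above can be modeled as a small decision function. This is a toy reconstruction of the behavior the article describes, not Explorer’s actual (undocumented) logic, and the path-validity check is a naive stand-in for whatever validation Windows really performs:

```python
import re

def looks_like_valid_path(p: str) -> bool:
    # Naive syntactic check (an assumption for this sketch; Windows'
    # real path-validation rules are more involved).
    return bool(re.match(r'^(?:[A-Za-z]:\\|%[^%]+%)[^<>"|?*]*$', p))

def resolve_lnk_target(env_path=None, id_list_path=None, link_info_path=None):
    """Toy model of the described precedence: Explorer displays the
    EnvironmentVariableDataBlock path, but if that path is syntactically
    invalid it silently executes a different structure's path instead.
    Returns a (displayed_path, executed_path) pair."""
    if env_path is None:
        return id_list_path, id_list_path
    displayed = env_path
    if looks_like_valid_path(env_path):
        executed = env_path
    elif id_list_path is not None:
        executed = id_list_path      # silent fallback to the hidden TargetIDList
    else:
        executed = link_info_path    # last-resort fallback to LinkInfo
    return displayed, executed
```

The spoof is the gap between the two return values: a syntactically invalid but harmless-looking `env_path` is shown to the user while the hidden fallback path actually runs.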
Hidden command-line arguments
Beyond target spoofing, Beukema demonstrated a technique for hiding malicious command-line instructions behind legitimate executables. LNK files can launch trusted Windows binaries while passing attacker-controlled instructions through embedded arguments, enabling “living-off-the-land binary” (LOLBin) execution without pointing directly to malware.
According to the researcher, this can be done by manipulating certain fields within the LNK “ExtraData” section, which holds additional target metadata. Enabling the “HasExpString” flag and filling the EnvironmentVariableDataBlock’s “TargetANSI/TargetUnicode” fields with null bytes produces what he described as “unexpected” results.
“First, it disables the target field, meaning the target field becomes read-only and cannot be selected,” Beukema said. “Secondly, it hides the command-line arguments; yet when the LNK is opened, it still passes them on.” The behavior can be exploited to launch a harmless system component while secretly executing arbitrary commands like downloading payloads or running scripts.
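A defender could hunt for this specific combination: HasExpString set while the EnvironmentVariableDataBlock’s target fields are entirely null bytes. The sketch below uses the fixed offsets and constants from Microsoft’s open MS-SHLLINK specification, but it is a simplified heuristic scanner, not a full LNK parser:

```python
import struct

# Constants from the public MS-SHLLINK specification; the scanner itself
# is a simplified heuristic, not a complete LNK parser.
HEADER_SIZE = 0x4C
HAS_EXP_STRING = 0x0200       # LinkFlags bit: an EnvironmentVariableDataBlock is present
ENV_BLOCK_SIG = 0xA0000001    # EnvironmentVariableDataBlock signature
ENV_BLOCK_SIZE = 0x00000314   # fixed block size: 8 + 260 (ANSI) + 520 (Unicode)

def env_block_suspicious(data: bytes) -> bool:
    """Return True if HasExpString is set but both environment-variable
    target fields are entirely null bytes -- the combination described above."""
    if len(data) < HEADER_SIZE:
        return False
    flags = struct.unpack_from("<I", data, 0x14)[0]  # LinkFlags at offset 0x14
    if not flags & HAS_EXP_STRING:
        return False
    # Locate the block by signature (a shortcut; a real parser would walk
    # LinkTargetIDList, LinkInfo, and StringData to reach ExtraData).
    i = data.find(struct.pack("<I", ENV_BLOCK_SIG))
    if i < 4 or struct.unpack_from("<I", data, i - 4)[0] != ENV_BLOCK_SIZE:
        return False
    target_ansi = data[i + 4 : i + 4 + 260]
    target_unicode = data[i + 4 + 260 : i + 4 + 780]
    return not target_ansi.strip(b"\x00") and not target_unicode.strip(b"\x00")
```

A shortcut flagged by this check warrants closer inspection of its arguments and target, since the combination has no obvious legitimate use.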
According to the disclosure, this approach is more attractive to attackers than exploiting CVE-2025-9491 because the absence of visible, padded command lines makes it harder to detect.
Beukema noted that this technique, like the others he described, relies on Windows’ normal shortcut handling rather than a patchable bug, meaning mitigation largely depends on treating untrusted LNK files as potentially dangerous and preventing users from opening them. “Microsoft argues that as it requires the user to do something, without breaking any security boundaries, it is not a security vulnerability,” he said. “This is not entirely unreasonable as ultimately, most of these boil down to being UI bugs.”
This article first appeared on CSO.
This is what the improved Kingdom Come looks like on PS5. Better textures and lighting, but it’s still not perfect
A three-unit mesh router from TP-Link covers a whole house. It now costs less than CZK 2,500
České dráhy scraps one old railcar a day. Up to five hundred of them will be reduced to scrap this year
UAT-9921 Deploys VoidLink Malware to Target Technology and Financial Sectors
Turning IBM QRadar Alerts into Action with Criminal IP
Starcloud prepares to launch AWS Outpost into space
Hot on the heels of Starlink’s plan for a million data centers in space, Starcloud’s next launch will put hardware from AWS in orbit.
“Starcloud will be the first to launch the Amazon Web Services (AWS) Outpost hardware to space on our second satellite launching in October,” Starcloud CEO Philip Johnston wrote in a LinkedIn post. Outpost is AWS’s on-premises private cloud offering.
Starcloud put an Nvidia H100 GPU in space aboard its first satellite, Starcloud-1, in October 2025. Chinese company Guoxing Aerospace had already launched a computing network into space a year earlier, and since then Starlink and Google have indicated that they plan to follow suit.
One executive skeptical of the idea of data centers in space is AWS’s own CEO, Matt Garman. “There are not enough rockets to launch a million satellites yet, so we’re, like, pretty far from that. If you think about the cost of getting a payload in space today, it’s massive,” Garman told attendees at the Cisco AI summit, according to a Reuters report.
Garman is just one of many critics of the notion that data centers in space can be a viable alternative to terrestrial ones. Issues such as collisions with space debris, the difficulty of supplying water as a coolant, the impossibility of fixing hardware problems, and latency have all been highlighted as potential obstacles.
This article first appeared on Network World.
Drone operators from the Olympics showed how they film on ski slopes and why they can manage only three turns in an ice channel
Amid the AI onslaught, a few silver linings for US tech jobs
AI continues gobbling up IT jobs, but hints about how the technology is now influencing hiring are becoming more visible.
About 130,000 jobs were created in the broader US economy in January, according to data from the US Bureau of Labor Statistics (BLS) released Wednesday. The growth was driven by hiring in the healthcare, social assistance, and construction sectors.
But tech-related jobs declined by 20,155 in January, affecting workers in both technical and non-technical occupations, according to figures compiled by industry association CompTIA.
Overall, the unemployment rate for tech jobs rose to 3.6% in January, with 6.6 million employed in such roles. The telecom sector was the hardest hit, seeing a 15% decline, according to the BLS.
Amid job market uncertainty, companies are relying on job postings to get a sense of how AI is reshaping roles.
CompTIA said tech job postings in January rose to 465,000, up 4% from December, with more job postings for software and systems engineers and tech support personnel. There were also 8,765 listings for AI engineers, up by 1,353 from December.
IT postings were lower through much of 2025, but January brought a 15% increase in postings for new technical roles, including an 18% rise in listings for software developers, said Bekir Atahan, vice president at Experis. That shift reflects conversations Experis is having with employers reactivating projects that were put on hold late last year.
“One of the clearest signals is the growth in roles asking for artificial intelligence-related skills,” Atahan said.
Job postings for AI-related skills jumped more than 50% in January — and software developer positions that include AI skills grew at an even faster pace. “Companies are moving from early exploration to practical implementation, which is creating steady demand for multidisciplinary technologists,” he said.
That’s a major swing from last year, with AI skills becoming an increasingly big factor in technical roles. “Organizations continue to prioritize roles in cloud engineering, data architecture, cybersecurity, and product development,” Atahan said.
AI may be seen as a job destroyer, but it is also changing the supply and demand dynamics of the workforce, Nela Richardson, chief economist of ADP, said in a blog post. “As artificial intelligence takes on more workplace activities, our traditional ways of thinking about job creation and destruction will tell only part of the story,” she said.
In the future, employers will reconsider the content of their jobs and roles. The focus will no longer be on repetitive work, but on value and growth. That, in turn, is expected to change job listings, training programs, research focus, and productivity.
“We call it the great job unbundling,” Richardson said.
ADP releases its own set of employment numbers. The payroll processing company reported only 22,000 jobs created by private employers in January, a decline from 44,000 jobs in December.
The BLS data also accounts for federal, state, and local government jobs, a category that was the biggest loser, with about 438,000 jobs lost.
At last month’s World Economic Forum in Davos, attendees sounded the alarm about how AI is eating into white-collar and entry-level jobs. “We expect over the next years, in advanced economies, 60% of jobs to be affected by AI — either enhanced, eliminated, or transformed — and 40% globally. This is like a tsunami hitting the labor market,” said Kristalina Georgieva, managing director of the International Monetary Fund.
White-collar workers include knowledge workers in professional roles such as software, finance, research, and science, said Anthropic CEO Dario Amodei. “I think maybe we’re starting to see just the little beginnings of it in software and coding,” he said.
Amodei said he can envision a time when Anthropic will need fewer people at the junior and intermediate levels. “We’re thinking about how to deal with that within Anthropic in a sensible way,” he said.
In many ways, the latest job data shows that view is becoming more common across the tech sector.
Apple finally wants to get the Touch ID reader under the display. Not just on iPhones, but on the Apple Watch too
CISA flags critical Microsoft SCCM flaw as exploited in attacks
Top Dutch telco Odido admits 6.2M customers caught in contact system caper
The Netherlands' largest mobile network operator (MNO) has admitted that a breach of its customer contact system may have affected around 6.2 million people.…
AMD is doing better than ever in processors. Over Christmas it gained on Intel the most in its history
Google fears massive attempt to clone Gemini AI through model extraction
Google detected and blocked a campaign involving more than 100,000 prompts that it claimed were designed to copy the proprietary reasoning capabilities of its Gemini AI model, according to a quarterly threat report released by Google Threat Intelligence Group.
The prompts looked like a coordinated attempt to perform model extraction or distillation, a machine-learning process in which a smaller model is created with the essential traits of a much larger one. Google systems caught the prompts in real time and “lowered the risk of this particular attack, protecting internal reasoning traces,” it said.
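For readers unfamiliar with distillation, the core objective is simple: train a small student model to match the temperature-softened output distribution of a large teacher. A minimal illustration of that loss in plain Python (the logits and temperature here are made up for the example):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher temperature flattens the distribution."""
    z = [v / temperature for v in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    the classic knowledge-distillation objective (Hinton et al.)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Minimizing this loss over many (prompt, teacher output) pairs is what makes bulk access to a model’s responses, and especially its reasoning traces, so valuable to a would-be cloner.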
Google is keen to prevent competitors from profiting from its investment in AI model development to train their own models — while still needing to allow users to access the models that power its services.
“Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost,” Google said in the report. “This activity effectively represents a form of intellectual property theft.”
In the campaign Google detected, attackers instructed Gemini to keep “the language used in the thinking content strictly consistent with the main language of the user input” — a technique it said is aimed at extracting the model’s reasoning processes across multiple languages. “The breadth of questions suggests an attempt to replicate Gemini’s reasoning ability in non-English target languages across a wide variety of tasks,” the company said in the report.
Google said it detected frequent model extraction attempts from private sector entities worldwide and researchers seeking to clone proprietary AI capabilities. The company said these attacks violate its terms of service and may be subject to takedowns and legal action.
However, researchers and potential customers might want to obtain large samples of Gemini’s reasoning for other, legitimate, purposes such as comparing models’ performance or evaluating its suitability and reliability for a task before purchasing.
Model providers see growing threat of IP theft
Google is not the only one seeing what it supposes are ill-intentioned attempts at model extraction in its logs. On Thursday, OpenAI told US lawmakers that Chinese AI firm DeepSeek has deployed “new, obfuscated methods” to extract results from leading American AI models to train its own systems, according to a memo reviewed by Bloomberg. OpenAI accused DeepSeek in the memo of trying to “free-ride on the capabilities developed by OpenAI and other US frontier labs,” highlighting how model theft has become a worry for companies that have invested billions in AI development.
Corsica Technologies CISO Ross Filipek sees a change in cybersecurity threats behind the accusations. “Adversaries engaging in model-extraction attacks highlights a shift in attack priorities,” he said. “Model extraction doesn’t infiltrate systems in the traditional sense, but rather prioritizes transferring the knowledge developed from the victim’s AI model and using it to accelerate the development of the attackers’ own AI models.”
The threat of intellectual property theft through model extraction should worry any organization providing AI models as services, according to the report. Google said these organizations should monitor API access patterns for signs of systematic extraction.
Filipek said defending against these attacks requires strict governance over AI systems and close monitoring of data flows. “Organizations should implement response filtering and output controls, which can prevent attackers from determining model behavior in the event of a breach,” he said.
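One simple version of the API-pattern monitoring Google recommends is to flag clients that combine high request volume with near-total prompt uniqueness, since normal application traffic tends to repeat prompts while systematic extraction sweeps broadly. This is an illustrative heuristic with a hypothetical log schema, not anything from the Gemini API:

```python
from collections import defaultdict

def extraction_suspects(request_log, volume_threshold=10_000,
                        diversity_threshold=0.9):
    """Flag clients whose prompt volume is high AND whose prompts are almost
    all unique, suggesting systematic coverage of the model's behavior rather
    than repeated application traffic.
    `request_log` is an iterable of (client_id, prompt) pairs."""
    counts = defaultdict(int)
    uniques = defaultdict(set)
    for client_id, prompt in request_log:
        counts[client_id] += 1
        uniques[client_id].add(prompt)
    return sorted(
        c for c in counts
        if counts[c] >= volume_threshold
        and len(uniques[c]) / counts[c] >= diversity_threshold
    )
```

Real deployments would layer on rate limits, semantic clustering of prompts, and the response filtering Filipek describes; thresholds like these would need tuning against legitimate benchmark and evaluation traffic.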
Nation-state groups used Gemini to accelerate attack operations
Google sees itself not just as a potential victim of AI cybercrime, but also as an unwilling enabler. Its report documented how government-backed threat actors from China, Iran, North Korea, and Russia integrated Gemini into their operations in late 2025. The company said it disabled accounts and assets associated with these groups.
Iranian threat actor APT42 used Gemini to craft targeted social engineering campaigns, feeding the AI biographical details about specific targets to generate conversation starters designed to build trust, according to the report. The group also used Gemini for translation and to understand cultural references in non-native languages.
Chinese groups APT31 and UNC795 used Gemini to automate vulnerability analysis, debug malicious code, and research exploitation techniques, the report found. North Korean hackers from UNC2970 mined Gemini for intelligence on defense contractors and cybersecurity firms, collecting details on organizational structures and job roles to support phishing campaigns.
Google said it took action by disabling associated accounts and that Google DeepMind used the insights to strengthen defenses against misuse.
Attackers integrate AI into malware operations
Gemini is being misused in other ways too, Google said, with some bad actors embedding its APIs directly into malicious code.
Google identified a new malware family it called HONESTCUE that integrates Gemini’s API directly into its operations, sending prompts to generate working code that the malware compiles and executes in memory. The prompts appear benign in isolation, allowing them to bypass Gemini’s safety filters, according to the report.
AttackIQ field CISO Pete Luban sees services like Gemini as an easy way for hackers to up their game. “Integration of public AI models like Google Gemini into malware grants threat actors instant access to powerful LLM capabilities without needing to build or train anything themselves,” he said. “Malware capabilities have advanced exponentially, allowing for faster lateral movement, stealthier attack campaigns, and more convincing mimicry of typical company operations.”
Google also documented COINBAIT, a phishing kit built using AI code generation platforms, and Xanthorox, an underground service that advertised custom malware-generating AI but was actually a wrapper around commercial products including Gemini. The company shut down accounts and projects connected to both.
Luban said the pace of AI-enabled threats means traditional defenses are insufficient. “Continuous testing against realistic adversary behavior is essential to determining if security defenses are prepared to combat adaptive threats,” he said.
This article first appeared on CSO.
Malicious Chrome Extensions Caught Stealing Business Data, Emails, and Browsing History
Russia has completely blocked the WhatsApp app and is recommending the domestic platform MAX to people