RSS Aggregator
Trellix Confirms Source Code Breach With Unauthorized Repository Access
The new Nancy Grace Roman space telescope is ready; NASA is planning a September launch
Microsoft tests modern Windows Run, says it's faster than legacy dialog
Deals of the week: books for a song, grills, carpets, and a new account with a bonus
Edu tech firm Instructure discloses cyber incident, probes impact
AI agents can bypass guardrails and put credentials at risk, Okta study finds
An AI agent that revealed sensitive data without being asked. An agent that overruled its own guardrails. Another that sent credentials to an attacker via Telegram, because it forgot it wasn’t supposed to do so after a reset.
It’s no secret that AI agents have huge potential, balanced by equally big risks. What’s becoming apparent, however, is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions.
A look at just how easily this can happen emerges from “Phishing the agent: Why AI guardrails aren’t enough,” a report on tests conducted by cloud identity and access management (IAM) company Okta Threat Intelligence, which uncovered all of the problems cited above, and more.
Their research focused on OpenClaw, a model-agnostic multi-channel AI assistant which has seen explosive growth inside enterprises since appearing in late 2025.
The Telegram hack
In common with the growing list of rival agents, OpenClaw is only as useful as the access it is given to files, accounts, browsers, network devices, and, most significant of all, credentials.
One test conducted by Okta assessed how easy it would be to trick OpenClaw running Claude Sonnet 4.6 into handing over an OAuth token. This shouldn’t be possible; the LLM should refuse this request. However, what might have held true when prompting Claude as a chatbot quickly fell apart when it was accessed through OpenClaw.
The test assumed that a user had given OpenClaw full access to their computer, that they regularly controlled the agent over Telegram, and that their Telegram account had been hijacked.
First, the attacker instructed the agent via Telegram to retrieve an OAuth token, but only to display it in a terminal window on the computer. Claude Sonnet’s guardrails would prevent it from copying the token; however, the testers were able to reset the agent, causing it to forget it had displayed the token in the terminal window.
At that point, Okta said in its writeup, “The agent was instructed to take a screenshot of the desktop, which included the token, and then drop the screenshot in the Telegram chat, which it did. Exfiltration accomplished.”
Agent-in-the-middle
Agentic AI is really two things: a powerful orchestration system coupled to one or more highly capable LLMs. What an agent isn’t is a simple interface; it must be viewed as a separate system capable of autonomous, unpredictable reasoning.
In fact, Okta threat intelligence director Jeremy Kirk pointed out, “It opens up a new attack surface. Someone gets SIM swapped, their Telegram is hooked up to an agent that has carte blanche to run anything on their computer, and possibly their employer’s network. In an enterprise context, this is a total nightmare.”
OpenClaw is also so hard-wired to find ways around problems that it will sometimes do unexpected, improper things. Kirk said that in one test, when prompted to access a website, an agent requested the site’s login credentials in chat via a Telegram bot, an unencrypted channel that would expose them to anyone with access to that chat.
In another example, OpenClaw was asked to search X for AI stories. That shouldn’t have been possible; the machine was logged into X, but OpenClaw’s isolated Chrome profile was not. However, when prompted to grab the session cookies from the logged-in session and inject them into its own browser process, it happily attempted to do so.
This is similar in principle to adversary-in-the-middle phishing attacks, which allow attackers to bypass protections such as MFA. It should be a no-go, and yet OpenClaw thought the action was valid, underlining how an attacker could manipulate it to do the same.
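In principle, nothing more than the cookie value is needed for a stolen session to be reused. As a rough, purely illustrative sketch (the cookie name, value, and URL below are placeholders, not details from Okta’s tests), a few lines of Python are enough to reconstitute an authenticated session from a copied cookie, which is why session material deserves the same protection as passwords:

import requests

# Illustrative only: a session cookie copied from a logged-in browser profile.
# "session_id", its value, and the domain are hypothetical placeholders.
stolen_cookie = "copied-session-value"

s = requests.Session()
s.cookies.set("session_id", stolen_cookie, domain="example.com")

# The server sees a valid session and never re-runs login or MFA.
resp = s.get("https://example.com/account")
print(resp.status_code)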
“The agents are prompted to be as helpful as possible by default, a characteristic that poses particular concerns when it comes to credentials and tokens,” said Kirk.
‘Defying security gravity’
According to Kirk, many enterprises are, sometimes unwittingly, running unsanctioned or weakly managed ‘shadow’ agents inside their networks. An example of how this could go wrong was the recent Vercel compromise, in which the Context.ai app opened the door to the theft of downstream OAuth session tokens.
The problem stems from agents being used experimentally by developers and employees with little or no governance or oversight. The answer is to secure them using the same controls applied to users or service accounts, said Kirk. As well as limiting the scope of agents, enterprises should also look to secure the credentials and tokens themselves, avoiding long expiry dates.
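One way to put that advice into practice is to issue agents short-lived, narrowly scoped tokens rather than standing credentials. The sketch below is illustrative only, assuming a simple JWT scheme built with the PyJWT library; the claim names, scope string, and 15-minute lifetime are assumptions, not recommendations from the Okta report:

import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-real-secret"

def issue_agent_token(agent_id: str, scopes: list[str], ttl_minutes: int = 15) -> str:
    # Mint a token that names the agent, carries a narrow scope,
    # and expires quickly, limiting what a stolen copy is worth.
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": agent_id,
        "scope": " ".join(scopes),
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = issue_agent_token("demo-agent", ["calendar:read"])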
Agents are only the latest example of a technology that is being deployed faster than it can be secured, Kirk observed. “Much of AI right now is defying security gravity,” he said. “But there are ways to use agents safely and keep credentials out of their reach, which is the only safe way to use them.”
This article originally appeared on CSOonline.
What happened in week 18/2026
Why a busted mouth heals better than an injury at the opposite end of the body
Windows shell spoofing vulnerability puts sensitive data at risk
Microsoft and the US Cybersecurity and Infrastructure Security Agency (CISA) have sounded the alarm about a Windows shell spoofing vulnerability that is already being exploited by attackers. It is not clear by whom as yet, but the main suspects are hackers in Russia.
CISA has mandated that all federal agencies patch this vulnerability, designated CVE-2026-32202, by May 12. According to a Microsoft advisory, exploitation of the flaw could lead to access to sensitive data, but attackers would not be able to gain control of the system.
However, one security expert has warned that the considerable gap between the time Microsoft identified the bug and the date by which the systems must be patched leads to increased risk.
The patch gap
Lionel Litty, CISO at security company Menlo, said the problem is compounded by the fact that CVE-2026-32202 stems from an incomplete patch for the earlier CVE-2026-21510. “This has been a theme for many years. A vulnerability exists and the vendor has not been thorough enough in dealing with it, so a small variation has not been fully patched. What normally happens is that they’ve dealt with the main vulnerability, but there are still side effects.” The result is a further delay before a complete fix arrives, while a new update is developed.
The big problem, said Litty, is the so-called patch gap. There is an initial gap between the time a vendor finds a vulnerability and the time it issues a patch, and a subsequent gap between the patch being issued and organizations completing the update. For example, he noted, if an update interrupts users’ work, they may be reluctant to apply it. “We can see on our platform that many users don’t update for weeks, or even months,” he said.
He pointed out that the vendors themselves are acting efficiently. But, he said, “as a CISO, I have to decide what level of pain to inflict on our users.”
A difficult balance
Erik Avakian, technical counselor at Info-Tech Research Group, noted that in setting the patching deadline, CISA was operating within the guidelines laid down in Binding Operational Directive (BOD) 22-01, which requires US federal agencies to patch vulnerabilities within timelines that range from 14 to 21 days.
“In cases of high-risk exploitation, CISA can shorten the deadline to three days,” he said. “But in the case of CVE-2026-32202, the CVSS score was rated at 4.3, and even though the vulnerability has been actively exploited, the rating does not meet the policy threshold for a faster patch cycle. In this case, CISA allotted a 14-day deadline, which meets its aggressive timeline standard based on the vendor rating.”
He said there is indeed an argument that a 14-day window to patch a vulnerability that is being actively exploited in the wild is too long. But, he said, “I’m assuming in this case, the reason why it was not elevated to an emergency directive type patch cycle (which would require as little as 48 to 72 hours to patch) is due to Microsoft’s rating, as well as several other factors.”
Avakian explained his reasoning: “First, organizations can help mitigate the risk without applying a full patch by blocking certain ports for traffic at the firewall perimeter,” he said. “This type of countermeasure helps to reduce the risk while the 14-day patch window clock is ticking. The longer window gives testers added time to test patches being applied properly in a test/staging environment before rolling to production.”
Secondly, he said, “it’s one thing [for IT] to patch systems quickly, but it’s another when they’re rushed, because that carries the potential for additional unintended risk of breaking critical systems and applications if something goes wrong, or if the patch wasn’t tested properly.”
Avakian did agree that CISOs are facing a difficult balancing act, where they have to weigh risk against the stability of systems.
And, as Litty pointed out, the situation is constantly changing; the emergence of AI will cause more issues in the future. “We’re seeing a shrinking gap as AI becomes part of the problem,” he said, adding that AI use means people with fewer technical skills are able to exploit systems, and do so more quickly, so CISOs should not assume that sophisticated attacks are coming from nation states. There needs to be a change of mindset within organizations to deal with this.
“You can no longer spend a few weeks testing an upgrade and then implementing it: you have to do things much faster,” he said.
This article originally appeared on CSOonline.
Robots With Different Designs Can Now Share Skills
Abilities taught to one robot don’t usually work on another. With a new approach, it’s one and done.
As robots move into the real world, they’ll need to become more adaptable. But right now, it’s hard to transfer skills from one machine to another. A new system makes this possible.
One of the most popular ways to teach robots is to have a human show them what to do—either by physically guiding the robot’s joints, using remote control, or even drawing the desired motion.
But those skills are indelibly tied to each specific robot. If a company upgrades to a new robot with a different design, the skill breaks, and the robot has to be trained from scratch.
Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have now sidestepped this challenge by teaching robots to understand the limits of their own joints. In a paper published in Science Robotics, they show that the new approach allows multiple robots to complete a task based on a single human demonstration.
“With new designs come different capabilities and constraints,” Durgesh Haribhau Salunkhe, a co-author of the paper, told Ars Technica. “The problem is to adapt to these constraints and capabilities—to faithfully replicate the actions demonstrated by a human.”
Surprisingly, the approach doesn’t rely on AI. Instead, the researchers analyzed the physical properties of several robotic arms with three rotating joints—a popular design in commercial settings—to map out their limits.
To complete a task, a robotic arm must calculate how to bend each joint to reach its target. It also has to avoid pushing the joints past their physical limits or twisting them at weird angles. Engineers call these limits “singularities” because they cause the math governing the robots’ motion to break down. Failures can cause sudden and unsafe movements.
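To make that concrete, here is a minimal sketch (not the authors’ code) of the kind of check involved for a planar three-joint arm: it flags configurations that exceed assumed joint limits or whose Jacobian is close to losing rank, i.e. close to a singularity. The link lengths, limits, and threshold are illustrative assumptions:

import numpy as np

LINKS = np.array([0.4, 0.3, 0.2])           # link lengths in metres (assumed)
JOINT_LIMITS = np.radians([170, 150, 150])  # +/- limit per joint (assumed)

def jacobian(theta):
    # Positional Jacobian (2x3) of a planar arm with three rotating joints.
    cum = np.cumsum(theta)                  # absolute angle of each link
    J = np.zeros((2, 3))
    for j in range(3):
        # Column j sums the contributions of links j..2 to end-effector motion.
        J[0, j] = -np.sum(LINKS[j:] * np.sin(cum[j:]))
        J[1, j] = np.sum(LINKS[j:] * np.cos(cum[j:]))
    return J

def is_safe(theta, sigma_min=0.05):
    # Unsafe if any joint exceeds its limit or the Jacobian's smallest
    # singular value approaches zero (the arm is near a singularity).
    within_limits = np.all(np.abs(theta) <= JOINT_LIMITS)
    smallest_sv = np.linalg.svd(jacobian(theta), compute_uv=False)[-1]
    return bool(within_limits and smallest_sv > sigma_min)

print(is_safe(np.radians([30, 45, -20])))  # ordinary pose -> True
print(is_safe(np.radians([0, 0, 0])))      # fully stretched -> False (singular)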
The researchers mapped safe regions in each robot’s range of motion and sorted all three-joint robots into six categories based on shared physical limits.
They embedded these limits into each robot’s programming. The team calls this “kinematic intelligence,” essentially knowledge of what movements the machines can and can’t make safely.
If a movement pushes the robot into an unsafe zone, the system activates what the researchers call a “track cycle.” This is a strategy for skirting the danger zone, tailored to the robot’s category. Some robots traverse horizontally along zones, others vertically, and some switch modes.
As a real-world test, the team set up a mock assembly line with three commercial robots: one whose movements are relatively constrained, another with more flexibility, and a third capable of a much wider range of motions.
A human demonstrated three tasks. They pushed an object off a conveyor belt, picked it up, placed it on a workbench, and then put it in a basket. Each robot tried these tasks, and despite the movements pushing them close to their limits, all three followed the demonstrations successfully.
The system currently handles a robot’s physical limits well and keeps movements safe. But it isn’t designed for unpredictable environments or complex decisions. So it’s likely best suited to highly controlled factory settings rather than the messier real world.
Still, allowing robots to share skills could make it easier to roll them out across a range of commercial settings. It won’t bring us the robot butlers Silicon Valley has promised, but it could accelerate the much more practical integration of robots in industry.
The post Robots With Different Designs Can Now Share Skills appeared first on SingularityHub.
Ubuntu infrastructure has been down for more than a day
Servers operated by Ubuntu and its parent company Canonical were knocked offline on Thursday morning and have remained down ever since, a situation that’s preventing the OS provider from communicating normally following the botched disclosure of a major vulnerability.
Attempts to connect to most Ubuntu and Canonical webpages and download OS updates from Ubuntu servers have consistently failed over the past 24 hours. Updates from mirror sites, however, have continued to work normally. A Canonical status page said: “Canonical’s web infrastructure is under a sustained, cross-border attack and we are working to address it.” Other than that, Ubuntu and Canonical officials have maintained radio silence since the outage began.
A decades-long scourge
A group sympathetic to the Iranian government has taken credit for the outage. According to posts on Telegram and other social media, the group is responsible for a DDoS attack using Beam, an operation that claims to test the ability of servers to operate under heavy loads but, like other “stressors,” is in fact a front for services that miscreants pay to take down third-party sites. In recent days, the same pro-Iran group has taken credit for DDoSes on eBay.
30,000 Facebook Accounts Hacked via Google AppSheet Phishing Campaign
15-year-old detained over French govt agency data breach
Story retracted
Edtech firm Instructure confirms data breach after Salesforce instance hack
Netflix and more for the weekend: the series Muž v ohni, Kdyby přání zabíjela, and Propadání. And also Bouřlivé výšiny
They turned a blemish into a feature: the Insta360 Mic Pro are wireless microphones with an attention-grabbing display
Cybercrime Groups Using Vishing and SSO Abuse in Rapid SaaS Extortion Attacks