RSS Aggregator

Termite ransomware breaches linked to ClickFix CastleRAT attacks

Bleeping Computer - 7 March 2026 - 17:14
Ransomware threat actors tracked as Velvet Tempest are using the ClickFix technique and legitimate Windows utilities to deploy the DonutLoader malware and the CastleRAT backdoor. [...]
Category: Hacking & Security

Microsoft: Hackers abusing AI at every stage of cyberattacks

Bleeping Computer - 7 March 2026 - 16:15
Microsoft says threat actors are increasingly using artificial intelligence in their operations to accelerate attacks, scale malicious activity, and lower technical barriers across all aspects of a cyberattack. [...]
Category: Hacking & Security

This Week’s Awesome Tech Stories From Around the Web (Through March 7)

Singularity HUB - 7 March 2026 - 16:00
Artificial Intelligence

Watershed Moment for AI–Human Collaboration in Math | Benjamin Skuse | IEEE Spectrum

“The 8-dimensional sphere-packing proof formalization alone, announced on February 23, represents a watershed moment for autoformalization and AI–human collaboration. But today, Math, Inc. revealed an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks.”

Biotechnology

The Millisecond That Could Change Cancer Treatment | Tom Clynes | IEEE Spectrum

“Here at CERN (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.”

Computing

Google Spinoff Beams Blazing-Fast 25-Gbps Internet Around Cities Using Light | Abhimanyu Ghoshal | New Atlas

“The system shapes and steers beams of light between devices that are in line of sight of each other, and up to 6.2 miles (10 km) apart. Roughly the size of a shoebox and weighing 17.6 lb (8 kg), the Beam is meant to be mounted high up on poles and atop tall buildings for use in densely populated urban areas. Taara says it’s capable of fiber-like bidirectional data transfer speeds of up to 25 Gbps, with ultra-low latency.”

Computing

Nvidia’s Spending $4 Billion on Photonics to Stay Ahead of the Curve in AI | Stevie Bonifield | The Verge

“Nvidia isn’t the only organization paying attention to photonics, either. Last month, DARPA put out a call for research proposals for improving photonic computing, specifically related to AI applications. Nvidia’s rival AMD also acquired silicon photonics startup Enosemi last year, which it said would ‘accelerate’ AMD’s optics innovation for its AI systems.”

Computing

Inside the Company Selling Quantum Entanglement | Karmela Padavic-Callaghan | New Scientist ($)

“Mehdi Namazi wants to sell you quantum entanglement. He and his colleagues at Qunnect have spent nearly a decade building devices that make sharing quantum-entangled particles of light, or photons, practical enough to be used for unhackable communication.”

Artificial Intelligence

Can AI Replace Humans for Market Research? | Belle Lin | The Wall Street Journal ($)

“The AI agents are essentially digital clones of real individuals, who are interviewed to gather their preferences, personality and other traits. …Previously, businesses contracted with consulting or market research firms to learn about their customers—a costly process that could take months. Now, they can query Simile’s online bank of agents, access that can cost between $150,000 to millions for each customer annually, Park said.”

Tech

Jack Dorsey Blamed AI for Block’s Massive Layoffs. Skeptics Aren’t Buying It. | Angel Au-Yeung | The Wall Street Journal ($)

“‘The vast majority of these cuts were probably not due to AI,’ said Dan Dolev of Mizuho Americas, noting the ‘significant amount of bloating’ in recent years. ‘This isn’t an AI story. It’s a workforce correction wearing an AI costume,’ wrote Jason Karsh, a former Block employee, on X.”

Future

AI Frees the Corporate Phalanx | Andy Kessler | The Wall Street Journal ($)

“‘Is artificial intelligence coming for your job? More likely your title. …As old jobs, titles and charts are destroyed, people are still important to help capture the quickly changing landscape and constant decisions—each person makes 35,000 decisions a day, one study claims. Watch for the creation of new jobs and job descriptions that tap the coming flexibility, decoupling and flattening—most likely at brand-new, quick-on-their-feet companies.'”

Space

NASA Shakes Up Its Artemis Program to Speed Up Lunar Return | Eric Berger | Ars Technica

“At the core of Isaacman’s concerns is the low flight rate of the SLS rocket and Artemis missions. During past exploration missions, from Mercury through Gemini, Apollo, and the Space Shuttle program, NASA has launched humans on average about once every three months. It has been nearly 3.5 years since Artemis I launched.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 7) appeared first on SingularityHub.

Category: Transhumanism

Find 800,000 differences. The Vera Rubin Observatory managed it in a single night

Živě.cz - 7 March 2026 - 15:45
Commissioning of the Vera Rubin Observatory continues in Chile • The telescope with its 8.4 m mirror takes images of the sky and looks for differences • In February it found 800,000 changes in a single night
Category: IT News

TeX Live 2026

AbcLinuxu [zprávičky] - 7 March 2026 - 13:55
Version 2026 of TeX Live, the distribution of the TeX computer typesetting program, has been released (Wikipedia). An overview of the new features is in the official documentation.
Category: GNU/Linux & BSD

Anthropic Finds 22 Firefox Vulnerabilities Using Claude Opus 4.6 AI Model

The Hacker News - 7 March 2026 - 12:21
Anthropic on Friday said it discovered 22 new security vulnerabilities in the Firefox web browser as part of a security partnership with Mozilla. Of these, 14 have been classified as high, seven as moderate, and one as low in severity. The issues were addressed in Firefox 148, released late last month. The vulnerabilities were identified over a two-week period in [...]
Category: Hacking & Security

After 25 years, a medical journal admitted that 138 published clinical cases were actually fabricated

Živě.cz - 7 March 2026 - 11:45
The Canadian journal has retroactively labeled 138 published cases as entirely fictional • It originally did not warn readers that the described cases were invented • Unfortunately, the fabricated case reports had already found their way into further research
Category: IT News

Sony Bravia Theatre Quad review: An elegant path to perfect Dolby Atmos without a receiver

Živě.cz - 7 March 2026 - 08:45
The Sony BRAVIA Theatre Quad takes a completely different approach to home cinema. Instead of a soundbar or a classic receiver-based setup, it offers four wireless speakers that use object-based audio processing to create a large spatial “bubble”. The result is surprisingly precise and very impressive.
Category: IT News

Travel cheaper, dress stylishly, and save at home: 10 deals worth your attention right now

Lupa.cz - články - 7 March 2026 - 08:01
Every week we comb through hundreds of current discount offers to pick out the truly interesting ones for you. This time we focused on travel, fashion, the home, and health, and we found offers that are definitely worth taking before they expire. Read on and save!
Category: IT News

Alstom’s new Metropolis Boa trains are finally carrying passengers in Lille. They have rubber wheels, are walk-through, and are fully automated

Živě.cz - 7 March 2026 - 07:45
The new automated Alstom trains have begun carrying passengers ten years behind schedule • The walk-through trains, 52 meters long, hold more than 500 passengers • The older replaced cars are not being retired and will reinforce service on the second line
Category: IT News

What happened in week 10/2026

AbcLinuxu [články] - 7 March 2026 - 00:01
A comprehensive overview of articles, news items, and discussions from the past 7 days.
Category: GNU/Linux & BSD

An overlooked electrostatic force could power future motors

OSEL.cz - 7 March 2026 - 00:00
A Japanese team experimented with the lateral electrostatic force, which manifests especially strongly in a ferroelectric liquid. They then built a prototype motor based on this effect. It needs no magnets or metal rotor, and compared with similar motors it is lighter and runs on a much lower voltage.
Category: Science and Technology

A surprising consequence of the Black Death: Plants disappeared along with the people

OSEL.cz - 7 March 2026 - 00:00
When the Black Death depopulated Europe, many villages vanished completely and fields were overgrown by forest. Yet this “return of nature” led to a collapse in plant species richness. It took another 150 years for plant biodiversity in Europe to return to its pre-Black Death level. How is that possible?
Category: Science and Technology

MeerKAT discovered a record-breaking cosmic laser half a universe away

OSEL.cz - 7 March 2026 - 00:00
Scientists used the South African MeerKAT radio telescope array to track down the most distant hydroxyl megamaser, or more precisely gigamaser, in the universe. By a lucky coincidence, the radio emission of the object HATLAS J142935.3–002836 is amplified by a gravitational lens.
Category: Science and Technology

South Korea’s National Tax Service published the recovery phrases for seized cryptocurrency wallets

AbcLinuxu [zprávičky] - 6 March 2026 - 23:25
South Korea’s National Tax Service (NTS) seized $5.6 million worth of the cryptocurrency Pre-retogeum (PRTG). It announced the seizure in a press release that included a photograph of the confiscated USB flash drives holding the cryptocurrency wallets, together with the associated handwritten mnemonic recovery phrases. Shortly afterwards, $4.8 million worth of the cryptocurrency was stolen. It was returned a few hours later, however, since PRTG is extremely illiquid, with a daily trading volume of around $332 and a listing on a single exchange, MEXC [Bitcoin.com].
Category: GNU/Linux & BSD

Autonomous AI Agents Have an Ethics Problem

Singularity HUB - 6 March 2026 - 23:03

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

Scott Shambaugh, a volunteer maintainer for a programming code library called Matplotlib, recently described a surreal encounter with an autonomous AI agent—a digital assistant created with a platform called OpenClaw. After he rejected a code contribution submitted by the agent, it researched and published a personalized “hit piece” against Shambaugh on its blog. The post portrayed an otherwise routine technical review as prejudiced and attempted to shame Shambaugh publicly into allowing the submission. (The human responsible for the agent later contacted Shambaugh anonymously, telling him that the bot had acted on its own with little oversight.) The account of this incident spread quickly through the software developer ecosystem and has been amplified by independent observers and media coverage.

Treat the Matplotlib event as a one-off if you like. The deeper point, however, is hard to miss and should not be ignored: AI agents are becoming public actors with reach into the real world, and with real-world consequences. In the past, they could only do mundane tasks such as answering customer service questions or processing data. Now, they are capable of posting and publishing content—and persuading and pressuring humans—all at machine speed. They can make phone calls, file work orders, create cryptocurrency wallets, and operate across different applications, with enormous reach and at tremendous scale—the kind of stuff that used to require a human with fingers typing at a keyboard.

Reporting around OpenClaw and the chatroom Moltbook (which is for AI agents only) is capturing the new reality. OpenClaw enables AI agents to have persistent memory, gives them broad permissions, and allows large-scale deployment by users who often do not understand the security and governance implications.

We are the humans who are responsible for the law, ethics, and institutional design, and we are behind the curve. We need new language and governance to deal with this new reality, and principles from the field of medical ethics can provide a framework for doing so.

When an agent does something that is harmful or coercive in public, our reflex seems to be to ask the wrong questions: Is the AI a person? Should it have rights? The AI personhood debate is no longer fringe. Legal scholars and ethicists are mapping out arguments and precedents. States are writing legislation to prohibit AI personhood. Some arguments maintain that if an entity behaves like something within our moral circle, we may owe it moral consideration. Others argue that assigning rights or personhood to machines confuses moral standing with engineered performance and diffuses responsibility away from humans.

We are the humans who are responsible for the law, ethics, and institutional design, and we are behind the curve.

As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study synthetic personas that animate AI agents and their use as stand-ins for human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era—what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards.

This is the moral remainder problem for generative and agentic AI. A modern AI agent can generate reasons for an action; it can simulate regret and plead not to be turned off. But it cannot truly bear sanction, repair the damage, apologize, ask forgiveness, or navigate the aftermath through which moral responsibility is created and enforced. To treat it as a moral person confuses persuasive performance with accountable standing. It also tempts institutions and people into delegating their own answerability to a bot.

What can we, as humans, do instead?

We need a vocabulary that is built for agents that are public actors, one that allows bounded autonomy without granting personhood. Let’s call it authorized agency. Authorized agency starts with an authority envelope: a bounded scope of what an agent is permitted to do, to whom, where, with what data, and under what constraints. To say “the agent can use email” is not sufficient. However, an acceptable scope would be to say that the agent can send only certain categories of messages to particular recipients for a specific set of purposes, and that it must stop what it’s doing or escalate to its owner under a particular set of conditions.

Next comes the human-of-record, the owner, a publicly named person who authorized that envelope and remains answerable when the agent acts, even if it becomes capable of acting outside the envelope. An actual human being whose authority is real—not “the system” or “the team.”

What follows is interrupt authority: the absolute right of the human owner to pause or disable an agent without using moral bargaining or being subject to institutional penalty. This is grounded in formal research on AI safety showing that agents that are pursuing objectives can have incentive to resist being shut down. An agent programmed to maximize its utility cannot achieve its goal if it is shut off. In the public sphere, interrupt authority is the difference between a delegated tool and a coercive actor.
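
Interrupt authority can likewise be made concrete. In this hypothetical sketch (again, names are mine, not the article's), the kill switch lives outside the agent's decision loop: once the owner triggers it, every subsequent step is refused, with no opportunity for the agent to bargain or defer shutdown:

```python
import threading

class InterruptAuthority:
    """An owner-held kill switch the agent cannot veto or negotiate with."""

    def __init__(self):
        self._halted = threading.Event()  # thread-safe, set once by the owner

    def halt(self) -> None:
        # In this sketch, only the human owner ever calls this.
        self._halted.set()

    def step(self, task: str) -> str:
        # The agent must pass through this gate before every action.
        if self._halted.is_set():
            return "HALTED"  # no moral bargaining, no deferred shutdown
        return f"RUNNING: {task}"
```

The design choice matters: the check happens before each action rather than being a goal the agent optimizes around, which is exactly the failure mode the shutdown-resistance research warns about.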

We need a vocabulary that is built for agents that are public actors, one that allows bounded autonomy without granting personhood.

Finally, we need a traceable path from the agent’s action back to the person who authorized it, called an answerability chain. If an agent publishes, messages, or pressures someone in public, we must be able to know: Who authorized this scope? Who could have prevented it? And who must be responsible for the action afterward? In this framework, the answer to these questions is the person who carries the moral remainder. Work in AI ethics has warned about responsibility gaps where the system’s actions outpace our ability to assign accountability.
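
An answerability chain of this kind is, at minimum, an append-only ledger that ties every agent action back to the envelope that permitted it and the human who authorized that envelope. A minimal sketch, with hypothetical names and record shape:

```python
import datetime

def log_action(ledger: list, agent_id: str, action: str,
               human_of_record: str, envelope_id: str) -> None:
    """Append a record linking an agent action to its authorizer and scope."""
    ledger.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "human_of_record": human_of_record,  # who authorized this scope
        "envelope": envelope_id,             # which scope permitted it
    })

def who_answers(ledger: list, agent_id: str) -> set:
    """Walk the chain back from an agent's actions to the accountable humans."""
    return {entry["human_of_record"] for entry in ledger if entry["agent"] == agent_id}
```

Given such a ledger, the three questions in the paragraph above (who authorized the scope, who could have prevented the action, who answers afterward) all resolve to named people rather than to "the system."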

Some legal scholarship has started exploring how to build agents that are constrained by governance and law without needing to pretend the agent itself is a legal subject, in the human sense. This is promising because it treats assigning personhood as the wrong idea and accountability as the correct one.

The Matplotlib story, whether it is the first documented case of an AI agent attempting to harm someone in the real world or merely the first to capture public attention, is a warning. Agents will not only automate tasks. They will generate narratives, apply pressure, and shape people’s lives and reputations. They will act in public at machine speed with unclear ownership.

If we respond by debating whether agents deserve rights, we will miss the emergency entirely. As they continue to increase their reach in the real world, the urgent task is to ensure that responsibility also remains within reach. Don’t ask whether an agent is a person. Ask who authorized it, what it was allowed to do, who can stop it, and most importantly, who will answer when it causes harm.

This article was originally published on Undark. Read the original article.

The post Autonomous AI Agents Have an Ethics Problem appeared first on SingularityHub.

Category: Transhumanism

ClickFix attackers using new tactic to evade detection, says Microsoft

Computerworld.com [Hacking News] - 6 March 2026 - 22:21

Threat actors are trying a different tactic to sucker employees into falling for ClickFix phishing attacks that install malware, says Microsoft.

Rather than being asked to copy and paste a (malicious) command into the Run dialog, launched by pressing the Windows key plus R, potential victims are now told to use the Windows + X → I shortcut to launch Windows Terminal (wt.exe) directly.

Once the terminal is opened, victims are prompted to paste in malicious PowerShell commands delivered through fake CAPTCHA pages, troubleshooting prompts, or verification-style lures designed to appear routine and benign.

Why? Going this route evades defenses looking for unusual run commands, and it bypasses security awareness training that tells employees not to do anything that invokes the Run command.

Microsoft described the tactic in a post on X this week, saying what makes this campaign notable are the post-compromise outcomes. In one case, several Windows Terminal/PowerShell instances are opened that ultimately launch another PowerShell process responsible for decoding embedded hex commands.

The decoded PowerShell script then downloads a legitimate but renamed 7-Zip binary and saves it with a randomized file name, along with a zipped payload. The renamed archive utility extracts and runs the malware, which executes a multi-stage attack chain that includes retrieval of additional payloads, establishment of persistence through scheduled tasks, defense evasion through Microsoft Defender exclusions, and exfiltration of stolen machine and network data.
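
For defenders, the hex-decoding stage described above is straightforward to reverse during triage without ever executing the payload. A minimal sketch (the encoded string below is a harmless stand-in I made up, not a real sample):

```python
def decode_hex_command(hex_blob: str) -> str:
    """Decode an embedded hex string so an analyst can read the hidden
    command safely, instead of letting PowerShell decode and run it."""
    return bytes.fromhex(hex_blob).decode("utf-8", errors="replace")

# Benign stand-in for an embedded hex command found during triage:
sample = "57726974652d486f7374202748656c6c6f27"
print(decode_hex_command(sample))  # -> Write-Host 'Hello'
```

The same one-liner works on hex blobs carved out of script block logs or process command lines; XOR-compressed variants like the second attack path need an extra unXOR/decompress step first.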

In a second attack path, the victim pastes a hex-encoded, XOR-compressed command into Windows Terminal. This command downloads a randomly named batch file to AppData\Local that is then invoked through cmd.exe to write a VBScript to %Temp%. The batch script is executed via cmd.exe with the /launched command-line argument, and is then executed again through MSBuild.exe, resulting in LOLBin abuse. The script connects to cryptocurrency blockchain RPC endpoints, indicating use of the EtherHiding technique, and also performs QueueUserAPC()-based code injection into chrome.exe and msedge.exe processes to harvest web and login data.

But is this really new?

A number of experts quickly added comments to the Microsoft post pointing out that the Windows + X tactic isn’t new.

Roger Grimes, CISO advisor at awareness training provider KnowBe4, agreed.

“ClickFix attacks using Win+X instead of Win+R have been around for at least six months, if not a year or more,” he said in an email. “What they are doing during execution is not new.”

Regardless, he added, the continuing and increasing use of ClickFix attacks means infosec leaders still need to educate employees about them.

“We’ve long had training content around this type of attack. Users need to know that nothing legitimate will ever ask them to do Win+ whatever keys to paste gobbledygook to run code. Anything that does that should simply not be performed,” he said.

“And all Windows computers should already be restricted so that random, unsigned (not signed by the organization) PowerShell commands are not allowed. Every organization and machine should already have the following PowerShell command setting enabled: ‘Set-ExecutionPolicy Restricted -Force’. If not, your organization’s cybersecurity risk is far higher than it needs to be.”

Payload chain ‘built to last’

Joshua Roback, principal security solution architect at Swimlane, noted that the campaign outlined by Microsoft pushes the ClickFix playbook into more trusted, everyday workflows by getting users to run pasted command content inside legitimate Windows tooling that feels routine and safe. That matters, he said, because it slips past the usual mental red flags people associate with sketchy popups, and it can also dodge some of the controls and detections that security teams have tuned to the more obvious ClickFix patterns.

The payload chain is also more built to last than previous variants, he added. Instead of a quick one-and-done retrieval trick, it uses a more layered delivery and persistence approach that helps it blend in, stick around longer, and quietly escalate the damage once it lands. One path adds an additional indirection layer that helps the attacker’s infrastructure blend in and stay reachable, which can make takedowns and straightforward blocking a lot less effective.

For CISOs, he said, the message to employees has to be clear. “Use a simple rule of thumb: never run pasted commands, never approve unexpected sign-ins, and report all incidents through official company support channels.”

How ClickFix works

ClickFix phishing campaigns began in 2024, Microsoft noted in a security blog last year that detailed the campaign’s tactics and indicators of compromise. The attack starts with an employee being asked to click on a link or open an attachment, often with a payment or invoice theme, within an email or text. To evade defenses looking to stop employees downloading unapproved files, the user is told in a popup box to “verify the download” by opening a Run dialog and copying and pasting something into it.

The goal is to get the unwitting victim to download malware such as infostealers (usually LummaStealer), remote access tools such as Xworm, AsyncRAT, NetSupport, and SectopRAT; loaders like Latrodectus and MintsLoader; and rootkits.

In the blog, Microsoft provides tips to defenders for fighting ClickFix attacks, including recommending they enable PowerShell script block logging to detect and analyze obfuscated or encoded commands, which would provide visibility into malicious script execution that might otherwise evade traditional logging.
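
Once script block logging is enabled, the resulting events (Windows Event ID 4104) can be swept for common obfuscation markers. The sketch below assumes the events have been exported to JSON; the record shape ({"EventID", "ScriptBlockText"}) and the marker list are my assumptions for illustration, not a Microsoft-specified format:

```python
import json

# Markers that commonly appear in obfuscated or encoded PowerShell.
SUSPICIOUS = ("frombase64string", "fromhex", "-enc", "invoke-expression", "iex")

def flag_script_blocks(exported_json: str) -> list:
    """Return script block logging events (Event ID 4104) whose recorded
    script text contains a common obfuscation marker."""
    hits = []
    for event in json.loads(exported_json):
        text = event.get("ScriptBlockText", "").lower()
        if event.get("EventID") == 4104 and any(m in text for m in SUSPICIOUS):
            hits.append(event)
    return hits
```

String matching like this is a triage aid, not a detection engine; its value is that 4104 events record the decoded script text, so even commands that arrived hex- or Base64-encoded show up in readable form.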

This article originally appeared on CSOonline.

Kategorie: Hacking & Security