RSS Aggregator
Global recruitment giant says 71% of human firewalls saw wages stagnate last year as threats and responsibilities grew Cybersecurity professionals were the most overlooked workers in IT when it came to pay rises in 2025, according to new figures from recruiter Harvey Nash.…
Microsoft is investigating an ongoing Outlook.com outage that is causing intermittent sign-in issues and preventing customers from accessing their mailboxes. [...]
Anthropic’s Claude Mythos Preview has dominated security discussions since its April 7 announcement. Early reporting describes a powerful cybersecurity-focused AI system capable of identifying vulnerabilities at scale and raising serious questions about how quickly organizations can validate, prioritize, and remediate what it finds.
The debate that followed has mostly focused on the right…
Attacks actively targeting servers running TrueConf video conferencing software in Russia since September 2025 have been attributed to a pro-Ukrainian hacktivist group called PhantomCore.
That's according to a report published by Positive Technologies, which found the threat actors to be leveraging an exploit chain comprising three vulnerabilities to execute commands remotely on susceptible…
A home security biz getting digitally burgled is not a great look - but that's exactly where ADT finds itself. The company has confirmed a cyber intrusion following an extortion attempt by the ShinyHunters crew, which claims to have made off with more than 10 million records.

US-based ADT is one of the world's largest providers of monitored home alarm systems, selling everything from burglar alarms and cameras to smart home kits, all pitched on keeping unwanted visitors out. On Friday, the company said it detected "unauthorized access" on April 20, shut it down, and brought in outside incident responders, with law enforcement looped in.

According to ADT, the intruder made off with a "limited set" of data covering names, phone numbers, and addresses, with a smaller slice including dates of birth and the last four digits of Social Security or tax ID numbers. No payment data was accessed, it said, and the firm was keen to stress that customer security systems were not touched.

That's the official version. ShinyHunters, meanwhile, is telling a rather different story. In a post on its dark web leak site, seen by The Register, the crew claims it lifted "over 10M Salesforce records containing PII and other internal corporate data" and is now airing the lot after talks with ADT went nowhere. "The company failed to reach an agreement with us despite our incredible patience, all the chances and offers we made," the group said. "They don't care."

The mention of Salesforce hints at a possible SaaS foothold rather than someone fiddling with alarm panels. While ADT has yet to confirm how the intruders gained access, it said in a separate 8-K filing [PDF] that attackers accessed "certain cloud-based environments."

There is, to put it mildly, a gap between "limited set" and "10 million records." Companies tend to define incidents as tightly as possible, while crooks tend to do the opposite. The truth usually lands awkwardly in between.
Have I Been Pwned has now put a number on it, listing 5.5 million unique email addresses - a figure that sits far nearer ShinyHunters' millions than ADT's "limited set." ShinyHunters recently made similar claims about cruise company Carnival Corporation, complete with talk of failed negotiations and a looming data dump.

ADT has not yet responded to questions from The Register about how it was compromised, how many people were affected, whether customers outside the US are involved, or whether it has filed breach notifications with state attorneys general.

For a company built on keeping intruders out, this one has already got inside the front door. Whether it also cleaned out the filing cabinets is the part still being argued over. ®
Security giant says attackers grabbed 'limited set' of data. Crooks claim 10 million records.…
Cybersecurity researchers have flagged dozens of Microsoft Visual Studio Code (VS Code) extensions on the Open VSX repository that are linked to a persistent information-stealing campaign dubbed GlassWorm.
The cluster of 73 extensions has been identified as cloned versions of their legitimate counterparts. Of these, six have been confirmed to be malicious, with the remaining acting as seemingly…
Keep the patches away for as long as you like Microsoft has devised a solution to the problem of Windows Updates that break customer devices – users are now able to pause them for as long as they like.…
Updated 24 Apr 2026 | The milestone version of the browser brings a host of improvements. Firefox has previously offered a page translator. It isn't as good as the cloud competition, but it can work without a connection and promises complete privacy. Mozilla is building a real-time translator on this technology. It also supports…
Smartphones with "Ultra" in the name are flooding the market, including in Europe • We tested the photography-focused Oppo Find X9 Ultra • It is aimed at photography, high performance, and a quality display
Last autumn, Prusa Research first showed off its new INDX multi-material 3D printing system. It is developing it together with Sweden's Bondtech, which also opened pre-orders for the first Founders batch at the turn of the year for the least patient fans.
In Holešovice they took their time, but they didn't waste it, and INDX has further…
A non-peer-reviewed study claims that fruit and vegetables increase the risk of lung cancer • Experts criticize the missing control group and the entirely unfounded hypothesis • Existing research demonstrates the enormous health benefits of a plant-based diet
UK’s data watchdog confirms its boss has been off the job since February while an HR investigation runs The UK's data watchdog is without its chief after John Edwards stepped aside from the Information Commissioner's Office while an independent workplace investigation examines unspecified HR matters.…
OPINION In retrospect, calling it Mythos made it a hostage to fortune. Anthropic may have hoped that the name implied its AI code security model had mythical god-like powers, but there's an alternate reading. Another definition of mythos is a set of beliefs of obscure origin that are incompatible with reality.

That reality is trickling in, and it's looking less mythical, more typical. Mythos is a great tool that can automate a lot of the things expert humans do, and it's the expert humans who get the most from it. It is very good at finding classes of vulnerability that humans know about, while not finding ones that they don't. Training, amirite?

Project Glasswing, limiting early use to trusted partners with a real need, is probably a responsible approach to using its powers for good, but other unrestricted models are quite good at this too. Some hype, some truth, LLMs gonna LLM.

It is cynical to say the only real innovation is an AI company operating ethically. Equally cynical is seeing the closed roll-out and the attendant publicity as merely an exercise in hype. It is more constructive, arguably more accurate, and certainly more exciting, to take all this as an early glimpse of a better future - one where the threat landscape stops being a function of geological and climatic forces we can't control, turning instead into one cultivated, controlled and gratifyingly anti-climactic.

Two propositions point the way. One is that the effectiveness of tools like Mythos will continue to evolve, exposing more and more structural and individual code flaws. The other is that these tools will inevitably become generally available. How quickly and cheaply may be controllable, but the outcome is inevitable. There are no long-term secrets in IT.

Right now, and for some time to come, most running code has been written in the pre-industrial age of vulnerability detection. Eyeballs, not AI balls, did the work.
This is a bad public environment into which to dump roaming packs of implacable vuln-hunting robots. If they come too soon, it'll be messy. And they are coming. But if we survive that transition intact, then let the robots roam at will.

There is one class of code that is guaranteed to present no security risks whatsoever, and that's undeployed code. New code has a lot of problems, some caught before deployment and some that aren't, but never an infinite number. Where truly excellent tools exist, code can be made truly excellent before release. It doesn't matter if the same tools are available to the bad guys thereafter.

A good model, and one cited often, is aviation safety. At the beginning of the jet age, new airliners had structural and mechanical faults that made them fall out of the sky. Over time, not only did design and material knowledge improve, but the engineering and regulatory disciplines evolved alongside. Now, we still have crashes, but they are inevitably traceable to things that could and should have been done right, but weren't. There's no new undiscovered class of failure waiting in the wings. It is highly unlikely that code is any different - after all, we've been doing it precisely as long as we've been flying jets.

Just fixing code vulnerabilities doesn't fix security, in the same way that knowing how to make and fly exquisitely safe aircraft doesn't stop fuel contamination, flocks of geese, or foolish humans from creasing the things. It does help immensely, though. Looking at exploits based on long chains of known and unknown vulns shows how flaky code can be, but it also shows how removing just one of those bugs shuts down the entire attack. The Swiss cheese model of failure works less and less well the more the cheese tends to cheddar.
As for the holes outside the code - the supply chain exploits, the social engineering, the straightforward inside sabotage job - to the extent that we can encode, model and train on them, they too will be amenable to the inexhaustible patience of the inference engines. And while huge swathes of enterprise infrastructure continue to run old, unpatched or misconfigured systems, it'll be like flying on aircraft from the Age of Death. There's no IT equivalent of the FAA with the power to ground that which should never be flying, much as that would be a fun counter-factual. This too shall pass.

There is no way that a tool which catches vulnerabilities by the hundred does not make old code safer, and new code so much more so. It will be most interesting to see how the tools for finding flaws evolve alongside the techniques for designing, factoring and writing code for inherent strength. Nobody should expect the way things are now to be the most efficient, least expensive way there is.

Nor should anyone expect human expertise to fall out of use. The fact that so many aviation safety issues revolve around human failure shows how intrinsic humans still are in design, construction, maintenance and operation aloft. Let computers do what computers are good at, let humans do what humans are good at. Old but true.

We know from decades of digital life that humans aren't so good at security, and that computers aren't so hot at it either. In another old saying - give us the tools and we can finish the job. Mythos isn't a tool that can let us do that, not yet. AI in general seems determined to make things worse. Now, at last, we can see a path forward, a different way of doing things that is likely to actually happen. What was a threat landscape can become a garden where good things grow. That's no myth, that's the future. ®
AI vuln-hunter finds what humans taught it to find. Funny that.…
Believe it or not, the Ryzen 9 9950X3D2 made it into Amazon's 10 best-selling processors at the end of the week - despite its $899 price. It seems users' opinions don't much resonate with the conclusions of (some) reviews…
Every CEO and executive enthusiastically slashing headcount in anticipation of an AI-driven productivity boom should read a new meta-analysis from the UK’s Royal Docks School of Business and Law. It suggests those decision-makers might be optimizing for the wrong thing.
While mass layoffs have an immediate measurable payoff, the study says the best use of AI is to boost human cognition and decision-making, not replace it. The research looks at how people can leverage AI to improve how knowledge is created and shared.
The study found that AI excels at tackling complex tasks quickly, while people excel at tasks involving judgment, meaning, and responsibility. AI can also improve an organization’s “collective intelligence” by pulling together facts and ideas from various subjects into one clear picture.
For example:
- A hospital where AI surfaces relevant research from specialties the treating physician doesn’t follow, but the doctor still makes the call
- A law firm where AI cross-references precedent across jurisdictions in minutes, while partners decide the best argument for the client
- A product team where AI synthesizes feedback from support tickets, sales calls, and app reviews — but humans decide what to build
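The division of labor in the examples above - AI aggregates and synthesizes, a human makes the call - can be sketched as a minimal human-in-the-loop pipeline. This is an illustrative sketch only; the function names and feedback data are hypothetical, not from the study:

```python
# Minimal sketch of the pattern the study describes: AI synthesizes scattered
# signals into one clear picture; a human retains the decision.
# All names and data below are hypothetical illustrations.
from collections import Counter

def ai_synthesize(feedback_sources):
    """Stand-in for the AI step: pull recurring themes out of many raw channels."""
    themes = Counter()
    for _source, items in feedback_sources.items():
        themes.update(items)
    # Return a ranked summary - not a decision.
    return themes.most_common()

def human_decide(ranked_themes, capacity=1):
    """Stand-in for the human step: judgment, meaning, responsibility.
    Here a trivial rule; in reality, a person weighing the summary."""
    return [theme for theme, _count in ranked_themes[:capacity]]

feedback = {
    "support_tickets": ["slow search", "slow search", "login errors"],
    "sales_calls": ["missing export", "slow search"],
    "app_reviews": ["login errors", "slow search"],
}

summary = ai_synthesize(feedback)   # AI: one picture from three channels
roadmap = human_decide(summary)     # Human: decides what to build
print(roadmap)                      # ['slow search']
```

The point of the sketch is the boundary: the AI step only ranks and summarizes, and authorship of the decision stays with the human step.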
Combining AI and people this way is far more effective than either working independently.
Despite huge gains in the technology’s capabilities, AI still needs people for interpretation and for making ethical choices, according to the study. And it warns that over-reliance on AI erodes irreplaceable human judgment.
Instead of assuming AI can replace human expertise, organizations should focus on building “knowledge ecosystems” (the ways groups create, store, and share information) where AI supports human learning, innovation, and decision-making, according to the study.
The goal shouldn’t be to ban AI or replace employees outright, but to use AI to cultivate a powerful knowledge ecosystem that captures knowledge, facilitates its movement, and creates new understanding. (Think Slack channels, wikis, tribal knowledge, onboarding docs, expert networks, and AI layers on top.)
While replacing employees with AI captures cost savings, it surrenders the collective-intelligence opportunity.
On the cultivation of human talent
Initially, many organizations responded to the emergence of powerful AI chatbots and tools with a simplistic “we need more of this.” Now, it’s time to confront the “skills atrophy paradox.”
Some companies are trying to replace junior employees with AI used by senior employees. But if that’s happening at scale, where do tomorrow’s senior employees come from?
According to a new paper titled “AI Assistance Reduces Persistence and Hurts Independent Performance,” written by researchers from major US and UK universities, reliance on AI chatbots erodes human capability.
The study tested the effects of AI assistants such as ChatGPT on tasks like math and reading comprehension with over 1,200 participants. It found that while AI improved performance, scores dropped sharply once it was removed, and users were more likely to give up on hard problems than those who didn’t use AI at all.
These aren’t long-term effects. They appear after only about 10 to 15 minutes of using AI — about the same time it takes to drink a cup of coffee.
The researchers don’t recommend banning AI, but argue it should be used to help people grow and learn.
The takeaway from both studies: organizations benefit greatly by keeping people in authorship of decisions rather than demoting them to rubber-stamping AI’s output.
Another error is to focus too much on the narrow idea of “productivity” or output. Companies that keep people in charge will be more legally defensible, more trusted by customers, and better at catching the high-cost mistakes AI makes confidently, according to the Royal Docks study.
How to build a strong ‘knowledge ecosystem’
The building blocks of a human-AI knowledge ecosystem are, according to the Royal Docks study:
- Workflow redesign: map tasks by who (or what) is best suited — then design handoffs, not replacements
- New roles: hire or cultivate AI specialists
- Training shift: from domain skills alone to metacognition — knowing when and how to combine individual personal knowledge with AI input
- Documentation matters more, not less: invest in high-quality, thorough documentation of everything, knowing that AI can handle the resulting volume and complexity
- Ethical guardrails baked in: use people to keep AI aligned with human- and business-centered goals
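The “workflow redesign” and “ethical guardrails” items above can be made concrete with a small task-routing sketch. The task categories and routing rules here are illustrative assumptions, not prescriptions from the study:

```python
# Illustrative sketch of "map tasks by who (or what) is best suited, then
# design handoffs, not replacements": route each task to AI or a human, and
# force human review on anything consequential. Categories are assumptions.
AI_SUITED = {"summarize", "cross_reference", "aggregate"}
HUMAN_SUITED = {"judgment", "ethics", "final_decision"}

def route(task):
    """Return who handles the task, preserving a handoff to a person."""
    if task["kind"] in AI_SUITED:
        # AI drafts; a human still reviews anything flagged consequential.
        return "ai_then_human_review" if task.get("consequential") else "ai"
    if task["kind"] in HUMAN_SUITED:
        return "human"
    return "human"  # default unmapped work to human authorship

tasks = [
    {"kind": "aggregate"},
    {"kind": "cross_reference", "consequential": True},
    {"kind": "final_decision"},
]
print([route(t) for t in tasks])  # ['ai', 'ai_then_human_review', 'human']
```

Defaulting unmapped work to humans is the guardrail: the system has to earn each delegation to AI rather than inherit it.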
The new AI strategy
The uncomfortable truth in the Royal Docks findings isn’t that AI is less powerful than we thought. It’s that its power is wasted on the strategy most organizations have chosen for it.
Replacement is a one-time cost saving. But using AI as part of a real knowledge ecosystem, where AI makes humans smarter and humans keep AI honest, delivers compounding advantages.
To focus on the cost savings of cut salaries is to fall for the quantitative fallacy, which is to favor the measurable and believe the unmeasurable isn’t important or doesn’t exist.
This will all play out over time. The companies replacing too many employees in the hopes AI will do their jobs will find themselves at a competitive disadvantage against those who invest in building those powerful knowledge ecosystems and a culture of partnership between people and AI.
AI disclosure: I don’t use AI for writing. The words you see here are mine. I do use a variety of AI tools via Kagi Assistant (disclosure: my son works at Kagi) — backed up by both Kagi Search, Google Search, as well as phone calls to research and fact-check. I use a word processing application called Lex, which has AI tools, and after writing use Lex’s grammar checking tools to find typos and errors and suggest word changes. Here’s why I disclose my AI use and encourage you to do the same.