Experts warn: Swarms of AI bots threaten democracy
A group of researchers from Berkeley, Harvard, Oxford, Cambridge, and Yale warn that the rise of AI bots and AI agents could pose a serious threat to democracy.
For example, power-hungry politicians around the world can relatively easily create swarms of AI bots that flood social media and messaging services with propaganda and disinformation.
In this way, they can not only influence election results but also persuade parts of the population to replace parliamentary democracy with an authoritarian regime.
“It is highly conceivable that certain actors will attempt to mobilize virtual armies of LLM-driven agents to disrupt elections and manipulate public opinion — for example, by targeting large numbers of individuals on social media and other electronic media,” says Michael Wooldridge, a professor at Oxford, in a comment to The Guardian.
The researchers’ warning can be read in full in Science.
How much does AI improve work efficiency? Managers and employees disagree
A new survey conducted by AI consulting firm Section among 5,000 white-collar workers in the US, UK, and Canada shows a clear gap between how company management and employees perceive the benefits of AI at work, reports The Wall Street Journal.
Over 40% of C-level executives say the technology saves them more than eight hours per week. At the same time, two-thirds of employees without managerial roles say AI saves them less than two hours per week, or no time at all.
Many employees instead describe AI as stressful and difficult to use correctly, often because AI-generated material must be checked, corrected, or redone.
What’s more, there are few economic effects visible at the moment. In a global CEO survey by PricewaterhouseCoopers with nearly 4,500 participants, only 12% said that AI has so far resulted in both cost savings and increased revenue. Over half see no clear business benefit at all.
Google’s AI search gets support for ‘personal intelligence’
A week ago, Google decided to add “personal intelligence” to Gemini, which gives the AI tool access to your searches, your photos, your YouTube history, and your Gmail emails.
Now TechCrunch reports that US users also have access to personal intelligence in Google’s AI search.
“With personal intelligence, recommendations not only match your interests — they fit seamlessly into your life,” wrote Google Search Director Robby Stein in a blog post.
The idea is that the new feature will help users with everything from restaurant visits to hotel bookings, as the AI already knows your preferences.
To use personal intelligence, you first need to activate the feature. It will also be possible to turn it off if desired.
Workers challenge ‘hidden’ AI hiring tools in class action with major regulatory stakes
Workers are getting fed up with AI-based hiring practices.
A new class action lawsuit filed in California alleges that human candidates are being unfairly profiled by “hidden” AI hiring technologies that “lurk in the background” to collect “sensitive and often inaccurate” information about “unsuspecting” job applicants.
The suit specifically targets Eightfold AI, claiming that tools used by the company should be regulated in the same way as credit report bureaus are via The Fair Credit Reporting Act (FCRA) and state laws based on it.
The case could have broad-reaching implications for the increased use of AI in hiring.
“This lawsuit is a pivot point,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “It tells us that AI isn’t just being scrutinized for what it does, but for how it does it and whether people even know it’s happening to them.”
Violating the 55-year-old FCRA
The suit was filed in the Superior Court of California by New York City-based law firm Outten & Golden LLP, on behalf of Erin Kistler and Sruti Bhaumik. The plaintiffs claim they were barred from employment on several occasions by companies using AI-based hiring tools.
The class action complaint asserts that Eightfold AI violated federal and state fair credit and consumer reporting acts and unfair competition laws by collecting data on applicants and selling reports to companies for use in employment decision-making. These practices “can have profound consequences” for job-seekers across the US, the lawsuit claims.
Eightfold markets itself as the “world’s largest, self-refreshing source of talent data” and incorporates more than 1.5 billion global data points, including job titles and worker profiles across “every job, profession, [and] industry.” It counts among its customers corporate giants including Microsoft, Morgan Stanley, Starbucks, BNY, Paypal, Chevron, and Bayer.
The suit claims the Santa Clara-based company’s proprietary large language model (LLM) and deep learning-based technology analyze data from public resources including career sites, job boards, and resumé databases such as LinkedIn and Crunchbase. It also culls information from social media profiles, applicant locations, and behind-the-scenes tracking tools. None of these personal data points are ever included in job applications.
AI algorithms then rank a candidate’s “suitability” on a numerical scale of 0 to 5, based on “conclusions, inferences, and assumptions” about their culture fit, projected future career trajectory, and other factors. This method is intended to create a profile of the candidate’s “behavior, attitudes, intelligence, aptitudes, and other characteristics,” according to the lawsuit.
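For illustration only, here is a toy sketch of how weighted, inferred attributes might collapse into a single 0-to-5 number of the kind the complaint describes. Every field name, weight, and threshold below is invented for this sketch and is not drawn from Eightfold’s actual system.

```python
from dataclasses import dataclass

# Toy model of opaque candidate scoring: inferred (not self-reported)
# attributes are weighted and collapsed into one 0-5 "suitability" value.
# All names and weights here are hypothetical.

@dataclass
class InferredProfile:
    culture_fit: float           # 0..1, inferred from scraped data
    trajectory: float            # 0..1, projected future career path
    profile_completeness: float  # 0..1, coverage of collected data points

WEIGHTS = {"culture_fit": 0.5, "trajectory": 0.3, "profile_completeness": 0.2}

def suitability(p: InferredProfile) -> float:
    raw = (WEIGHTS["culture_fit"] * p.culture_fit
           + WEIGHTS["trajectory"] * p.trajectory
           + WEIGHTS["profile_completeness"] * p.profile_completeness)
    return round(raw * 5, 1)  # collapse to the 0-5 scale cited in the suit

# A cutoff such as `suitability(p) < 2.5` can discard a candidate before
# any human reads the application, which is the pattern the plaintiffs object to.
```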
However, these reports are “unreviewable” and “largely invisible” to candidates, who have no opportunity to dispute their contents before they are passed on to hiring managers, the plaintiffs argue. “Lower-ranked candidates are often discarded before a human being ever looks at their application.”
This method of report creation violates longstanding FCRA requirements, and there is no stipulated exemption for AI use, according to the suit.
The FCRA broadly defines consumer reports as any written, oral, or other communication from a consumer reporting agency that includes information on a person to determine their access to credit and insurance, as well as for “employment purposes.” According to the lawsuit, this definition covers reports that contain information on “habits, morals, and life experiences.”
Plaintiffs argue that, while automated screening technology did not exist when the FCRA was established in 1970, lawmakers at the time expressed concern about growing accessibility to consumer information through computer and data-transmission techniques, and that “impersonal blips,” inaccurate data, and analysis by “stolid and unthinking machines” could unfairly bar people from employment.
Thus, the lawsuit argues, agencies like Eightfold must disclose their practices, obtain certifications, and give consumers a mechanism to review and correct reports. “Large-scale decision-making based on opaque information is exactly the kind of harm the statute was designed to address.”
Neither the lawyers for the plaintiffs nor for the defendants responded to requests for comment. The Society for Human Resource Management (SHRM) also declined to comment.
Defensibility becomes the new bar
This lawsuit exposes a “governance failure” and “fundamental accountability gap,” noted Greyhound’s Gogia.
And it’s not the first, nor will it likely be the last; HR company Workday, for instance, is facing a lawsuit alleging that its AI-powered hiring tools make decisions based on race, and also discriminate against older and disabled applicants.
If courts agree that AI evaluations function like credit reports, hiring will be pushed into regulated territory, Gogia noted; this means CIOs must establish clarity and set rules around notification, transparency, audit rights, and contestability.
“If your hiring tools operate like decision engines, they need to be governed like decision infrastructure,” he said. And when they influence employment decisions, enterprises will have to prove they’ve done their homework: showing the logic behind a model, understanding data provenance, being able to explain why an applicant was rejected, and having processes in place to correct bad calls.
“Defensibility will become the new bar,” said Gogia.
Where AI hiring helps, where it hurts
That’s not to say that AI can’t be valuable in hiring; many real-world examples have proven that it can. The Human Resources Professionals Association, for one, points to successful use of AI in initial talent sourcing, screening, and assessment, while AI scribes can quietly take notes, helping recruiters focus more intently on candidate discussions.
Gogia agreed that AI can filter and rank large applicant pools, automate repetitive HR tasks, and identify overlooked candidates within internal databases. This means hiring teams can move faster, hone their focus, be more consistent, and reduce friction.
“But the moment AI moves into judgement territory, things get messy,” he emphasized. Scoring personality traits, predicting future roles, or evaluating the quality of a candidate’s education are all “subjective inferences dressed up as mathematical objectivity.”
Gogia advises clients to insist on human-readable evidence from vendors, including logs, bias audits, and disclosures about model updates. They should ask questions like: What did the model evaluate? Why did it rank one candidate higher over another? What can the hiring manager say if asked to justify that outcome?
The answers to those questions can lead to process changes. One of Greyhound’s European manufacturing clients, for instance, redesigned its hiring pipeline so that managers had to log a rationale at every decision point, even if AI had already created a shortlist. This helped improve the audit trail, catch errors, and taught the team to “treat AI as input, not verdict,” Gogia noted. And another client slowed its final screening process for senior hires because it couldn’t defend the decisions AI was influencing and realized the system wouldn’t be able to survive scrutiny.
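A minimal sketch of that “rationale at every decision point” pattern follows: the AI’s rank is recorded as an input, and the code refuses to log an action without a human-supplied justification. Function and field names here are illustrative, not taken from any vendor product.

```python
import datetime as dt
import json

def record_decision(candidate_id: str, ai_rank: float, action: str,
                    rationale: str, log_path: str = "hiring_audit.jsonl") -> None:
    """Append one human hiring decision, with its rationale, to an audit log."""
    if not rationale.strip():
        raise ValueError("a human rationale is required before acting on an AI rank")
    entry = {
        "ts": dt.datetime.now(dt.timezone.utc).isoformat(),
        "candidate": candidate_id,
        "ai_rank": ai_rank,   # the AI output is logged as an input...
        "action": action,     # ...while the recorded decision is the human's
        "rationale": rationale,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: the manager must justify each step, even for AI-shortlisted candidates.
record_decision("cand-0042", ai_rank=4.1, action="advance to interview",
                rationale="Strong portfolio match; AI rank consistent with review.")
```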
“CIOs, CHROs, legal, risk — all need to co-own this now,” said Gogia. “That starts by restoring the human’s role as an accountable actor, not just a passive observer. The future of hiring tech is human with machine, governed from day one.”
Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"
The lead developer of one of the Internet’s most popular networking tools is scrapping its vulnerability reward program after being overrun by a flood of low-quality reports, many of them AI-generated slop.
“We are just a small single open source project with a small number of active maintainers,” Daniel Stenberg, the founder and lead developer of the open source app cURL, said Thursday. “It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.”
Manufacturing bogus bugs
His comments came as cURL users complained that the move was treating the symptoms caused by AI slop without addressing the cause. The users said they were concerned the move would eliminate a key means for ensuring and maintaining the security of the tool. Stenberg largely agreed, but indicated his team had little choice.
Okta SSO accounts targeted in vishing-based data theft attacks
Spotify lawsuit behind shutdown of pirate library domains
A lawsuit filed by Spotify and several major record labels was behind the shutdown of several of Anna’s Archive’s domains earlier this year, according to recently published documents from a federal court in the US, reports TorrentFreak.
The background is that in December 2025, Anna’s Archive announced it had backed up Spotify’s catalog and planned to release large amounts of the collected data. According to the lawsuit, the archive circumvented Spotify’s DRM and scraped metadata and audio files linked to hundreds of millions of songs.
On December 29, Spotify, together with companies such as Universal, Sony, and Warner, filed a sealed lawsuit in New York. Shortly thereafter, the court issued a temporary order targeting domain registrars, web hosts, and other intermediaries, which led to the shutdown of Anna’s Archive’s .org and .se domains in early January. Among the recipients of the order was the Swedish Internet Foundation.
In mid-January, a broader injunction followed, which also covers operators such as Cloudflare and requires them to block access to the copyrighted material. Shortly thereafter, Anna’s Archive’s special section for Spotify downloads was removed and marked as unavailable. The legal process is still ongoing.
This article originally appeared on ComputerSweden.
Anthropic’s Claude AI gets a new constitution embedding safety and ethics
Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.
Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), and “genuinely helpful” (benefiting its users), as well as “compliant with Anthropic’s guidelines”.
According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.
Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.
While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.
“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic.
The constitution will help Claude to move from simply following a limited checklist of approved possibilities to one based on deeper reasoning. So, for example, instead of keeping data private because this agrees with a rule, the constitution will help it understand the ethical framework in which privacy is important.
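Anthropic has not published the exact training mechanics for the new constitution, but its earlier Constitutional AI work used a critique-and-revise loop that this idea maps onto naturally. A hypothetical sketch, where the caller supplies any text-generation function:

```python
from typing import Callable

def revise_with_principle(draft: str, principle: str,
                          llm: Callable[[str], str]) -> str:
    """One critique-and-revise step guided by a single constitutional principle.

    The prompts ask for the *why* behind the principle, mirroring the
    constitution's shift from rule-following to reasoned judgment.
    """
    critique = llm(
        f"Principle: {principle}\n\nResponse: {draft}\n\n"
        "Explain where the response falls short of the principle, and why the "
        "principle matters in this situation, not merely whether a rule was broken."
    )
    return llm(
        "Rewrite the response so it genuinely satisfies the principle.\n\n"
        f"Critique: {critique}\n\nOriginal response: {draft}"
    )
```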
The effect of this added complexity is length, with the new version expanding dramatically to 84 pages and 23,000 words. If this sounds long-winded, the reasoning is that the document has been written to be ingested primarily by Claude itself. “It [the constitution] needs to work both as a statement of abstract ideals and a useful artifact for training,” the announcement said.
It also noted that the document is currently written for mainline, general access Claude models, and that specialized models may not fully fit, but said that the company will “continue to evaluate” how to make them meet the constitution’s core objectives. In addition, it promised to be open about missteps “in which model behavior comes apart from our vision.”
Intriguingly, Anthropic has released Claude’s constitution under a Creative Commons CC0 1.0 Deed, which means it can be used freely by other developers in their models.
Don’t be evil
The context for the update is rising skepticism about the reliability, ethics, and safety of large proprietary LLMs. From the start, Anthropic, which was founded in 2021 by former OpenAI employees worried about the latter’s direction, has sought to set itself apart by taking a different approach.
More contentious is the constitution’s oblique reference to the debate over AI consciousness. “Claude’s moral status is deeply uncertain. We believe that the moral status of AI models is a serious question worth considering. This view is not unique to us: some of the most eminent philosophers on the theory of mind take this question very seriously,” it states on page 68.
In August, Anthropic introduced a new feature to its most advanced Claude Opus 4 and 4.1 models it said would end a conversation if a user repeatedly tried to push harmful or illegal content, as a mode of self-protection. And in November, an Anthropic research paper suggested that the same Opus 4 and 4.1 models showed “some degree” of introspection, reasoning about past actions in an almost human-like way.
In fact, LLMs are statistical models, not conscious entities, countered Satyam Dhar, an AI engineer with technology startup Galileo.
“Framing them as moral actors risks distracting us from the real issue, which is human accountability. Ethics in AI should focus on who designs, deploys, validates, and relies on these systems,” he said.
“An AI ‘constitution’ can be useful as a design constraint, but it doesn’t resolve the underlying ethical risk,” he added. “No philosophical framework embedded in a model can replace human judgment, governance, and oversight. Ethics emerge from how systems are used, not from abstract principles encoded in weights.”
This article originally appeared on CIO.com.
Nadella warns of AI bubble unless more people use the technology
Microsoft CEO Satya Nadella warns that the AI boom could become a speculative bubble if the technology does not gain wider acceptance outside of large technology companies and wealthy economies.
“For this not to be a bubble by definition, the benefits need to be spread much more evenly,” Nadella said at the World Economic Forum in Davos, according to the Financial Times.
Nadella said he is confident that AI will transform several industries.
“I am much more convinced that this is a technology that will actually build on the foundation of cloud and mobile platforms, spread faster, bend the productivity curve, and create local surpluses and economic growth around the world,” said Nadella.
He also emphasized that the future is unlikely to belong to a single dominant AI model. According to Nadella, the core principle will be for companies to combine different models with their own data and model distillation to create smaller and cheaper solutions.
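Distillation itself is a well-established technique: a small “student” model is trained to match the softened output distribution of a larger “teacher.” The article does not say which recipe Nadella has in mind, but a minimal PyTorch sketch of the standard distillation loss looks like this:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Example: a batch of 8 samples over a 100-class output.
loss = distillation_loss(torch.randn(8, 100), torch.randn(8, 100))
```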
This article originally appeared on ComputerSweden.
Apple’s Siri to see two major AI improvements this year
More details about the expected cadence of Apple’s plans to turn Siri into an AI-driven chatbot are emerging, and Mark Gurman tells us Apple has a two-tier approach in mind.
The current thinking is that Apple’s Gemini-powered chatbot will arrive in June with iOS 26.4, which will be a significant improvement in itself. “Other than the chatbot interface, the operating systems aren’t getting big changes this year,” Gurman wrote. “Apple is more focused on improving performance and fixing bugs.”
Project Campos
But Apple won’t stop there; it’s also developing a project, codenamed Campos, to bring a more serious chatbot experience in the form of an AI-powered Siri for Macs, iPhones, iPads, and other devices. That particular advance is set to ship with iOS 27 later this year.
Google will provide the AI model, which Apple will exploit in both iterations.
The first update will bring the contextual tools Apple first promised back in 2024, including onscreen awareness. It should also see the introduction of an answer engine, World Knowledge Answers.
The next iteration, part of iOS 27, will have a voice and type interface and be deeply integrated within the OS. It will be able to run apps, change settings, and use personal data, though it may have limited access to such information to protect user privacy.
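The article does not describe Apple’s actual mechanism, but “limited access to personal data” resembles scope-based permission gating: each tool declares the scopes it needs, and a dispatcher refuses calls the user has not granted. A sketch with invented tool and scope names:

```python
# Hypothetical permission-gated tool dispatch; not Apple's API.
GRANTED_SCOPES = {"settings.write"}  # the user granted settings, not contacts

TOOL_SCOPES = {
    "set_brightness": {"settings.write"},
    "read_contacts": {"contacts.read"},
}

TOOLS = {
    "set_brightness": lambda level: f"brightness set to {level}",
    "read_contacts": lambda: ["(contact data)"],
}

def dispatch(tool: str, *args):
    """Run a tool only if every scope it needs has been granted."""
    missing = TOOL_SCOPES[tool] - GRANTED_SCOPES
    if missing:
        raise PermissionError(f"{tool} requires ungranted scopes: {missing}")
    return TOOLS[tool](*args)

print(dispatch("set_brightness", 0.8))  # allowed
# dispatch("read_contacts")             # would raise PermissionError
```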
What this all means is that by the end of the year, Apple should have equipped its products and systems with AI tools that compete with any other AI technology in the industry. For users, the beauty of this is that you should gain access to these tools and services for the price of your Apple product. Apple seems resolute in fighting to ensure that, despite its relatively late entrance into the generative AI (genAI) space, it will not be left behind.
Getting to this point has been a real struggle for the company.
Power struggles
The lengthy road to AI that Apple has traveled has included the departure of members of the original AI team and the rise of Apple’s software chief, Craig Federighi, who has taken on direct oversight of the task. The Information reports that, despite adopting Google Gemini, Apple continues to develop its own AI models — particularly those capable of running on device.
The quest for edge AI seems central to Apple’s future approach, and to support it the company will consider acquiring smaller AI firms that can deliver optimized, compressed AI models. The company also intends to work with third-party models to shrink and adapt them to work more fully on Apple’s hardware.
Doing so is important, as the more intelligence Apple can put at the edge, the more it can reduce demand on hosted cloud-based AI, which will reduce infrastructure costs. A second advantage is that enabling on-device AI makes it possible to build and introduce a wide number of AI-augmented products.
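Shrinking a model for on-device use typically pairs distillation with compression such as quantization. As a generic illustration (not Apple’s pipeline), PyTorch’s post-training dynamic quantization stores the weights of Linear layers as int8, cutting their memory footprint roughly fourfold:

```python
import torch

# A stand-in network; in practice this would be a distilled language model.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 128),
)

# Weights become int8; activations are quantized on the fly at inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

out = quantized(torch.randn(1, 512))  # same interface, smaller weights
```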
An AI pin from Apple?
One such product could arrive this time next year. The Information shares a report that hints at the extent to which AI at Apple may inform the company’s future product releases. It claims Apple intends to release an AirTag-sized smart AI device in 2027. (The concept sounds remarkably like Humane’s discontinued AI pin.)
If the device ever gets released — and the report concedes development is at an early stage — it will have cameras and microphones to give it situational awareness. And I imagine it will be a connected device running the advanced versions of Siri Apple is building now.
Given the wearable nature of the device, it’s hard not to imagine that this as-yet-unconfirmed product will turn out to be Apple’s response to the work its former designer Jony Ive is doing with OpenAI. (I’m not convinced there will be huge demand for AI-driven products that gather video and audio wherever you go; I expect a consumer reaction against this kind of ambient AI, particularly as data brokers begin to aggregate such information.)
All the same, the fact that Apple now seems to have gathered enough momentum to compete with the biggest names in the genAI space suggests that, once again, those who bet against the company in the last couple of years may yet eat a little humble pie.
You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
Curl ending bug bounty program after flood of AI slop reports
SmarterMail auth bypass flaw now exploited to hijack admin accounts
Osiris Ransomware Emerges as New Strain Using POORTRY Driver in BYOVD Attack
Critical GNU InetUtils telnetd Flaw Lets Attackers Bypass Login and Gain Root Access
Microsoft Teams to add brand impersonation warnings to calls
INC ransomware opsec fail allowed data recovery for 12 US orgs
Critical Cisco UC bug actively exploited
Cisco has released patches for a critical remote code execution vulnerability in its unified communications products that attackers are actively exploiting. The US Cybersecurity and Infrastructure Security Agency has added the flaw to its Known Exploited Vulnerabilities catalog, confirming the exploitation.
Cisco disclosed CVE-2026-20045 along with patches for Unified Communications Manager, Unity Connection, and Webex Calling Dedicated Instance. The company assigned the vulnerability a “Critical” severity rating despite its CVSS score of 8.2.
“Cisco has assigned this security advisory a Security Impact Rating (SIR) of Critical rather than High as the score indicates,” the company said in its advisory. “The reason is that exploitation of this vulnerability could result in an attacker elevating privileges to root.”
CISA’s addition of the vulnerability to its KEV catalog confirms attackers are exploiting it in the wild. “This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise,” CISA said in its alert.
This is the second actively exploited Cisco vulnerability CISA has added to its KEV catalog in recent weeks. Last week, the agency added CVE-2025-20393, affecting Cisco’s AsyncOS software.
“Other collaboration products, including Contact Center Enterprise, Emergency Responder, Finesse, Unified Intelligence Center, and Unified Contact Center Express, are not vulnerable to CVE-2026-20045,” the advisory added.
Root-level compromise with no user interaction
The vulnerability stems from improper validation of user-supplied input in HTTP requests. “An attacker could exploit this vulnerability by sending a sequence of crafted HTTP requests to the web-based management interface of an affected device,” Cisco explained in the advisory. “A successful exploit could allow the attacker to obtain user-level access to the underlying operating system and then elevate privileges to root.”
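Cisco has not published the flawed code, so the following is only a generic illustration of the vulnerability class (improper validation of user-supplied input): a hypothetical handler passes a request parameter straight into a file path, and an allowlist check closes the hole.

```python
import re

# Hypothetical management-interface handler; not Cisco's code.
SAFE_NAME = re.compile(r"^[A-Za-z0-9_\-]{1,64}$")

def load_report_unsafe(name: str) -> bytes:
    # Vulnerable: 'name' comes from an HTTP request, so input like
    # "../../etc/shadow" escapes the intended directory.
    with open(f"/var/reports/{name}", "rb") as f:
        return f.read()

def load_report_safe(name: str) -> bytes:
    # Fixed: reject anything that is not a plain identifier before use.
    if not SAFE_NAME.fullmatch(name):
        raise ValueError("invalid report name")
    with open(f"/var/reports/{name}", "rb") as f:
        return f.read()
```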
The attack requires no user interaction and can be carried out by unauthenticated remote attackers, making it particularly dangerous for internet-facing unified communications deployments, the advisory added.
Cisco’s Product Security Incident Response Team added that it is “aware of attempted exploitation of this vulnerability in the wild,” underscoring the urgency of patching.
No workarounds available
Cisco confirmed in the advisory that there are no workarounds or mitigations available for CVE-2026-20045. The company has released fixes specific to each product version.
For Unified Communications Manager, IM&P, SME, and Webex Calling Dedicated Instance running version 14, the company suggested administrators can upgrade to version 14SU5 or apply a version-specific patch file. Organizations running version 15 can apply version-specific patches for 15SU2 and 15SU3a, with a full release of version 15SU4 expected in March 2026, the company added.
Unity Connection administrators have similar options, with version-specific patch files available for releases 14SU4 and 15SU3.
Organizations still running version 12.5 face a harder choice: Cisco won’t release patches for this version and recommends migrating to a supported release.
“Customers are advised to migrate to a supported release that includes the fix for this vulnerability,” Cisco said in the advisory. Patches are version-specific, and administrators should consult the README files attached to each patch for deployment details, the advisory added.
Federal agencies face a deadline
CISA’s inclusion of CVE-2026-20045 in the KEV catalog triggers mandatory remediation timelines for Federal Civilian Executive Branch agencies under Binding Operational Directive 22-01. Federal agencies must patch the vulnerability within two weeks of its January 21 addition to the catalog.
While BOD 22-01 applies specifically to federal agencies, CISA “strongly recommends” that all organizations treat KEV-listed vulnerabilities as high-priority patching targets. The catalog tracks flaws with confirmed active exploitation, making them significantly more likely to be weaponized against a broader range of targets.
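CISA publishes the KEV catalog as a JSON feed, so checking whether a CVE is listed, and its federal remediation due date, can be scripted. A small sketch, assuming the feed URL and field names in use at the time of writing:

```python
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_entry(cve_id: str) -> dict | None:
    """Return the KEV catalog entry for a CVE, or None if it is not listed."""
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    for vuln in catalog["vulnerabilities"]:
        if vuln["cveID"] == cve_id:
            return vuln  # fields include dueDate and requiredAction
    return None

entry = kev_entry("CVE-2026-20045")
if entry:
    print("Listed in KEV; federal remediation due", entry.get("dueDate"))
```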
How to patch
Cisco said organizations should check for signs of potential compromise on all internet-accessible instances after applying the fixes. The company advised administrators to review system logs and configurations for any unauthorized changes or suspicious activity that may indicate prior exploitation.
For organizations unable to immediately upgrade to fixed releases, the company said version-specific patch files offer an interim remediation option. However, Cisco noted that patches must match the exact software version running on the device, and administrators should verify compatibility before deployment.
eBay bans illicit automated shopping amid rapid rise of AI agents
On Tuesday, eBay updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.
eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.
At first glance, the phrase "agentic commerce" may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While fitting loosely under one label, these tools come in many forms.
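Terms-of-service bans like eBay's are not machine-readable, but a well-behaved automated client can at least honor robots.txt before fetching anything, a check Python's standard library covers. It is a necessary but not sufficient step, since the User Agreement applies regardless of what robots.txt allows:

```python
from urllib import robotparser

# Check whether a given user agent may fetch a URL under robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.ebay.com/robots.txt")
rp.read()

url = "https://www.ebay.com/sch/i.html"   # example search URL
allowed = rp.can_fetch("example-agent/1.0", url)
print("robots.txt permits fetch:", allowed)
```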
Why Active Directory password resets are surging in hybrid work
ThreatsDay Bulletin: Pixel Zero-Click, Redis RCE, China C2s, RAT Ads, Crypto Scams & 15+ Stories