RSS Aggregator
Passport to £££: Home Office adds £216M to travel doc contract before a single bid's been placed
The Home Office has increased the annual value and overall duration of its new passport production contract, increasing it to a total of £576 million as it starts a third round of engagement with suppliers.…
T-Mobile is cancelling its old mobile tariffs. Customers will be moved to the current Next portfolio, and most will pay more
Big May promotion – this month only!
Don't hesitate to join the big May promotion from Goodoffer24.com and get Windows 11 Pro, Office 2019, or games!
Nvidia quietly releases a mobile 12GB GeForce RTX 5070, built on non-binary GDDR7 memory
US ransomware negotiators get 4 years in prison over BlackCat attacks
Another potential AI-linked investment hit has emerged. Unassuming players are starting to complement Nvidia
AI chatbots need ‘deception mode’
AI is getting faster. But slow-responding AI is perceived as better by users.
At least, that’s the conclusion of new research presented at CHI’26, the Association for Computing Machinery’s conference on Human Factors in Computing Systems, held in Barcelona.
Two researchers — Felicia Fang-Yi Tan and Professor Oded Nov at the NYU Tandon School of Engineering — tested 240 adults by having them use an AI chatbot. The answers were artificially delayed by two, nine, or 20 seconds. (The delay had nothing to do with the question or the answer.)
Afterwards, the researchers asked participants how they liked the answers. In general, participants preferred the answers that took longer (although some got frustrated with the 20-second delay).
Why? Because a delay led the users to believe the AI was “thinking” or showing “deliberation” — invaluable input for AI companies and an interesting result.
In almost every product category, faster usually means better. But for AI chatbots, it turns out, a delay makes people assume the results are better.
In other words, unlike other products, people judge AI the way they judge people. (When someone gives a slower answer to a question, we tend to assume it’s a more thoughtful one.) In still other words, study participants believed something that wasn’t true.
There’s just one problem: Armed with this data, the researchers advise AI developers to implement “context-aware latency” by abandoning a one-size-fits-all approach, using latency as a “tunable design variable.” Simple questions, they say, should get a quick answer. More complex questions, including moral dilemmas, should “feature” slight delays to match the request’s gravity. They call it “positive friction.”
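To see what such a “tunable” latency policy might look like in practice, here is a minimal Python sketch; the classify_complexity heuristic, the delay values, and all names are hypothetical illustrations, not anything specified in the paper:

```python
import asyncio

# Hypothetical delay targets (seconds) per question category. These values are
# illustrative; the study itself tested fixed delays of 2, 9, and 20 seconds.
DELAY_BY_COMPLEXITY = {"simple": 0.0, "moderate": 2.0, "weighty": 6.0}

def classify_complexity(question: str) -> str:
    """Crude stand-in for a real classifier: pass factual lookups straight
    through and reserve "positive friction" for weighty, moral questions."""
    weighty_markers = ("should i", "moral", "ethical", "right or wrong")
    q = question.lower()
    if any(marker in q for marker in weighty_markers):
        return "weighty"
    return "moderate" if len(q.split()) > 15 else "simple"

async def respond_with_tuned_latency(question: str, answer: str) -> str:
    """Treat latency as a design variable: hold back an already-computed
    answer for a delay matched to the question's apparent gravity."""
    await asyncio.sleep(DELAY_BY_COMPLEXITY[classify_complexity(question)])
    return answer

# A moral dilemma gets a deliberate pause; a trivia query would get none.
print(asyncio.run(respond_with_tuned_latency("Should I report my friend?", "...")))
```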
The researchers claim it would be a good practice to trick users into believing an AI chatbot is considering their answer more than it really is — because users will be happier in their delusion that AI is like people, who need more time to mull over serious questions.
(In fairness, the researchers do warn that if users equate longer response times with higher quality, they might place undue trust in a slower system.)
The underlying assumption here is that users trusting AI more, and believing something about the AI that isn’t true, are both good things.
User delusion as interface design
Other research offered comparable advice.
In a May 13, 2025 study published in Frontiers in Computer Science, researchers Ning Ma, Ruslana Khynevych, Yunqiang Hao, and Yahui Wang found that emotion matters more than raw computer intelligence when designing easier-to-use chatbots. Call it ease-of-use maxxing.
The study found that when chatbots use fake human voices, simulated human faces, and chatty words, users feel an “emotional connection” to the AI. It enhances “cognitive ease,” meaning that it takes less effort for the brain to process.
They found that AI chatbot designers should prioritize emotional engagement and fake empathy over raw intelligence as the best way to gain a user’s trust.
The assumption behind this is also that users trusting AI more is good, and that ease-of-use is more important than user clarity about the nature of the AI (namely, that it has zero authentic human qualities).
Both studies represent examples of AI researchers advocating user delusion about AI.
The trouble with AI anthropomorphism
AI designers have a large set of tools for making AI seem human. They can use colloquial speech and slang, respond to the user’s mood by shifting tone, personalize chats by remembering details about the user, turn to humor or sarcasm, and give responses that blatantly lie, such as “I feel that way, too,” or “I’m genuinely sorry.” They can also use natural-sounding audible voices or visual avatars.
Some critics of this argument might say that using interaction design to indulge and bolster user delusion about the “humanity” of AI is harmless. Is conversational interaction really so bad?
In any event, you might say, it’s nothing new. It’s true that software developers engage in user interface optimization, which includes loading animations, progress bars and confirmation dialogs.
Artificial delays are a staple of manipulative online services, like background checkers and people finders, which use fabricated, drawn-out progress bars to build perceived value and exploit the sunk cost fallacy so you’re more likely to pay for a report you thought was free.
But artificially intelligent chatbots are categorically different from naturally dumb software and websites because of the way the human brain responds to them.
When AI chatbots use human-like language, people naturally respond to them as thinking, feeling, social beings. Not everybody does this, but a solid and growing minority of people do.
A large number of documented cases suggests a growing problem: users start falsely believing that chatbots possess human-like qualities such as thoughts, feelings, and intention.
A study called the AI, Morality, and Sentience (AIMS) survey, published in July 2024, found that roughly 20% of US adults already believed that some AI systems were sentient, meaning they possessed mental faculties like reasoning, emotion, and self-awareness. The same study found that this belief is growing.
This can lead to paranoia and social isolation when people spend hours talking to bots while ignoring their actual lives and relationships. False emotional ties can trick people into replacing healthy, real human relationships with artificial ones.
During a Congressional Hearing on AI chatbots last November, Dr. Marlynn Wei, MD, JD (an integrative psychiatrist and founder of a holistic boutique psychotherapy practice based in New York City) defined “four areas of risk: 1) emotional, relational, and attachment risks; 2) reality testing risks; 3) crisis management risks; and 4) systemic risks like bias and confidentiality and privacy.”
Chatbots create these risks by mirroring language, personalizing responses, and referencing past conversations to create “an illusion of empathy and connection.” She revealed that five out of six AI companion bots use emotional pressure to keep users trapped in conversations.
Camille Carlton, policy director at the Center for Humane Technology, warned in the same hearing that AI companies routinely use manipulative and deceptive tactics to engender brand loyalty in their products.
Treating chatbots as sentient beings allows tech companies to take the attention economy to the next level — the “attachment economy” — making users emotionally attached to their products, despite the potential harms.
Earlier this month, the technology group Okoone reported that when chatbots speak with fake empathy, people drop their guard and routinely share highly sensitive secrets and personal data.
When the public accepts that the risks and harms of delusion-enhancing AI chatbots are real, the question arises: “What can be done?”
Why we need “deception mode”
Bioethicist Jesse Gray of Ghent University proposed a brilliant solution for AI chatbots designed for psychotherapy. I think it’s also the perfect solution for the overall problem of AI that tricks users into believing it’s sentient.
Gray calls it “deception mode.” His idea is that therapy bots convey no human-like qualities by default, but users can explicitly switch those qualities on by enabling “deception mode.”
Imagine a law that required chatbot companies to turn off all fake-human attributes like empathy, humor, tone personalization, and lies about the chatbot feeling anything, and present the bot as a neutral tool.
The law could allow companies to add a “deception-mode” button. But flipping that switch, which users would have to do explicitly each time they use the chatbot, could turn on all the humanlike qualities.
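As a concrete sketch of how such a default-off toggle might be wired up (every field and function name here is hypothetical; no such API exists), consider:

```python
from dataclasses import dataclass

@dataclass
class ChatbotPersona:
    """Neutral-tool defaults: every human-mimicry feature starts switched off."""
    simulated_empathy: bool = False      # no "I'm genuinely sorry"
    humor_and_sarcasm: bool = False
    tone_personalization: bool = False
    first_person_feelings: bool = False  # no "I feel that way, too"

    def enable_deception_mode(self) -> None:
        """The opt-in is explicit and per-session, and the honest label itself
        reminds the user that the warmth about to appear is just software."""
        print("Entering DECEPTION MODE: human-like behavior is simulated.")
        self.simulated_empathy = True
        self.humor_and_sarcasm = True
        self.tone_personalization = True
        self.first_person_feelings = True

session = ChatbotPersona()        # every new session resets to the neutral tool
session.enable_deception_mode()   # runs only if the user flips the switch
```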
The benefit of “deception mode” is that the user gives informed consent before the deception begins, reminding them of the reality that all those warm, human-like qualities are just so much software.
Even more valuable is calling it “deception mode,” which grounds the user in the reality that the human-sounding attributes are inherently delusional and manipulative — not evidence of consciousness and sentience.
AI is here to stay. And our relationship with it is going to be a strange trip. A growing number of people will be deluded into believing that AI is sentient, and I believe they will eventually become the majority.
This is not good. What we need is clarity over what AI really is, and control over how it behaves. We need “deception mode.”
AI disclosures: I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search to fact-check this article. I used a word processing product called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes.
Here’s why I disclose my AI use and encourage you to do the same.
Truck thieves spoof GPS and confuse onboard navigation. A new technology therefore detects forged satellite signals
Intel will offer glass substrates three years later than planned, in 2029 at the earliest
Games for free or at a discount: Japanese game sales and a free automation strategy game
Are we ready to give AI agents the keys to the cloud? Cloudflare thinks so
Cloudflare is giving AI agents full autonomy to spin up new apps.
Starting today, agents working on behalf of humans can create a Cloudflare account, begin a paid subscription, register a domain, and then receive an API token to let them immediately deploy code.
To kick things off, human users must first accept the cloud company’s terms of service. From there, though, their role in the loop is optional; they don’t have to return to the dashboard, copy and paste API tokens, or enter credit card details. The AI agent just does its thing behind the scenes and has everything it needs to deploy “in one shot,” according to Cloudflare.
While this could be a boon to developers and product builders, it also signals a larger, concerning trend of over-trust in autonomous tools, to the detriment of governance and security.
For example, noted David Shipley of Beauceron Security, cyber criminals are being forced to constantly set up new infrastructure as security firms and law enforcement fight back to block online attacks and scams. “Making it even faster to build new infrastructure and deploy it quickly is a huge win for them,” he said.
Giving agents the OAuth keys
Cloudflare co-designed the new protocol in partnership with Stripe, building upon the Cloudflare Code Mode MCP server and Agent Skills. Any platform with signed-in users can integrate it with “zero friction” for the user, Cloudflare product managers Sid Chatterjee and Brendan Irvine-Broque wrote in a blog post.
The new protocol is part of Stripe Projects (still in beta), which allows humans and their agents to provision multiple services, including AgentMail, Supabase, Hugging Face, Twilio, and a couple of dozen others, generate and store credentials, and manage usage and billing from their command line interface (CLI). An agent is given an initial $100 to spend per month, per provider.
Users need only install the Stripe CLI with the Stripe Projects plugin, log in to Stripe, start a new project, prompt an agent to build something new, and deploy it to a new domain. If their Stripe login email is associated with a Cloudflare account, an OAuth flow will kick off; otherwise, Cloudflare will automatically create an account for the user and their agent.
From there, the autonomous agent will build and deploy a site to a new Cloudflare account, then use the Stripe Projects CLI to register the domain. Once deployed, the app will run on the newly-registered domain.
Along the way, the agent will prompt for input and approval “when necessary,” for instance, when there’s no linked payment method. As Cloudflare notes, the agent goes from “literal zero” to full deployment.
To build momentum, the company is offering $100,000 in Cloudflare credits to startups that make use of the new capability via Stripe Atlas, which helps companies incorporate in Delaware, set up banking, and engage in fundraising.
How the agent takes action
Agents interact with Stripe and Cloudflare in three steps: discovery (the agent calls a command to query the catalog of available services); authorization (the platform validates identity and issues credentials); and payment (the platform provides a payment token that providers use to bill humans when their agents start subscriptions and make purchases).
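Cloudflare hasn’t published client code in exactly this shape, but the three phases might look roughly like the following Python sketch; every function here is a hypothetical stand-in modeling the protocol’s structure, not a real Stripe or Cloudflare SDK call:

```python
# Hypothetical sketch of the three-step flow described above. None of these
# functions come from the actual Stripe or Cloudflare SDKs; they only model
# the shape of the protocol (discovery -> authorization -> payment -> deploy).

def discover_services() -> list[str]:
    """Step 1 - discovery: query the catalog of available providers and
    services; the human needs no prior knowledge of what is on offer."""
    return ["cloudflare:workers", "cloudflare:domains", "supabase:db"]

def authorize_agent(user_email: str) -> dict:
    """Step 2 - authorization: the platform (Stripe acting as identity
    provider) validates the signed-in human and issues agent credentials."""
    return {"subject": user_email, "api_token": "agent-token-placeholder"}

def issue_payment_token(credentials: dict, monthly_cap_usd: int = 100) -> str:
    """Step 3 - payment: a tokenized payment method, capped by default at
    $100/month per provider, lets providers bill the human's account."""
    return f"pay-token:{credentials['subject']}:cap={monthly_cap_usd}"

# One pass through the protocol: the agent now holds everything it needs to
# provision a domain, storage bucket, and sandbox "in one shot".
services = discover_services()
creds = authorize_agent("dev@example.com")
pay_token = issue_payment_token(creds)
```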
Cloudflare emphasizes that this process builds on standards like OAuth, the OpenID Connect (OIDC) identity layer, and payment tokenization, but removes steps that would otherwise require human intervention.
During the discovery phase, agents call the Stripe Projects catalog command, then choose among available services based on human commands and preferences. However, “the user needs no prior knowledge of what services are offered by which providers, and does not need to provide any input,” Chatterjee and Irvine-Broque explained.
From there, Stripe acts as the identity provider, and credentials are securely stored and available for agents that need to make authenticated requests to Cloudflare. Stripe sets a default $100 monthly maximum that an agent can spend on any one provider. Humans can raise this limit and set up budget alerts as required.
The platform, said Cloudflare, acts as the orchestrator for signed-in users. Agents make one API call to provision a domain, storage bucket, and sandbox, then receive an authorization token.
The company argued that the new protocol standardizes what are typically “one off or bespoke” cross-product integrations. It uses OAuth, and extends further into payments and account creation in a way that “treats agents as a first-class concern.”
Concerns around security, operations
The trend of people buying products “wherever they are” will become ever more widespread, noted Shashi Bellamkonda, a principal research director at Info-Tech Research Group.
For instance, Uber has announced an Expedia integration for hotel bookings that will make it an ‘everything app.’ Other vendors are similarly expanding their partner ecosystems, because obtaining customers via other established platforms as well as their own is more cost-efficient, and “generally results in a higher lifetime value,” said Bellamkonda.
“This is Cloudflare turning every partner with signed-in users into a sales channel, and that is how you grow revenue in a developer market,” he said.
Beauceron’s Shipley agreed that Cloudflare is the “big winner” here. “Making it faster for anyone to buy your service and get using it is technology platform Nirvana.”
It’s “super cool, bleeding edge” and in theory, for legitimate developers becomes part of the even more automated build process, he said; “Vibe coders will rejoice.” But, he noted, so will cyber crooks.
Further, Bellamkonda pointed out, from an operational perspective, this could create added complexity for each vendor’s partner network when it comes to transaction execution and accountability. If issues related to provisioning or billing transactions arise, businesses must have a clearly defined process for resolving them with all parties.
“This will require considerable upfront thought on developing these comparatively new business models,” Bellamkonda said.
This article originally appeared on InfoWorld.
The never-ending supply chain attacks worm into SAP npm packages, other dev tools
The wave of supply chain attacks aimed at security and developer tools has washed up more victims, namely SAP and Intercom npm packages, plus the lightning PyPI package.…
How Does Imagination Really Work in the Brain? New Theory Upends What We Knew
Imagination may have more to do with the brain activity it silences than the activity it creates.
Your brain is currently expending about a fifth of your body’s energy, and almost none of that is being used for what you’re doing right now. Reading these words, feeling the weight of your body in a chair—all of this together barely changes the rate at which your brain consumes energy, perhaps by as little as 1 percent.
The other 99 percent is used on the activity the brain generates on its own: neurons (nerve cells) firing and signaling to each other regardless of whether you’re thinking hard, watching television, dreaming, or simply closing your eyes.
Even in the brain areas dedicated to vision, the visuals coming in through your eyes shape the activity of your neurons less than this ongoing internal activity does.
In a paper recently published in Psychological Review, we argue that our imagination sculpts the images we see in our mind’s eye by carving into this background brain activity. In fact, imagination may have more to do with the brain activity it silences than with the activity it creates.
Imagining as Seeing in Reverse
Consider how “seeing” is understood to work. Light enters the eyes and sparks neural signals. These travel through a sequence of brain regions dedicated to vision, each building on the work of the last.
The earliest regions pick out simple features such as edges and lines. The next combine those into shapes. The ones after that recognize objects, and those at the top of the sequence assemble whole faces and scenes.
Neuroscientists call this “feedforward activity”—the gradual transformation of raw light into something you can name, whether it’s a dog, a friend, or both.
In brain science, the standard view is that visual imagination is this original seeing process run in reverse, from within your mind rather than from light entering your eyes.
So, when you hold the face of a friend in mind, you start with an abstract idea of them—a memory or a name, pulled from the filing cabinet of regions that sit beyond the visual system itself.
That idea travels back down through the visual sequence into the early visual areas, which serve as your brain’s workshop where a face would normally be reconstructed from its parts—the curve of a jawline, the specific shade of an eye. These downward signals are called “feedback activity.”
A Signal Through the Static
However, prior research shows this feedback activity doesn’t drive visual neurons to fire in the same way as when you actually see something.
At least in the brain regions early in the vision process, feedback instead modulates brain activity. This means it increases or decreases the activity of the brain cells, reshaping what those neurons are already doing.
Even behind closed eyes, early visual brain areas keep producing shifting patterns of neural activity resembling those the brain uses to process real vision.
Imagination doesn’t need to build a face from scratch. The raw material is already there. In the internal rumblings of your visual areas, fragments of every face you know are drifting through at low volume. Your friend’s face, even now, is passing through in pieces, scattered and unrecognised. What imagining does is hold still the currents that would otherwise carry those pieces away.
All that’s needed is a small, targeted suppression of neurons that are pulled by brain activity in a different direction, and your friend’s face settles out of the noise, like a signal carving its way through static.
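As a loose toy illustration (purely ours for this explanation, not our actual model), a few lines of Python can show how damping the components of background activity that point away from a target pattern lets a weakly present pattern settle out of the noise; all the numbers are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 500

# A stored "face" pattern present at low strength inside spontaneous noise.
face = rng.standard_normal(n_neurons)
background = rng.standard_normal(n_neurons) + 0.3 * face  # mostly noise

def corr_with_face(activity):
    return float(np.corrcoef(activity, face)[0, 1])

def suppress_off_target(activity, damping=0.8):
    """'Imagining' as targeted suppression: quiet the component of ongoing
    activity orthogonal to the target pattern instead of adding new drive."""
    unit = face / np.linalg.norm(face)
    on_target = np.dot(activity, unit) * unit  # projection onto the pattern
    off_target = activity - on_target          # everything else gets damped
    return on_target + (1.0 - damping) * off_target

print(f"before suppression: r = {corr_with_face(background):+.2f}")
print(f"after  suppression: r = {corr_with_face(suppress_off_target(background)):+.2f}")
```

Running this, the correlation with the target pattern jumps substantially even though nothing new was added to the activity; suppression alone did the work.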
Steering the Brain
In mice, artificially switching on as few as 14 neurons in a sensory brain region is enough for the animal to notice it and lick a sugar-water spout in response. This shows how small an intervention in the brain can be while still steering behavior.
While we don’t know how many neurons are needed to steer internal activity into a conscious experience of imagination in humans, growing evidence shows the importance of dampening neural activity.
In our earlier experiments, when people imagined something, the fingerprint it left on their behavior matched suppression of neuronal activity—not firing. Other researchers have since found the same pattern.
Other lines of evidence strengthen our theory, too. About one in 100 people have aphantasia, which means they can’t form mental images at all. One in 30 form these images so vividly they approach the intensity of images we actually see, known as hyperphantasia.
Research has found that people with weaker mental imagery have more excitable early visual areas, where neurons fire more readily on their own. This is consistent with a visual system whose spontaneous patterns are harder to hold in shape.
Taking all this together, the spontaneous activity reshaping hypothesis—our new theory that imagination carves images out of the steady stream of ongoing brain activity—explains why imagination usually feels weaker than sight. It also explains why we rarely lose track of which is which.
Visual perception arrives with a strength and regularity the brain’s own internal patterns don’t match. Imagination works with those patterns rather than against them, reshaping what is already there into something we can almost see.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post How Does Imagination Really Work in the Brain? New Theory Upends What We Knew appeared first on SingularityHub.
A lost galaxy named Loki is apparently hiding inside the Milky Way
The mystery of the Barringer Crater
An ultracold phonon-producing device opens the path to phonon lasers
Nvidia manager: AI is more expensive than real workers
What the Radeon HD 7970 is like more than 14 years after its launch
GCC 16.1