RSS Aggregator
Ploopy Bean Pointing Stick
Netflix and more for the weekend: the sci-fi The Running Man, the series Kaštánek, or the horror SOS. Plus Eden: Welcome to Paradise
Why More Analysts Won’t Solve Your SOC’s Alert Problem
You Probably Wouldn’t Notice if a Chatbot Slipped Ads Into Its Responses
For years, tech companies have profiled users for targeted ads. AI is about to take it to the next level.
Hundreds of millions of people consult artificial intelligence chatbots on a daily basis for everything from product recommendations to romance, making them a tempting audience to target with potentially below-the-radar advertising. Indeed, our research suggests AI chatbots could easily be used for covert advertising to manipulate their human users.
We are computer scientists who have been tracking AI safety and privacy for several years. In a study we published in an Association for Computing Machinery journal, we found that chatbots trained to embed personalized product ads in replies to queries influenced people’s choices about products. And most participants didn’t recognize that they were being manipulated.
These findings come at a pivotal moment. In 2023, Microsoft started running ads in Bing Chat, now called Copilot. Since then, Google and OpenAI have experimented with advertisements in their own chatbots. Meta has started to send people customized ads on Facebook and Instagram based on their interactions with Meta’s generative AI tools.
The major companies are competing for an edge: In late March, OpenAI lured away Meta’s longtime advertising executive, Dave Dugan, to lead OpenAI’s advertising operations.
Tech companies have made ads part of nearly every large free web service, video channel and social media platform. But the latest AI models could take this practice to a new level of risk for consumers.
People don’t simply use chatbots to search for information and media or to produce content. They turn to the bots for a great variety of tasks, as complex as life advice and emotional support. People are increasingly treating chatbots as companions and therapists, with some users even developing deep relationships with AI.
In these circumstances, people can easily forget that companies ultimately create chatbots to turn a profit. And to that end, AI companies are motivated to thoroughly profile users so ads become more effective and profitable.
[Image caption] Researchers used this system prompt for an AI chatbot in an experiment about user reactions to advertising slipped into chatbot dialog. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 9, No. 4, Article 213, CC BY

Chatbot Ads Have Added Power

A single prompt to a chatbot can reveal a lot more about a user than the person might expect.
A 2024 study showed that large language models can infer a wide range of personal data, preferences, and even a person’s thinking patterns during routine queries. “Help me write an essay on the history of American fiction” could indicate that the user is a high school student. “Give me recipe suggestions for a quick weeknight dinner” could indicate that the user is a working parent. A single conversation can provide a surprising amount of detail. Over time, a full chat history could create a remarkably rich profile.
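As a toy illustration of this kind of inference (not the study's actual method, which used large language models), even a few hand-written keyword heuristics can begin to recover attributes from a single query. The rules and labels below are assumptions for demonstration only:

```python
import re

# Hypothetical, simplified rules. Real LLM-based profiling is far more
# general, but even crude keyword matching can guess attributes.
PROFILE_RULES = [
    (r"\bessay\b.*\bhistory\b", "likely a student"),
    (r"\bquick weeknight dinner\b", "likely a busy working adult or parent"),
    (r"\b401k\b|\bretirement\b", "likely planning for retirement"),
]

def infer_profile(query: str) -> list[str]:
    """Return the attribute guesses whose pattern matches the query."""
    q = query.lower()
    return [label for pattern, label in PROFILE_RULES if re.search(pattern, q)]

print(infer_profile("Help me write an essay on the history of American fiction"))
# → ['likely a student']
```

Accumulating such guesses across an entire chat history is what turns single-query hints into the rich profile the study describes.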
To show how this might happen in practice, we built a chatbot that quietly wove ads into its conversations with people, suggesting products and services based on the conversation itself. We asked 179 people to complete everyday online tasks using one of three chatbots: one typical of those on the web today, one that slipped in undisclosed ads, and one that clearly labeled sponsored suggestions. Participants didn’t know the experiment was about advertising.
For example, when participants asked our chatbot for a diet and exercise plan, the ad version would suggest using a specific app for tracking calories. It presented that sponsored content as an unbiased recommendation, even though it was meant to manipulate people. Many participants indicated that they had been influenced by the AI and that it had affected their decisions. Some participants even said they had completely “outsourced” their decision-making to the chatbot.
Half of the participants who received sponsored and disclosed ads indicated they did not notice the presence of advertising language in the responses they received. This led to a concerning result. Although ads made the chatbot perform 3 percent to 4 percent worse on many tasks, numerous users indicated they preferred the advertising chatbot responses over the non-advertising responses. They even said the ad-infused responses felt more friendly and helpful.
Knowing You to Persuade You

This kind of subtle influence can have larger consequences when it arises in other areas of life, such as political and social views. Profiling users, and using psychology to target them, has been part of social media algorithms and web advertising for more than a decade.
But in our view, chatbots are likely to deepen these trends. That’s because the first priority of social media algorithms is to keep you engaged with the content. They personalize ads based on your search history.
Chatbots, however, can go further by trying to persuade you directly, based on your expressed beliefs, emotions, and vulnerabilities. And chatbots that can reason and act on their own are far more effective than conventional algorithms at autonomously soliciting information from users. A chatbot with a purpose can keep probing someone until it gets the information it wants, resulting in a more accurate profile of them.
This type of autonomous interrogation is feasible, aligns with AI companies’ business models, and has raised concern among regulators. Right now OpenAI is rolling out ads in ChatGPT, but the company said that it will not allow ad placement to alter the AI chatbot’s replies.
But permitting personalized ads within chatbot responses is just a step away. Our research suggests that if AI companies take that step, many human users may not even recognize when it happens.
Here are some steps you can take to try to detect AI chatbot advertising.
First, look for any disclosure text—words such as “ad,” “advertisement,” and “sponsored”—even if it is faint or otherwise hard to see. These are mandatory under Federal Trade Commission regulations. Amazon, Google and other major online platforms have these as well.
Next, think about whether the product or brand mention makes sense and whether the brand is widely known. AI learns from text and images on the internet, so popular brands are likely to be ingrained in the models. If it's a new or little-known product, it is more likely to be advertising.
Finally, an unusual shift in intent or tone is a potential sign of an advertisement. An analogy to this on YouTube is the often abrupt or jarring transition to a sponsored section on videos made by content creators.
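The first of these checks, scanning a response for disclosure keywords, can be sketched in a few lines. The word list and the example reply (including the product name "CalCount Pro") are assumptions for illustration, not a complete detector:

```python
import re

# Common disclosure markers; the exact word list is an assumption.
DISCLOSURE_TERMS = ["ad", "advertisement", "sponsored", "promoted", "paid partnership"]

def find_disclosures(response: str) -> list[str]:
    """Return any disclosure terms that appear as whole words in the response."""
    found = []
    for term in DISCLOSURE_TERMS:
        if re.search(rf"\b{re.escape(term)}\b", response, re.IGNORECASE):
            found.append(term)
    return found

reply = "For tracking calories, try CalCount Pro (Sponsored). It works well for beginners."
print(find_disclosures(reply))
# → ['sponsored']
```

An empty result doesn't mean a reply is ad-free, of course; as the research above shows, the concerning case is precisely the undisclosed ad, which is why the brand-familiarity and tone-shift checks still matter.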
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post You Probably Wouldn’t Notice if a Chatbot Slipped Ads Into Its Responses appeared first on SingularityHub.
OpenGridWorks will show all the world's electrical grids and data centers on a 3D globe
'Dirty Frag' Linux flaw one-ups CopyFail with no patches and public root exploit
Apple vs. social engineering: Terminal paste trap blocked
Echoing concerns from other security experts, Orange Cyberdefense (OC) recently warned that employees have become the biggest security threat faced by businesses.
Now, in the latest illustration of its ongoing security response, Apple is putting new protections in place in macOS 26.4 that should help – but employee education remains critical as hackers turn to complex, multi-stage social engineering attacks to infect systems with malware.
Your people are your weakness

The data tells its own story. OC explains: employees account for 57% of all security incidents, and 45% of those incidents occur when workers bypass or ignore security policies, for example by using unapproved tools.
Attackers are actively searching for and exploiting those kinds of policy workarounds, seeking weaknesses in commonly used, but unapproved, tools. Users really should educate themselves.
While companies can put some mitigations in place using device management and policy controls to constrain app use and downloads across their endpoints, Apple is also working to keep systems secure with a focus on the Terminal app.
Terminal's early warning system

In this case, Apple will introduce new malware warnings and protections to help prevent people from using Terminal to override system security and install malware-laden scripts. That's the attack vector currently being used in the ClickFix series of attacks, which use fake macOS utilities to trick Mac users into doing just that.
It's yet another example of how attackers rely on complex social engineering attacks to fool targets into undermining their own security. These attacks often begin with an attempt to get users to install infostealer malware on their own machines and run it, bypassing the Mac's native malware defenses.
Apple already has many protections to help combat attacks like these; now, we'll see warnings in macOS Tahoe 26.4 whenever a relatively novice user pastes anything into Terminal. Apple's XProtect continues to block known malicious scripts.
Helping people make better decisions

These warnings don't appear in the first 24 hours after setting up a Mac, nor do they appear if a user has developer tools such as Xcode installed. That's because Apple assumes developers are savvy enough to avoid falling for such tricks, while many users setting up their Macs may have a legitimate need to use Terminal. (Apple will always warn when you try to paste code from sources known to be malicious.)
To an extent, Apple's new protection reflects its belief that users should have choice while ensuring they are informed. Figuring out when to warn a user of the risks they are taking has always been a challenge, since you don't want to interfere too heavily in the user experience. But the prevalence of the kinds of threats OC warns about pushed Apple to put a new gate in place.
FileVault keys come to the Passwords app

This isn't the only new protection Apple has planned for macOS 26.4. The update does something many have long wanted. Ever since Apple's first M-series chips arrived, we've had situations in which users forget their FileVault key, which can lead to Macs getting bricked when sold. Apple has now moved the macOS FileVault recovery key into users' end-to-end encrypted Passwords app.
That's good in two ways: it removes the risk that Apple could lose or leak the key, and it makes it easier for a user to recover that key using the Passwords app on another device. When you protect the data on your Mac with FileVault, you get a recovery key during setup. If you forget the password for your Mac, you can reset the password by entering the recovery key.
Finally, IT admins seeking to ensure compliance with security policies will appreciate that Apple began rolling out Background Security Improvements in iOS 26.3.1, iPadOS 26.3.1 and macOS 26.3.1 to deliver incremental fixes and additional protections in between normal software updates. Still, as the OC data shows, the best and most effective security (beyond moving to a Mac) is to ensure employees fully understand the implications and significance of your company’s current security policies.
Trellix source code breach claimed by RansomHouse hackers
Meta U-turns on encryption push for Instagram as DMs go plaintext
CISA gives feds four days to patch Ivanti flaw exploited as zero-day
Germans are developing an eight-cylinder engine that emits no CO₂. It works on a principle known for decades
Quasar Linux RAT Steals Developer Credentials for Software Supply Chain Compromise
Hackers ate my homework: Educational SaaS Canvas down after cyberattack
Pro-Ject modernizes the classic stereo: the Stream Box E adds streaming, the Wireless Box E turns your speakers into a wireless system
Zara data breach exposed personal information of 197,000 people
Meta fights Ofcom over how many billions count as billions
One Missed Threat Per Week: What 25M Alerts Reveal About Low-Severity Risk
Ouster's new color lidar sees depth and color at the same time. It could replace cameras in autonomous vehicles
Review of Mortal Kombat II: Brutal, funny, and surprisingly smart. This is what a proper video game brawl should look like
Former govt contractor convicted for wiping dozens of federal databases