No application can eliminate human error: Signal’s head defends the app
When the editor-in-chief of The Atlantic, Jeffrey Goldberg, was accidentally added to a Signal conversation, things took a surprising turn. The journalist initially doubted the authenticity of the invitation, but the chat, apparently involving high-ranking US politicians and government officials, discussed specific targets for attacking Houthi forces in Yemen — and a few hours later, airstrikes did indeed take place.
Given the nature of the information exchanged, his doubts were heightened both by the fact that top-secret plans were being discussed in an app not designed to transmit classified data, and by the free-form remarks of the politicians, including Vice President J.D. Vance. The messages even included emoji celebrating the operation once it was carried out.
The editor-in-chief of The Atlantic reacted
Goldberg refrained from publishing details about specific targets and weaponry in his article about the chat, fearing that the safety of those involved would be compromised.
His account of the leaked exchange shows that Vance was critical of President Donald Trump's decision to carry out the attacks, arguing that their effects could benefit Europe more than the United States.
The event instantly sparked a wave of discussion about security rules and possible violations of laws protecting classified information. Legal experts pointed out that transmitting secret data in this way could violate at least the Espionage Act, especially if the app’s configuration provides for automatic deletion of messages.
Trump, however, defended the use of Signal, explaining that access to secure devices and premises is not always possible at short notice.
Meredith Whittaker defends Signal app
Signal’s CEO, Meredith Whittaker, defended the app in an interview with Polish media, stressing that Signal maintains full end-to-end encryption and prioritizes user privacy.
She pointed out that while WhatsApp also uses encryption technologies designed by Signal, it does not protect metadata to the same extent and does not guarantee such a strict policy against collecting or sharing user information.
At the same time, Whittaker pointed out that no application can eliminate human error. Accidentally inviting a journalist to a government chat is precisely the kind of risk that technological measures alone cannot exclude.
(This story was originally published by Computerworld Poland.)
Microsoft fixes printing issues caused by January Windows updates
RedCurl cyberspies create ransomware to encrypt Hyper-V servers
EncryptHub Exploits Windows Zero-Day to Deploy Rhadamanthys and StealC Malware
RedCurl Shifts from Espionage to Ransomware with First-Ever QWCrypt Deployment
Microsoft: Recent Windows updates cause Remote Desktop issues
Microsoft’s newest AI agents can detail how they reason
If you’re wondering how AI agents work, Microsoft’s new Copilot AI agents provide real-time answers on how data is being analyzed and sourced to reach results.
The Researcher and Analyst agents, announced on Tuesday, take a deeper look at data sources such as email, chat or databases within an organization to produce research reports, analyze strategies, or convert raw information into meaningful data.
In the process, the agents give users a bird's-eye view of each step of how they're thinking and analyzing data to formulate answers. The agents are integrated with Microsoft 365 Copilot.
The agents combine Microsoft tools with OpenAI’s newer models, which don’t answer questions right away, but can reason better. The models think deeper by generating additional tokens or drawing more information from outside sources before coming up with an answer.
The Researcher agent takes OpenAI’s reasoning models, checks the efficacy of the model, pokes around by pulling data from sources via Microsoft orchestrators and then builds up the level of confidence in the retrieval and results phases, according to information provided by Microsoft.
A demonstration video provided by Microsoft shows the Copilot chatbot interface publishing its “chain of thought” — for example, the step-by-step process of searching enterprise and domain data, identifying product lines, opportunities and more — with the ultimate output being the final document.
The approach is a major benefit for Microsoft since most models operate as a black box, said Jack Gold, principal analyst at J. Gold Associates.
Accountability and the ability to see how models are getting their results are important to assure users that the technology is safe, effective and not hallucinating, Gold said.
“Much of AI today is a ‘black hole’ when it comes to being able to figure out how it got to its results — most cite references, but not the logic on how they got to the end result,” Gold said. “Any transparency you can offer is about making users feel more comfortable.”
The Copilot Researcher agent can take a deeper look at internal data to develop business strategies or identify unexplored market opportunities — typical researcher tasks. It provides the kind of highly technical research and strategy work you'd expect to pay a highly skilled consultant, researcher, or analyst for, a Microsoft spokeswoman said.
“Its ability to combine a user’s work data and web data means its responses are current, but also contextually relevant to every user’s personal needs,” the spokeswoman said.
For example, within the Researcher agent, a user can query the chatbot on exploring new business opportunities. In the process of analyzing data, the agent shares how the model is approaching the query. It will ask clarifying questions, publish a plan to reach an answer, show the data sources it is drawing information from, and explain how the data is collated, categorized, and analyzed.
The Analyst agent takes raw data and generates insights — typically the job of a data scientist. The tool is designed for workers using data to derive insights and make decisions without knowledge of advanced data analysis like Python coding, the spokeswoman said.
For example, the Analyst agent can take a spreadsheet with charts of unstructured data and share insights. Similar to the Researcher agent, the Analyst agent takes in a question via the Copilot interface, creates a plan to analyze the data, and determines the Python tools to generate insights. The agent shares its step-by-step process of how it is responding to the query and even shares the Python code used to generate the answer.
Microsoft has had a number of documented “misses” related to problematic generative AI (genAI) tools, such as Windows Recall, a Copilot feature that uses snapshots to log the history of activity on a PC, Gold said.
Giving users a sense of security is beneficial to getting users to try Copilot, Gold said. “Think of it as having the safest car on the road when you go to select a new car for your family,” he said.
Malicious npm Package Modifies Local 'ethers' Library to Launch Reverse Shell Attacks
New npm attack poisons local packages with backdoors
Sparring in the Cyber Ring: Using Automated Pentesting to Build Resilience
Zero-Day Alert: Google Releases Chrome Patch for Exploit Used in Russian Espionage Attacks
How PAM Mitigates Insider Threats: Preventing Data Breaches, Privilege Misuse, and More
Will Microsoft be laid low by the feds’ antitrust probe?
Microsoft is on top of the world right now, riding its AI dominance to become the world’s second-most valuable company, worth somewhere in the vicinity of $3 trillion, depending on the day’s stock price.
But that could easily change — and not because competitors have found a way to topple it as king of AI.
A federal antitrust investigation threatens to do to the company what was done to it 35 years ago by a US Justice Department suit that tumbled the company from its perch as the world’s top tech company. It also led to a lost decade in which Microsoft lagged in the technologies that would transform the world — the internet and the rise of mobile.
The current investigation was launched last year by the Federal Trade Commission (FTC) under the leadership of Chair Lina Khan. Khan was ousted by President Donald J. Trump when he re-took office in January, and there’s been a great deal of speculation about whether his administration would kill the investigation or let it proceed.
That speculation ended this month, when new FTC Chair Andrew Ferguson asked the company for a boatload of information about its AI operations dating back to 2016, including detailed requests about its training models and how it acquires the data for them.
The investigation isn’t just about AI. It also covers Microsoft’s cloud operations, cybersecurity efforts, productivity software, Teams, licensing practices, and more. In other words, just about every important part of the company.
More details about the investigation
Although the investigation is a broad one, the most consequential parts focus on the cloud, AI, and the company’s productivity suite, Microsoft 365. It will probably dig deep into the way Microsoft uses its licensing practices to push or force businesses to use multiple Microsoft products.
Here’s how The New York Times describes it: “Of particular interest to the FTC is the way Microsoft bundles its cloud computing offerings with office and security products.”
The newspaper claims the investigation is looking at how Microsoft locks customers into using its cloud services “by changing the terms under which customers could use products like Office. If the customers wanted to use another cloud provider instead of Microsoft, they had to buy additional software licenses and effectively pay a penalty.”
That’s long been a complaint about the way the company does business. European Union regulators last summer charged that Microsoft broke antitrust laws by the way it bundles Teams into its Microsoft 365 productivity suite. Teams’ rivals like Zoom and Slack don’t have the ability to be bundled like that, the EU says, giving Microsoft an unfair advantage. Microsoft began offering some versions of the suite without Teams, but an EU statement about the suit says the EU “preliminarily finds that these changes are insufficient to address its concerns and that more changes to Microsoft’s conduct are necessary to restore competition.”
AI is a target, too
Microsoft’s AI business is also in the legal crosshairs, though very few details have come out about it. However, at least part of the probe will likely center on whether Microsoft’s close relationship with OpenAI violates antitrust laws by giving the company an unfair market dominance.
The investigation could also focus on whether Microsoft uses its licensing practices for Microsoft 365 and Copilot, its generative AI chatbot, in ways that violate antitrust laws. In a recent column, I wrote that Microsoft now forces customers of the consumer version of Microsoft 365 to pay an additional fee for Copilot — even if they don’t want it. In January, Microsoft bundled Copilot into the consumer version of Microsoft 365 and raised prices on the suite by $3 per month or $30 for the year. Consumers are given no choice — if they want Microsoft 365, they’ll have to pay for Copilot, whether they use it or not.
Microsoft also killed two useful features in all versions of Microsoft 365, for consumers as well as businesses, and did it in a way to force businesses to subscribe to Copilot. The features allowed users to do highly targeted searches from within the suite. Microsoft said people could instead use Copilot to do that kind of searching. (In fact, Copilot can’t match the features Microsoft killed.) But business and educational Microsoft 365 users don’t get Copilot bundled in, so they’ll have to pay an additional $30 per user per month if they want the search features, approximately doubling the cost of the Office suite.
Expect the feds to file suit
It’s almost certain that the FTC, the Justice Department, or maybe both will file at least one suit against Microsoft. After all, federal lawsuits against Amazon, Apple, Google, and Meta launched by the Biden administration have been continued under Trump. There’s no reason to expect he won’t target Microsoft as well.
There’s another reason the feds could hit Microsoft hard. Elon Musk is suing OpenAI and Microsoft, claiming their relationship violates antitrust laws. He’s also spending billions to compete against them. Given that he’s essentially Trump’s co-president — as well as being Trump’s most important tech advisor — it’s pretty much a slam dunk that one more federal suit will be filed.
As one piece of evidence that suits are coming, the FTC weighed in on Musk’s side in his suit against the company and OpenAI, saying antitrust laws support his claims. In a wink-wink, nudge-nudge claim that no one believes, the agency says it’s not taking sides in the Musk lawsuit.
The upshot
Expect the investigations into Microsoft to culminate in one or more suits filed against the company. After that, it’s anyone’s guess what might happen. The government could ask that Microsoft be broken into pieces — perhaps lopping off its AI arm. It could even ask that the cloud as well as AI be turned into their own businesses. Or it could go a softer route by fining the company billions of dollars and forcing it to change its business practices.
Whatever happens, hard times are likely ahead for Microsoft. The big question will be whether CEO Satya Nadella can weather the turbulence better than Bill Gates and Steve Ballmer did when the previous federal suit against the company laid it low for a decade.
The secret to using generative AI effectively
Do you think generative AI (genAI) sucks? I did. The hype around everything genAI has been over the top and ridiculous for a while now. Especially at the start, most of the tools were flashy, but quickly fell apart if you tried to use them for any serious work purposes.
When ChatGPT started really growing in early 2023, I turned against it hard. It wasn’t just a potentially interesting research product. It was a bad concept getting shoved into everything.
Corporate layoffs driven by executives who loved the idea of replacing people with unreliable robots hurt a lot of workers. They hurt a lot of businesses, too. With the benefit of hindsight, we can now all agree: genAI, in its original incarnation, just wasn’t working.
At the end of 2023, I wrote about Microsoft’s then-new Copilot AI chatbot and summed it up as “a storyteller — a chaotic creative engine that’s been pressed into service as a buttoned-up virtual assistant, [with] the seams always showing.”
You’d probably use it wrong, as I noted at the time. Even if you used it right, it wasn’t all that great. It felt like using a smarter autocomplete.
Much has changed. At this point in 2025, genAI tools can actually be useful — but only if you use them right. And after much experimentation and contemplation, I think I’ve found the secret.
Ready to turn up your Windows productivity — with and without AI? Sign up for my free Windows Intelligence newsletter. I’ll send you free Windows Field Guides as a special welcome bonus!
The power of your internal dialogue
So here it is: To get the best possible results from genAI, you must externalize your internal dialogue. Plain and simple, AI models work best when you give them more information and context.
It’s a shift from the way we’re accustomed to thinking about these sorts of interactions, but it isn’t without precedent. When Google itself first launched, people often wanted to type questions at it — to spell out long, winding sentences. That wasn’t how to use the search engine most effectively, though. Google search queries needed to be stripped to the minimum number of words.
GenAI is exactly the opposite. You need to give the AI as much detail as possible. If you start a new chat and type a single-sentence question, you’re not going to get a very deep or interesting response.
To put it simply: You shouldn’t be prompting genAI like it’s still 2023. You aren’t performing a web search. You aren’t asking a question.
Instead, you need to be thinking out loud. You need to iterate with a bit of back and forth. You need to provide a lot of detail, see what the system tells you — then pick out something that is interesting to you, drill down on that, and keep going.
You are co-discovering things, in a sense. GenAI is best thought of as a brainstorming partner. Did it miss something? Tell it — maybe you’re missing something and it can surface it for you. The more you do this, the better the responses will get.
It’s actually the easiest thing in the world. But it’s also one of the hardest mental shifts to make.
Let’s take a simple example: You’re trying to remember a word, and it’s on the tip of your tongue. You can’t quite remember it, but you can vaguely describe it. If you were using Google to find the word, you’d have to really think about how to craft the perfect search term.
In that same scenario, you could rely on AI with a somewhat rambling, conversational prompt like this:
“What’s the word for a soft kind of feeling you get — it’s warm, but a little cold. It’s sad, but that’s not quite right. You miss something, but you’re happy you miss it. It’s not melancholy, that’s wrong, that’s too sad. I don’t know. It reminds me of walking home from school on a sunny fall afternoon. The sun is setting and you know it will be winter soon, and you miss summer, and you know it’s over, but you’re happy it happened.”
And the genAI might respond: wistful. That’s your answer. More likely, the tool will return a list of possible words. It might not magically know you meant wistful right away — but you will know the moment you see the word within its suggestions.
This is admittedly an overwrought example. A shorter description of the word — “it’s kind of like this, and it’s kind of like that” — would also likely do the trick.
Ramble on
The best way to sum up this strategy is simple: You need to ramble.
Try this, as an experiment: Open up the ChatGPT app on your Android or iOS phone and tap the microphone button at the right side of the chat box. Make sure you’re using the microphone button and not the voice chat mode button, which does not let you do this properly.
(Amusingly enough, the ChatGPT Windows app doesn’t support this style of voice input, and Microsoft’s Copilot app doesn’t, either. This shows that the companies building this type of product don’t really understand how it’s best used. If you want to ramble with your voice, you’ll need to use your phone — or ramble by typing on your keyboard.)
After you tap the microphone button, ramble at your phone in a stream-of-consciousness style. Let’s say you want TV show recommendations. Ramble about the shows you like, what you think of them, what parts you like. Ramble about other things you like that might be relevant — or that might not seem relevant! Think out loud. Seriously — talk for a full minute or two. When you’re done, tap the microphone button once more. Your rambling will now be text in the box. Your “ums” and speech quirks will be in there, forming extra context about the way you were thinking. Do not bother reading it — if there are typos, the AI will figure it out. Click send. See what happens.
Just be prepared for the fact that ChatGPT (or other tools) won’t give you a single streamlined answer. It will riff off what you said and give you something to think about. You can then seize on what you think is interesting — when you read the response, you will be drawn to certain things. Drill down, ask questions, share your thoughts. Keep using the voice input if it helps. It’s convenient and helps you really get into a stream-of-consciousness rambling state.
Did the response you got fail to deliver what you needed? Tell it. Say you were disappointed because you were expecting something else. Say you’ve already watched all those shows and you didn’t like them. That is extra context. Keep drilling down.
You don’t have to use voice input, necessarily. But, if you’re typing, you need to type like you’re talking to yourself — with an inner dialogue, stream-of-consciousness style, as if you were speaking out loud. If you say something that isn’t quite right, don’t hit backspace. Keep going. Say: “That wasn’t quite right — I actually meant something more like this other thing.”
The beauty of back-and-forth
Let’s say you want to use genAI to brainstorm the perfect marketing tagline for a campaign. You’d start by rambling about your project, or maybe just speaking a shorter prompt. Ask for a bunch of rough ideas so you can start contemplating and take it from there.
But then, critically, you keep going. You say you like a few ideas in particular and want to go more in that direction. You get some more possibilities back. You keep going, on and on — “Well, I like the third one, but I think it needs more of [something], and the sixth one is all right but [something else].” Keep talking, postulating, refining, following paths of concepts to something that feels more right to you.
If the tool doesn’t seem to be on the right wavelength, don’t get frustrated and back out. Tell it: “No, you don’t understand. This is for a major clothing company. I need it to sound professional but also catch people’s eyes. That’s why your suggestions are all too much.”
Just as the long stream-of-consciousness ramble lays down context that pushes genAI in a useful direction, this back-and-forth builds context as groundwork. Your entire conversation up to that point forms the scaffolding of the exchange and shapes the future responses in the thread. As you keep adding to and continuing the conversation, you can make it more attuned to what you’re looking for.
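The mechanics behind this are worth seeing concretely. Chat-style AI tools typically resend the entire message history to the model on every turn, which is why earlier rambling and corrections keep shaping later replies. Here is a minimal Python sketch of that loop; `call_model` is a stand-in for a real chat API call, not any particular vendor's SDK.

```python
def call_model(messages):
    # Placeholder: a real implementation would send `messages` to a
    # chat-completion API and return the assistant's reply text.
    return f"(reply informed by {len(messages)} prior messages)"

def chat_turn(history, user_text):
    # Append the new user message, then send the WHOLE history --
    # system prompt, earlier rambles, replies, and corrections included.
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a brainstorming partner."}]
chat_turn(history, "Rambling first draft of what I want...")
chat_turn(history, "No, too flashy. This is for a major clothing brand.")
# By the second turn, the model sees four accumulated messages of
# context before generating its answer, not just the latest correction.
```

Every correction you make ("no, too flashy") becomes a permanent part of that scaffolding for the rest of the thread, which is why drilling down works better than starting a fresh chat.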
Crucially, genAI is not making decisions. You are making all the decisions. You are exercising the taste. You can push it in this or that direction to get ideas. If it lands on something you disagree with, you can push back: “No, that’s not right at all. We really got off track. How about…?”
Is this silly? Well, brainstorming doesn’t normally mean sitting in an empty room meditating while staring at paint drying. It often means searching Google, seeing what other people say, poking around for inspiration. This can be similar — but faster.
Maybe you still use Google for brainstorming sometimes — or go for a walk and be alone with your thoughts! That’s fine, too. GenAI is meant to be another tool in your toolbox. It isn’t meant to be the end-all answer.
The bigger AI picture
To be clear: I’m not here to sell you on the idea of embracing genAI. I’m here to tell you that companies peddling these tools right now are selling you the wrong thing. The way they talk about the technology is not how you should use it. It’s no wonder so many smart people are bouncing off it and being rightfully critical of what we’re being sold.
GenAI should not be a replacement for thinking. More than anything, it is a tool for exploring concepts and the connections between them. You can use it to write a better email. You can use it to put together a marketing plan. It will do things you don’t expect, and that’s the point.
Yes, it might hallucinate and make things up. (That’s why you need to keep your brain engaged.) You might want to just opt out. You might decide to keep plugging away looking for answers. Just remember: If you’re using genAI, try to use it to be more human, not less. That will help you write better emails — and accomplish much more beyond that.
Let’s stay in touch! Sign up for my free Windows Intelligence newsletter today. I’ll send you three new things to try each Friday.
