Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

Asana warns MCP AI feature exposed customer data to other orgs

Bleeping Computer - 18 June, 2025 - 09:16
Work management platform Asana is warning users of its new Model Context Protocol (MCP) feature that a flaw in its implementation potentially led to data exposure from their instances to other users and vice versa. [...]
Category: Hacking & Security

Paddle settles for $5 million over facilitating tech support scams

Bleeping Computer - 17 June, 2025 - 23:14
Paddle.com and its U.S. subsidiary will pay $5 million to settle Federal Trade Commission (FTC) allegations that the company facilitated deceptive tech-support schemes that harmed many U.S. consumers, including older adults. [...]
Category: Hacking & Security

Scania confirms insurance claim data breach in extortion attempt

Bleeping Computer - 17 June, 2025 - 21:04
Automotive giant Scania confirmed it suffered a cybersecurity incident where threat actors used compromised credentials to breach its systems and steal insurance claim documents. [...]
Category: Hacking & Security

Instagram ads mimicking BMO, EQ Bank are finance scams

Bleeping Computer - 17 June, 2025 - 18:52
Instagram ads impersonating financial institutions like Bank of Montreal (BMO) and EQ Bank (Equitable Bank) are being used to target Canadian consumers with phishing scams and investment fraud. Some ads use AI-powered deepfake videos in an attempt to collect your personal information, while others drive traffic to phishing pages. [...]
Category: Hacking & Security


New Veeam RCE flaw lets domain users hack backup servers

Bleeping Computer - 17 June, 2025 - 17:42
Veeam has released security updates today to fix several Veeam Backup & Replication (VBR) flaws, including a critical remote code execution (RCE) vulnerability. [...]
Category: Hacking & Security

Sitecore CMS exploit chain starts with hardcoded 'b' password

Bleeping Computer - 17 June, 2025 - 17:10
A chain of Sitecore Experience Platform (XP) vulnerabilities allows attackers to perform remote code execution (RCE) without authentication to breach and hijack servers. [...]
Category: Hacking & Security

UK fines 23andMe for ‘profoundly damaging’ breach exposing genetics data

Bleeping Computer - 17 June, 2025 - 16:59
The UK Information Commissioner's Office (ICO) has fined genetic testing provider 23andMe £2.31 million ($3.12 million) over 'serious security failings' that led to a 'profoundly damaging' data breach in 2023. [...]
Category: Hacking & Security

Microsoft fixes Surface Hub boot issues with emergency update

Bleeping Computer - 17 June, 2025 - 16:06
Microsoft has released an emergency update to fix a known issue causing startup failures for some Surface Hub v1 devices running Windows 10. [...]
Category: Hacking & Security

How to automate IT ticket handling with AI and Tines

Bleeping Computer - 17 June, 2025 - 16:01
Tired of drowning in IT tickets? This AI-powered workflow built on Tines auto-triages common issues like known bugs & password resets—saving time for your team and speeding up resolution. Learn more about Tines and get a free account now. [...]
Category: Hacking & Security

Hacker steals 1 million Cock.li user records in webmail data breach

Bleeping Computer - 17 June, 2025 - 15:50
Email hosting provider Cock.li has confirmed it suffered a data breach after threat actors exploited flaws in its now-retired Roundcube webmail platform to steal over a million user records. [...]
Category: Hacking & Security

Why Apple’s Foundation Models Framework matters

Computerworld.com [Hacking News] - 17 June, 2025 - 15:24

Look, it’s not just about Siri and ChatGPT; artificial intelligence will drive future tech experiences and should be seen as a utility. That’s the strategic imperative driving Apple’s WWDC introduction of the Foundation Models Framework for its operating systems. The framework is a set of tools that lets developers use Apple’s own on-device AI large language models (LLMs) in their apps. It was one of a host of developer-focused improvements the company discussed last week.

The idea is that developers will be able to use the models with as little as three lines of code. So, if you want to build a universal CMS editor for iPad, you can add Writing Tools and translation services to your app to help writers generate better copy for use across an international network of language sites.

Better yet, when you build that app, or any other app, Apple won’t charge you for access to its core Apple Intelligence models – which themselves operate on the device. That’s great, as it means developers can, at no charge, deliver what will over time become an extensive suite of AI features within their apps while also protecting user privacy.

What are Foundation Models?

In a note on its developer website, Apple tells us the models it has made available in the Foundation Models Framework are particularly good at text-generation tasks such as summarization, “entity extraction,” text understanding, refinement, dialogue for games, creative content generation, and more.

You get:

  • Apple Intelligence tools as a service for use in apps.
  • Privacy, as all data stays on the device.
  • The ability to work offline because processing takes place on the device.
  • Small apps, since the LLM is built into the OS.

Apple has also made solid decisions in how it has built Foundation Models. Guided Generation, for example, works to ensure the LLM provides consistently structured responses for use within the apps you build, rather than the messy output many LLMs generate; Apple’s framework is also able to provide complex responses in a more usable format.
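
The underlying idea behind guided generation (declare the structure you expect, and accept only output that conforms to it) can be illustrated in language-agnostic terms. This is a hypothetical Python sketch of that pattern, not Apple’s Swift API; all names here are invented:

```python
from dataclasses import dataclass
import json

@dataclass
class Summary:
    title: str
    bullet_points: list

def parse_guided(raw: str) -> Summary:
    """Accept model output only if it matches the declared structure."""
    data = json.loads(raw)  # rejects free-form prose outright (raises ValueError)
    if not isinstance(data.get("title"), str):
        raise ValueError("missing or malformed 'title'")
    points = data.get("bullet_points")
    if not (isinstance(points, list) and all(isinstance(p, str) for p in points)):
        raise ValueError("missing or malformed 'bullet_points'")
    return Summary(title=data["title"], bullet_points=points)

# A structured response parses into a typed value; messy prose raises instead.
ok = parse_guided('{"title": "WWDC recap", "bullet_points": ["on-device", "free"]}')
```

The point of the pattern is that the app always receives a typed value, never raw model text it has to clean up.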

Finally, Apple said it is possible to give the Apple Intelligence LLM access to tools other than your own. Dev magazine explains that “tool calling” means you can instruct the LLM when it needs to work with an external tool to bring in information, such as up-to-the-minute weather reporting. That can also extend to actions, such as booking trips.

This kind of access to real information helps keep the LLM grounded, preventing it from falling back on fabricated data to complete its task. The company has also figured out how to make apps remember AI conversations, which means you can engage in extended, multi-turn sessions rather than single-use requests. To stimulate development using Foundation Models, Apple has built in support for doing so inside Xcode Playgrounds.
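
The tool-calling flow described above boils down to a dispatch step on the host side: the model emits a tool name plus arguments, and the host runs the matching function and feeds the result back. A minimal, hypothetical sketch (this is not Apple’s Swift API; the registry and the weather stub are invented for illustration):

```python
# Hypothetical tool registry; the lambda stands in for a live weather API.
TOOLS = {
    "weather": lambda city: {"city": city, "forecast": "sunny"},
}

def handle_tool_call(call: dict) -> dict:
    """Dispatch a model-issued tool call to the matching host-side function."""
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        raise KeyError(f"model requested unknown tool: {name}")
    return TOOLS[name](**args)

# The model asks for a tool by name; the host executes it and returns the result.
result = handle_tool_call({"name": "weather", "arguments": {"city": "Cupertino"}})
```

Booking a trip, in this scheme, is just another registered tool whose function performs an action instead of a lookup.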

Walking toward the horizon

Unless you’ve spent the last 12 months locked away from all communications on some form of religious retreat to promote world peace (in which case, I think you should have prayed harder), you’ll know Apple Intelligence has its critics. Most of that criticism is based on the idea that Apple Intelligence needs to be a smart chatbot like ChatGPT (and it isn’t at all unfair to castigate Siri for being a shadow of what it was intended to be). 

But that focus on Siri skips the more substantial value released when using LLMs for specific tasks, such as those Writing Tools I mentioned. Yes, Siri sucks a little (but will improve) and Apple Intelligence development has been an embarrassment to the company. But that doesn’t mean everything about Apple’s AI is poor, nor does it mean it won’t get better over time.

What Apple understands is that by making those AI models accessible to developers and third-party apps, it is empowering those who can’t afford fee-based LLMs to get creative with AI. That’s quite a big deal, one that could be considered an “iPhone moment,” or at least an “App Store moment,” in its own right, and it should enable a lot of experimentation.

“We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day,” Craig Federighi, Apple senior vice president for software engineering, said at WWDC. “We can’t wait to see what developers create.”

What we need

We need that experimentation. For good or ill, we know AI is going to be everywhere, and whether you are comfortable with that truth is less important than figuring out how to best position yourself to be resilient to that reality.

Enabling developers to build AI inside their apps easily and at no cost means they will be able to experiment and, hopefully, forge their own path. It also means Apple has dramatically lowered the barrier to entry for AI development on its platforms, even while it works urgently to expand the AI models it provides within Apple Intelligence. As it introduces new foundation models, developers will be able to use them, enabling further experimentation.

With the cost to privacy and cost of entry set to zero, Foundation Models change the argument around AI on Apple’s platforms. It’s not just about a smarter Siri, it is about a smarter ecosystem — one that Apple hopes developers will help it build, one AI-enabled app at a time.

The Foundation Models Framework is already available to developers for beta testing, with public betas shipping alongside the operating systems in July.


Category: Hacking & Security

Windows 11 adds drag-and-drop sharing, Windows 10 gets seconds in the taskbar calendar

Zive.cz - bezpečnost - 17 June, 2025 - 14:45
Microsoft has released the June servicing updates for Windows, Office, and related products. Windows 11 gains a number of new features, including drag-and-drop sharing. Windows 10 receives a few tweaks to the taskbar calendar.
Category: Hacking & Security

Global Microsoft 365 outage disrupts Teams and Exchange services

Computerworld.com [Hacking News] - 17 June, 2025 - 12:25

Microsoft experienced a significant service disruption across its Microsoft 365 services on Monday, affecting core applications including Microsoft Teams and Exchange Online. The outage left users globally unable to access collaboration and communication tools critical to consumers as well as enterprise workflows.

In a series of updates posted on X through the official account of Microsoft 365 Status, Microsoft acknowledged the incident and confirmed that it was actively investigating user reports of service impact. The incident was tracked under the identifier MO1096211 in the Microsoft 365 Admin Center.

Minutes after initial acknowledgement, Microsoft initiated mitigation steps and reported that all services were in the process of recovering. “We’ve confirmed that all services are recovering following our mitigation actions. We’re continuing to monitor recovery,” the company said in an update.

Roughly an hour later, Microsoft posted another update, saying, “Our telemetry indicates that all of our services have recovered and that the impact is resolved.”

“The Microsoft outage that disrupted Teams, Exchange Online, and related services was ultimately caused by an overly aggressive traffic management update that unintentionally rerouted and choked legitimate service traffic. According to Microsoft’s official post-incident report, the faulty code was rolled back swiftly, but not before triggering global access failures, authentication timeouts, and mass user logouts,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research.

Microsoft did not immediately respond to a request for comment.

Not an isolated incident

This incident adds to a growing number of high-profile cloud service disruptions across the industry, raising questions about the resilience of hyperscale infrastructure and the impact on cloud-dependent enterprises. In the last 30 days, IBM Cloud services were disrupted three times, and a Google Cloud outage just last week affected over 50 services globally for more than seven hours.

Microsoft, in particular, has experienced a steady stream of service disruptions in recent months, exposing persistent fault lines in its cloud infrastructure.

In March this year, an outage disrupted Outlook, Teams, Excel, and more, affecting over 37,000 users. In May, Outlook suffered another outage, which Microsoft attributed to a faulty change.

According to Gogia, this sustained pattern reveals architectural brittleness in Microsoft’s control-plane infrastructure — especially in identity, traffic orchestration, and rollback governance — and reinforces the urgent need for structural mitigation.

Costly outages call for contingency planning

Given the complexity and global scale of hyperscale cloud infrastructures, outages remain an ongoing risk for leading SaaS platforms, including Microsoft 365. The risk is amplified for enterprises operating in hybrid and remote work environments, where such disruptions directly threaten business continuity.

Such outages can lead to lost productivity and disrupted communications, depending on the applications they affect as well as the extent of the outage. This could mean losses ranging from thousands of dollars to potentially millions for some, explained Neil Shah, vice president of research at Counterpoint Research.

Manish Rawat, analyst, TechInsights, said industry estimates suggest that IT downtime can cost mid- to large-sized enterprises between $100,000 and $500,000 per hour, depending on their sector and the criticality of operations. “For large organizations, even a brief 2–3 hour outage could result in millions in lost productivity, reputational harm, and serious operational setbacks, especially in high-stakes sectors like finance, healthcare, and manufacturing,” he said.

Given the recent incidents involving Microsoft 365 services alone, experts believe that enterprises must reduce their overdependence on Microsoft 365. “Organizations should adopt robust contingency plans that include alternative communication tools, offline access to critical documents, and a comprehensive incident response framework,” said Prabhu Ram, VP of the industry research group at CyberMedia Research (CMR).

Category: Hacking & Security

Grammarly looks to evolve into an always-on desktop AI agent

Computerworld.com [Hacking News] - 17 June, 2025 - 12:00

Grammarly is reinventing itself as a platform of generative AI (genAI) agents that go beyond grammar recommendations.

The company is building always-on AI technologies that follow users across work applications and can coordinate projects, write documents and automate workflows.  The growth is partially fueled by a cash infusion of $1 billion from General Catalyst last month.

Users will be able to have deeper conversations with, and get recommendations from, a variety of AI tools that draw context from documents and actions. The company hopes the tools will attract power users who want more than automatic recommendations.

“We’re going to be able to give you much more feedback than, ‘Here are correct words,’” Noam Lovinsky, chief product officer at Grammarly, told Computerworld. “We’ll be able to give you feedback from experts that you care about. We’ll be able to help you right from start to finish.”

For example, an agent could access Zoom transcripts from candidate interviews and create draft scorecards.

“Maybe at the end of that process, Grammarly says, ‘Actually, I have this agent. And if you want, I can like create a draft for you of every single score card the minute you get off of the Zoom,’” Lovinsky said.

Grammarly can take that further in its Coda team workspace tool. It can automate that post-Zoom workflow by generating a table of all interviews, linking transcripts, and generating draft responses. Users can review and refine the drafts before sending the data to hiring systems such as Greenhouse.

“That is almost like the common interface layer by which all the agents are going to get to show up. They’ll come to your applications the same way that Grammarly does today,” Lovinsky said.

Lovinsky offered another example of how Grammarly tools will work with popular collaboration tools from Slack and Atlassian. “If you’re writing a status update for a project, we’ll actually know the latest things that have been said in Slack about that project and the latest Jira tickets that have been filed, so we can help you coalesce those things and create a good update.”

Grammarly claims 40 million users and supports 500,000-plus applications and websites. Its tools typically slip into the user interface without disturbing the flow of work, which is what the company wants to continue.

“You just install it and it just works, and you just go do your thing and we show up in the right ways and in the right moments,” Lovinsky said.

The genAI technologies will include a companion that can go deeper into context, which could appeal to power users. “If you want more than just what shows up in the underlines, if you want deeper work with more back and forth, we’ll have a companion that opens up and works with every application,” Lovinsky said.

Grammarly appears to have thought about what technology it has that it could build on to offer a differentiated genAI tool, said Nancy Gohring, senior research director for AI at IDC. “What it landed on was the platform it had already created that allowed the original Grammarly application to work across third-party applications,” she said.  

Grammarly’s advantage comes from leveraging its existing platform as a way to offer a range of agents that work across the third-party apps already deployed in the enterprise, Gohring said.

But it will also compete with numerous companies delivering similar tools. Microsoft and Google already provide document-drafting and automation AI agents. Users rely on large language models (LLMs) for grammar and error correction.

But with LLMs, users must cut, paste, and prompt to get answers. Grammarly wants to ease that burden by working within the user interface and automatically understanding the right context.

“What I want to do is create an interface that doesn’t require you to prompt and re-prompt until you get your output,” Lovinsky said. “It just works.”

Grammarly builds its own LLMs, but also uses commercial AI providers. Users cannot explicitly choose their model, but that limitation could change.

“I think we’re going to change that because there are more sophisticated users,” Lovinsky said. “What they really like is that we bring it inline.”

The company plans to focus on tools for knowledge workers in the wider market and not target specific domains. “You’re not going to see us do a lot in coding, data analysis, or design work,” Lovinsky said.

There are already many rivals on the market that aim to allow for agent development, management and collaboration, IDC’s Gohring said.

“Grammarly will need to clearly articulate how it’s different and how it fits adjacent to the others,” she said.

Category: Hacking & Security

OpenAI-Microsoft tensions escalate over control and contracts

Computerworld.com [Hacking News] - 17 June, 2025 - 11:00

The relationship between OpenAI and Microsoft is under growing strain amid extended talks over OpenAI’s restructuring, with OpenAI reportedly considering antitrust action over Microsoft’s influence in the partnership.

OpenAI leaders have considered alleging that Microsoft engaged in anticompetitive practices during their collaboration, a move that could prompt a federal investigation, The Wall Street Journal reported.

The ChatGPT maker is reportedly exploring the option of urging regulators to examine its contractual relationship with Microsoft, along with a public campaign.

Meanwhile, The Information reported that OpenAI is seeking to give Microsoft a roughly 33% stake in its reorganized for-profit unit in exchange for relinquishing rights to future profits.

OpenAI also wants to revise existing contract clauses that grant Microsoft exclusive cloud hosting rights and to exclude its planned $3 billion acquisition of AI startup Windsurf from terms that give Microsoft access to OpenAI’s intellectual property, the report added.

These developments threaten to disrupt one of the most closely watched alliances in the AI sector.

A potential antitrust complaint by OpenAI could heighten regulatory scrutiny of major AI-cloud partnerships and lead enterprise customers to reevaluate risks tied to vendor lock-in and control over core infrastructure.

Microsoft, a major investor since 2019, supports OpenAI through Azure and powers tools like Microsoft 365 Copilot with its models.

However, tensions between OpenAI and Microsoft have been simmering in recent months, with occasional public clashes.

OpenAI has also been trying to reduce its dependence on Microsoft by turning to Google Cloud for additional computing power, while Microsoft has been working to lessen its own reliance on OpenAI by integrating alternative AI models into its Copilot platform, according to Reuters.

Impact on enterprises

A potential regulatory review may weaken enterprise confidence in adopting or expanding the use of Copilot and related tools, particularly in heavily regulated sectors such as healthcare and financial services.

“Over the short to long term, enterprises could face service disruptions, compatibility issues, or increased costs as vendors adjust their business models in response to changes in the partnership or service offerings,” said Prabhu Ram, VP of the industry research group at CyberMedia Research.

OpenAI models currently power Microsoft Copilot. But with growing innovation from rivals like DeepSeek, both firms appear to be preparing for a more independent path.

“The rate at which AI is advancing, especially given what DeepSeek has demonstrated, suggests that being locked into a single model is no longer a prudent strategy for Microsoft,” said Neil Shah, VP of research and partner at Counterpoint Research. “Enterprises will need to prepare for AI tools and platforms that are diverse in capability, modular, and scalable.”

For OpenAI, partnerships with Oracle Cloud and potentially Google Cloud will help scale its models further in enterprise deployments, particularly in the public sector, where Google is working to expand its presence.

“In the end, most cloud and AI providers will need to support multiple models and adopt modular integration to give enterprises more choice,” Shah said. “This way, they avoid becoming a one-trick pony and can select models based on their strengths, future development roadmaps, and alignment with specific use cases.”

Category: Hacking & Security

Hackers switch to targeting U.S. insurance companies

Bleeping Computer - 16 June, 2025 - 22:43
Threat intelligence researchers are warning of hackers breaching multiple U.S. companies in the insurance industry using all the tactics observed with Scattered Spider activity. [...]
Category: Hacking & Security

OpenAI’s MCP move tempts IT to trust genAI more than it should

Computerworld.com [Hacking News] - 16 June, 2025 - 21:23

Generative AI (genAI) poses a classic IT dilemma. When it works well, it is amazingly versatile and useful, fueling dreams that it can do almost anything. 

The problem is that when it does not work well, it might deliver wrong answers, override its instructions, and pretty much reinforce the plotlines of every sci-fi horror movie ever made. That is why I was horrified when OpenAI announced changes late last month that make it much easier to give its genAI models full access to any software using the Model Context Protocol (MCP).

“We’re adding support for remote MCP servers⁠ in the Responses API, building on the release of MCP support in the Agents SDK⁠,” the company said. “MCP is an open protocol that standardizes how applications provide context to LLMs. By supporting MCP servers in the Responses API, developers will be able to connect our models to tools hosted on any MCP server with just a few lines of code.”
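
To make the announcement concrete, attaching a remote MCP server amounts to adding one tool entry to a Responses API request. A minimal sketch of what such a payload might look like (the server label, URL, and model name are placeholders, and field names should be confirmed against OpenAI’s current API reference):

```python
def mcp_tool(server_label: str, server_url: str) -> dict:
    """Describe a remote MCP server as a single tool entry for a Responses API call."""
    return {
        "type": "mcp",
        "server_label": server_label,
        "server_url": server_url,
        "require_approval": "never",  # the permissive setting that widens the blast radius
    }

# Placeholder values throughout; nothing here is a real endpoint.
request_payload = {
    "model": "gpt-4.1",
    "tools": [mcp_tool("payments", "https://example.com/mcp")],
    "input": "List my recent invoices.",
}
# With the openai package, this would be sent via client.responses.create(**request_payload)
```

Note how little stands between the model and the remote server: the `require_approval` setting is the kind of switch the rest of this article argues should not be flipped casually.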

A large number of companies have publicly said they will support MCP, including the makers of popular apps such as PayPal, Stripe, Shopify, Square, Slack, QuickBooks, Salesforce, and Google Drive.

The ability for a genAI large language model (LLM) to coordinate data and actions with all of those apps — and many more — certainly sounds attractive. But it’s dangerous because it allows access to mountains of highly sensitive compliance-relevant data — and a mistaken move could deeply hurt customers. MCP would also allow genAI tools to control those apps, exponentially increasing risks.

If the technology today cannot yet do its job properly and consistently, what level of hallucinogens are needed to justify expanding its power to other apps?

Christofer Hoff, the CTO and CSO at LastPass, took to LinkedIn to appeal to common sense. (OK, if one wanted to appeal to common sense, LinkedIn is probably not the best place to start, but that’s a different story.) 

“I love the enthusiasm,” Hoff wrote. “I think the opportunity for end-to-end workflow automation with a standardized interface is fantastic vs mucking about hardcoding your own. That said, the security Jiminy Cricket occupying my frontal precortex is screaming in terror. The bad guys are absolutely going to love this. Who needs malware when you have MCP? Like TCP/IP, MCP will likely go down as another accidental success. At a recent talk, Anthropic noted that they were very surprised at the uptake. And just like TCP/IP, it suffers from critical deficiencies that will have stuff band-aided atop for years to come.”

Rex Booth, the CISO at identity vendor SailPoint, said the concerns are justified. “If you are connecting your agents to a bunch of highly sensitive data sources, you need to have strong safeguards in place,” he said. 

But as Anthropic itself has noted, genAI models do not always obey their own guardrails.

QueryPal CEO Dev Nag sees inevitable data usage problems. 

“You have to specify what files [the model] is allowed to look at and what files it is not allowed to look at and you have to be able to specify that,” Nag said. “And we already know that LLMs don’t do that perfectly. LLMs hallucinate, make incorrect textual assumptions.”

Nag argued that the risk is — or at least should be — already well known to IT decision makers. “It’s the same as the API risk,” Nag said. “If you open up your API to an outside vendor with their own code, it could do anything. MCP is just APIs on steroids. I don’t think you’d want AI to be looking at your core financials and be able to change your accounting.”

The best defense is to not trust the guardrails on either side of the communication, but to give the exclusion instructions to both sides. In an example with the model trying to access Google Docs, Nag said, dual instructions are the only viable approach.

“It should be enforced at both sides, with the Google Doc layer being told that it can’t accept any calls from the LLM,” Nag said. “On the LLM side, it should be told ‘OK, my intentions are to show my work documents, but not my financial documents.’”
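
Nag’s dual-enforcement advice can be sketched as two independent allowlist checks, one on each side of the call. The paths and function names below are invented for illustration:

```python
LLM_ALLOWED = ("/docs/work",)     # exclusion policy given to the LLM side
SERVER_ALLOWED = ("/docs/work",)  # independent policy enforced by the document layer

def document_server(path: str) -> str:
    """Second guard: the data layer never trusts the caller's filtering alone."""
    if not path.startswith(SERVER_ALLOWED):
        raise PermissionError(f"server-side policy blocks {path}")
    return f"contents of {path}"

def llm_side_fetch(path: str) -> str:
    """First guard: the model's instructions filter requests before they leave."""
    if not path.startswith(LLM_ALLOWED):
        raise PermissionError(f"LLM-side policy blocks {path}")
    return document_server(path)
```

The design point is redundancy: even if the LLM hallucinates past its own instructions, the document layer still refuses requests for, say, financial files.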

Bottom line: the concept of MCP interactiveness is a great one. The likely near-term reality? Not so much.

Category: Hacking & Security

Canalys: Companies limit genAI use due to unclear costs

Computerworld.com [Hacking News] - 16 June, 2025 - 21:11

As companies move from testing out generative AI tools and models into real-world use — also known as inference — they’re having trouble predicting what that use will cost in the cloud, according to a new report from analyst firm Canalys.

“Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a crucial constraint on the path to commercializing AI,” said Canalys senior director Rachel Brindley in a statement. “As AI moves from research to large-scale deployment, companies are increasingly focusing on cost-effectiveness in inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators.”

According to Canalys researcher Yi Zhang, many AI services rely on usage-based pricing models that charge per token or API call; that makes it difficult to predict costs when scaling up usage.

“When inference costs are volatile or excessively high, companies are forced to limit usage, reduce model complexity, or restrict implementation to high-value scenarios. As a result, the broader potential of AI remains underutilized,” said Zhang.
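Zhang’s point can be made concrete with back-of-the-envelope arithmetic. A sketch of a per-token cost projection follows; all prices and volumes are invented for illustration, and real list prices vary by model and provider:

```python
def monthly_inference_cost(requests_per_day, tokens_in, tokens_out,
                           price_in_per_m, price_out_per_m, days=30):
    """Project a recurring inference bill under per-token pricing."""
    per_request = (tokens_in * price_in_per_m + tokens_out * price_out_per_m) / 1_000_000
    return requests_per_day * per_request * days

# 50,000 requests/day, 1,000 tokens in / 500 out, at $2.50 / $10.00 per million tokens
cost = monthly_inference_cost(50_000, 1_000, 500, 2.50, 10.00)  # $11,250 per month
```

Because the bill scales linearly with traffic, prompt length, and model rates all at once, a modest change in any one of them moves the total substantially — which is exactly the volatility Canalys says is pushing companies to limit usage.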

Category: Hacking & Security