Security-Portal.cz is a web portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

Claude catches up to ChatGPT with built-in memory support

Bleeping Computer - 24 June, 2025 - 23:52
AI startup Anthropic is planning to add a memory feature to Claude in a bid to take on ChatGPT, which already has an advanced memory feature. [...]
Category: Hacking & Security

Google Cloud donates A2A AI protocol to the Linux Foundation

Bleeping Computer - 24 June, 2025 - 23:34
Google Cloud has donated its Agent2Agent (A2A) protocol to the Linux Foundation, which has now announced a new community-driven project called the Agent2Agent Project. [...]
Category: Hacking & Security

SonicWall warns of trojanized NetExtender stealing VPN logins

Bleeping Computer - 24 June, 2025 - 22:36
SonicWall is warning customers that threat actors are distributing a trojanized version of its NetExtender SSL VPN client that steals VPN credentials. [...]
Category: Hacking & Security

Windows 10 KB5061087 update released with 13 changes and fixes

Bleeping Computer - 24 June, 2025 - 20:07
Microsoft has released the June 2025 non-security preview update for Windows 10, version 22H2, with fixes for bugs that prevented the Start Menu from launching and broke scanning features on USB multi-function printers. [...]
Category: Hacking & Security

Microsoft fixes known issue that breaks Windows 11 updates

Bleeping Computer - 24 June, 2025 - 19:13
Microsoft is rolling out a configuration update designed to address a known issue causing Windows Update to fail on some Windows 11 systems. [...]
Category: Hacking & Security

Windows 10 users can get extended security updates using Microsoft points

Bleeping Computer - 24 June, 2025 - 19:00
Microsoft says Windows 10 home users who want to delay switching to Windows 11 can enroll in the Extended Security Updates (ESU) program at no additional cost by redeeming Microsoft Rewards points or by enabling Windows Backup to sync their data to the cloud. [...]
Category: Hacking & Security

Trezor’s support platform abused in crypto theft phishing attacks

Bleeping Computer - 24 June, 2025 - 18:54
Trezor is alerting users about a phishing campaign that abuses its automated support system to send deceptive emails from its official platform. [...]
Category: Hacking & Security

Mosyle’s AccessMule makes employee access a little easier for SMBs

Computerworld.com [Hacking News] - 24 June, 2025 - 17:33

Apple device management vendor Mosyle has introduced AccessMule, an easy-to-use workflow platform designed to address a specific set of small business needs related to granting, managing, auditing, sharing, storing, and removing employee access from company systems.

These protections are particularly important when onboarding and offboarding employees.

To understand why this matters, it’s important to consider that the main source of cybersecurity breaches among all businesses is not hackers per se, but intentional or unintentional actions performed by employees. That human factor is behind 74% of all security breaches, according to 2023 research from Verizon.

Mosyle has its own research to explain the problem.

Employee access is a time bomb

According to that data:

  • Around 87% of small and mid-sized businesses (SMBs) say they cannot immediately verify which employees have access to company permissions.
  • Roughly the same percentage of SMBs also fail to promptly revoke access when employees leave.
  • Nearly 90% of companies have found that former employees still had access to company applications and files after leaving.

None of these risks are good, of course — particularly in the context of an unravelling consensus around cybersecurity. So, it makes sense for companies to put sufficient protections in place today rather than face attacks in the future. 

Mosyle initially encountered challenges managing the onboarding and offboarding process itself. “The decision to build AccessMule was born out of necessity at Mosyle,” said the company’s CEO, Alcyr Araujo. “Later, we realized it wasn’t just a gap for our organization, but a fundamental problem that needed to be solved for all SMBs. We’re launching AccessMule today as an independent subsidiary that will empower organizations with a high-quality, secure and efficient access and password management platform at an affordable price.”

What does AccessMule provide?

Mosyle’s wholly owned subsidiary will offer a range of tools designed to defend against the consequences of lax employee access management. The main focus is automating the elements of access control that SMBs often fail to manage: tools to automate onboarding and offboarding processes, controls to assign access based on roles, and oversight reporting that makes it possible to check who has corporate access at any given time.

Additional features include built-in password management, secure encrypted password sharing, and support for shared multi-factor authentication (MFA). Role-based access control (RBAC) features grant permissions in bulk, making it easy to assign permissions for new employees based on their role with a single action. All of these tools and services are available via an easy-to-use portal, the company said. The idea is that IT can maintain oversight of device and employee security, helping them better protect their company.
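
To make the role-based, bulk-assignment idea concrete, here is a minimal Python sketch of RBAC-style onboarding, offboarding, and auditing. The role names, permission sets, and AccessStore class are hypothetical illustrations, not AccessMule's actual data model or API.

```python
# Minimal, hypothetical sketch of role-based access control (RBAC) for
# onboarding/offboarding. Role names, permission sets, and the AccessStore
# class are illustrative assumptions, not AccessMule's real data model or API.

ROLE_PERMISSIONS = {
    "engineer": {"github", "jira", "aws-dev"},
    "finance": {"quickbooks", "expensify"},
    "support": {"zendesk", "statuspage"},
}

class AccessStore:
    def __init__(self):
        self.grants: dict[str, set[str]] = {}  # employee -> granted app permissions

    def onboard(self, employee: str, role: str) -> None:
        # A single action grants every permission attached to the role.
        self.grants[employee] = set(ROLE_PERMISSIONS[role])

    def offboard(self, employee: str) -> None:
        # Prompt revocation closes the gap most SMBs report after departures.
        self.grants.pop(employee, None)

    def audit(self) -> dict[str, list[str]]:
        # Answers "who has access to what right now?"
        return {emp: sorted(perms) for emp, perms in self.grants.items()}

store = AccessStore()
store.onboard("alice", "engineer")
print(store.audit())   # {'alice': ['aws-dev', 'github', 'jira']}
store.offboard("alice")
print(store.audit())   # {}
```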

The ever-booming Apple enterprise

Mosyle’s announcement is just one of a range to emerge from across Apple’s enterprise value chain since WWDC. Just last week, Jamf published its own in-depth Apple-focused security report, while open-source device management vendor Fleet recently announced $27 million in new Series B funding to help accelerate development of its open platform for both cloud- and self-hosted device management for organizations of all kinds. Another vendor, Addigy, recently introduced its own new security partnership with CyberFOX.

It is usual for Apple’s enterprise partners to begin making service announcements after WWDC, inspired by Apple’s moves at the event to enhance enterprise support in its products. It is possible that all of Apple’s reputable device management partners have now begun working with the new Apple betas and the enterprise features Apple is building for introduction this fall.

Apple at WWDC introduced a host of new enterprise-focused improvements, including better support for Apple Accounts in the enterprise, improvements in device management, and a significant enhancement in the quality and quantity of device information IT can access from across their fleets. The latter means IT will even be able to audit MAC addresses, Activation Lock status, storage, and cellular information, as well as AppleCare coverage. Platform SSO, app management, and device-sharing tools were also improved at WWDC.

Category: Hacking & Security

New FileFix attack weaponizes Windows File Explorer for stealthy commands

Bleeping Computer - 24 June, 2025 - 17:00
A cybersecurity researcher has developed FileFix, a variant of the ClickFix social engineering attack that tricks users into executing malicious commands via the File Explorer address bar in Windows. [...]
Category: Hacking & Security

US House reportedly bans WhatsApp from staffers’ devices over security concerns

Computerworld.com [Hacking News] - 24 June, 2025 - 16:20

A US House of Representatives official has reportedly banned WhatsApp from staffers’ government-issued devices, citing cybersecurity concerns about the messaging platform’s data handling practices. The decision adds Meta’s flagship messaging service to a growing list of applications deemed too risky for congressional use.

This ban signals heightened scrutiny of consumer messaging platforms in government environments and reinforces long-standing enterprise security concerns about using consumer-grade communication tools for sensitive business operations.

House cybersecurity office raises multiple red flags

The House Chief Administrative Officer (CAO) informed congressional staffers Monday that WhatsApp is banned on their government devices, according to a report by Axios. It cited an internal email saying “the Office of Cybersecurity has deemed WhatsApp a high risk to users due to the lack of transparency in how it protects user data, absence of stored data encryption, and potential security risks involved with its use.”

House staffers are prohibited from downloading or keeping “any mobile, desktop, or web browser versions” of WhatsApp on House-managed devices, the report said. Those who already have the app installed will be contacted to remove it.

According to the report, the CAO recommended several messaging alternatives, including Signal, Microsoft Teams, Amazon’s Wickr, and Apple’s iMessage and FaceTime. This selection reveals the House’s preference for platforms with stronger enterprise-grade security features or those developed by trusted US technology partners.

The CAO’s office did not respond to Computerworld’s request for comment.

Meta disputed the CAO’s decision. “We disagree with the House Chief Administrative Officer’s characterization in the strongest possible terms,” said a Meta spokesperson. “We know members and their staffs regularly use WhatsApp and we look forward to ensuring members of the House can join their Senate counterparts in doing so officially. Messages on WhatsApp are end-to-end encrypted by default, meaning only the recipients and not even WhatsApp can see them. This is a higher level of security than most of the apps on the CAO’s approved list that do not offer that protection.”

Enterprise-grade requirements take center stage

The House’s decision demonstrates a fundamental shift in how organizations approach messaging platform selection, particularly for sensitive communications.

Counterpoint Research partner Neil Shah said, “Applications meant for enterprise or critical public sector personas need to be enterprise grade, certified and whitelisted by the CIO or IT departments to mitigate any risk concerns.”

The ban represents “a big blow to Meta setting precedent on security concerns or transparency of the data traversing through its apps,” he said.

While WhatsApp remains a highly popular personal application, Shah noted, it “needs to have more transparency on how the data will be handled not just in transit but on servers as there is a deeper integration with Instagram, Facebook and other Meta properties building the user’s social graph to augment Meta’s ad business.”

This WhatsApp ban continues a broader trend of the House restricting technology applications based on security concerns.

“With all the geopolitical tensions, the US house doesn’t want to leave any gaping holes in security as data and information is the new arsenal for countries to get upper hand,” Shah said.

In December 2022, the House banned TikTok from staffers’ devices, citing the app as “high risk due to a number of security issues.” More recently, the House has restricted Microsoft Copilot AI and limited ChatGPT usage to the paid ChatGPT Plus version only, citing concerns about data leaks to unauthorized cloud services.

Enterprise security implications

The House’s decision reflects growing concerns among enterprise IT leaders about consumer messaging platforms, concerns that security experts have documented for years.

Consumer messaging apps such as WhatsApp often lack administrative controls organizations need for compliance and data retention, failing to provide centralized management capabilities or detailed audit trails required in regulated industries. Even more concerning is the metadata exposure issue: Although WhatsApp encrypts message content, communication patterns and usage statistics may still be collected, potentially revealing sensitive business intelligence.

Additionally, WhatsApp backups stored in cloud services are not encrypted by default, leaving chat histories potentially exposed unless users manually enable encrypted backups, a step many users overlook.

Enterprise messaging strategy

For enterprise IT leaders, the House’s WhatsApp decision offers several strategic considerations. Organizations should assess messaging platforms based on enterprise security requirements rather than consumer popularity, evaluating key factors including end-to-end encryption, administrative controls, compliance features, and data residency options.

Clear policies distinguishing between approved personal and professional communication tools can help prevent security gaps while maintaining productivity. The House’s concerns about WhatsApp’s data handling transparency highlight the critical importance of thorough vendor assessments and clear data processing agreements.

Enterprise-grade platforms such as Microsoft Teams, Slack, or specialized secure messaging solutions may better serve organizational security and compliance needs, offering features like data loss prevention, legal hold capabilities, and integration with existing security infrastructure that consumer apps simply cannot match.

The House’s action on WhatsApp may influence other government agencies and enterprises to reevaluate their messaging platform policies. As organizations increasingly rely on digital communication tools, the balance between usability and security will continue to evolve.

Category: Hacking & Security

How Today’s Pentest Models Compare and Why Continuous Wins

Bleeping Computer - 24 June, 2025 - 16:01
Legacy pentests give you a snapshot. Attackers see a live stream. Sprocket's Continuous Penetration Testing (CPT) mimics real-world attackers—daily, not annually—so you can fix what matters, faster. Learn why CPT is the future. [...]
Category: Hacking & Security

US House bans WhatsApp on staff devices over security concerns

Bleeping Computer - 24 June, 2025 - 15:43
The U.S. House of Representatives has banned the installation and use of WhatsApp on government-issued devices belonging to congressional staff, citing concerns over how the app encrypts and secures data. [...]
Category: Hacking & Security

Securonis: A Linux Distro That's Raising the Privacy Bar

LinuxSecurity.com - 24 June, 2025 - 14:57
Linux admins who value privacy baked directly into their operating system should undoubtedly give Securonis a hard look. It's not a name that's been bouncing around forums for years (not yet, anyway), but for anyone managing secure environments, evaluating anonymity-focused setups, or tracking developments in privacy-first operating systems, Securonis is a compelling contender.
Category: Hacking & Security

IBM Donates CBOM Toolset to Linux Foundation

LinuxSecurity.com - 24 June, 2025 - 14:26
IBM recently announced that it was donating its CBOM toolset to the Post-Quantum Cryptography Alliance (PQCA) under the Linux Foundation. If you're a Linux admin or seasoned infosec professional, this announcement should catch your attention: not just as another open-source contribution, but as a serious move toward improving cryptography management in increasingly complex environments.
Category: Hacking & Security

DeepSeek accused of powering China’s military and mining US user data

Computerworld.com [Hacking News] - 24 June, 2025 - 12:54

DeepSeek has willingly provided and will likely continue to provide support to China’s military and intelligence operations, according to a senior US State Department official, raising serious questions about data security for the millions of Americans using the popular AI service.

The Chinese artificial intelligence startup DeepSeek is reportedly actively supporting China’s military and intelligence apparatus while employing sophisticated workarounds to access restricted US semiconductor technology, according to Reuters.

“We understand that DeepSeek has willingly provided and will likely continue to provide support to China’s military and intelligence operations,” the report said, quoting a senior State Department official. “This effort goes above and beyond open-source access to DeepSeek’s AI models.”

“DeepSeek sought to use shell companies in Southeast Asia to evade export controls, and DeepSeek is seeking to access data centers in Southeast Asia to remotely access US chips,” the report added, quoting the official. The allegations come as the Hangzhou-based company’s AI models have gained widespread adoption across US cloud platforms and among US users.

Military connections run deep

The company is referenced more than 150 times in procurement records for China’s People’s Liberation Army and other entities affiliated with the Chinese defense industrial base, according to the US official, who added that DeepSeek had provided technology services to PLA research institutions.

This adds context to recent reports of DeepSeek’s military applications. The PLA has reportedly used DeepSeek’s latest AI models for non-combat tasks, including hospital settings and personnel management. Meanwhile, a research team at a university in northwest China employed the AI to generate 10,000 military scenarios in 48 seconds, a task that traditionally requires 48 hours for human commanders.

Chinese defense contractors have also integrated DeepSeek into autonomous military vehicles, with Chongqing Landship recently deploying it in a self-driving military vehicle at an international defense exhibition.

An Nvidia spokesperson said DeepSeek acquired its products lawfully. “With the current export controls, we are effectively out of the China datacenter market, which is now served only by competitors such as Huawei. Our review indicates that DeepSeek used lawfully acquired H800 products, not H100,” the spokesperson said.

“DeepSeek’s models are open-sourced, allowing developers to modify them as they see fit. Each developer decides how to handle user information subject to applicable laws. We do not support parties that have violated US export controls or are on the US entity lists. We rely on the US government to update the controls and lists as it deems appropriate. Forcing developers to use foreign AI stacks for non-military applications only hurts America in the AI race. The US wins whenever a developer promotes the US AI stack. China has one of the largest populations of developers in the world, creating open-source foundation models and non-military applications used globally. While security is paramount, every one of those applications should run best on the US AI stack,” the spokesperson said.

Export control failures exposed

The allegations highlight what experts describe as fundamental flaws in current US export control policies. “The DeepSeek episode has spotlighted a structural weakness in the US export control regime: the increasing obsolescence of hardware-focused policies in a cloud-native, AI-driven world,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research.

DeepSeek allegedly has access to “large volumes” of Nvidia’s high-end H100 chips, Reuters quoted the US official as saying, despite these processors being under strict US export restrictions since 2022.

Gogia argues that current hardware-focused controls fail to account for “distributed, virtualized environments” where “entities can lease advanced GPUs via third-party cloud access or operate under shell identities across permissive jurisdictions.” He advocates for export controls to evolve toward “a behavioral and intent-based model that evaluates not just what is being used, but how and by whom.”

Data security and surveillance concerns

Among the allegations, the official cited in the report said DeepSeek is sharing user information and statistics with Beijing’s surveillance apparatus. According to Stanford Cyber Policy Center, DeepSeek gathers comprehensive information, including personal details, all text and audio inputs, uploaded files, complete chat histories, and keystroke tracking patterns.

US lawmakers have previously noted that DeepSeek transmits American users’ data to China through “backend infrastructure” connected to China Mobile, a Chinese state-owned telecommunications giant.

Cloud platform paradox creates enterprise risk

Despite the allegations, Amazon, Microsoft, Google, Huawei, and Alibaba Cloud continue offering customers access to DeepSeek models running on their own cloud infrastructure, from which they say information is not sent to DeepSeek or China. But use of the models may pose other risks unrelated to data transfer.

This corporate embrace contrasts sharply with government responses. The US Congress, Navy, Pentagon, NASA, and Texas have banned DeepSeek usage, while Italy blocked DeepSeek in a country-wide prohibition.

“The widespread availability of large language models via public cloud marketplaces, often with ambiguous provenance, unclear jurisdictional obligations, and hidden lineage, creates significant risk exposure for US enterprises,” Gogia warned. He noted that organizations are “effectively ingesting black-box models whose training data, hosting infrastructure, and developer affiliations may be misaligned with their compliance obligations.”

For enterprise customers, the revelations demand immediate policy changes. Gogia recommends organizations “evolve from vendor trust to systemic verification” through AI chain-of-custody audits and strict legal clauses governing data retention and jurisdictional obligations.

“AI integration pipelines must be redesigned with whitelisting at their core, enabling only those vendors that have demonstrably met audit requirements for security, governance, and geopolitical neutrality,” he said. “As AI becomes a strategic backbone rather than a functional add-on, the cost of operational opacity now carries enterprise-wide ramifications.”

Strategic implications and policy gaps

The allegations come amid intensifying US-China AI competition. DeepSeek represents the first time a Chinese AI lab has demonstrated breakthroughs at the absolute frontier of foundational AI research, marking a significant milestone in China’s capabilities.

When DeepSeek’s R1 model launched in January, it briefly caused Nvidia to lose more than $600 billion in market valuation as investors questioned assumptions about AI development costs and competitive advantages.

Some experts question DeepSeek’s claimed breakthroughs, arguing the true training costs were likely much higher than the reported $5.58 million, especially if the company had access to more advanced hardware than publicly disclosed.

Chinese firms create shell companies faster than the Department of Commerce can track them, and export controls remain hard to enforce because semiconductor chips are small, easily concealed, and produced by the millions.

When asked about potential additional sanctions, the official said the department had “nothing to announce at this time,” the report added.

DeepSeek did not respond to our requests for comment about the allegations. Amazon, Microsoft, and Google also did not immediately respond to requests for comment about their continued offering of DeepSeek models.

The detailed allegations suggest US officials are building a comprehensive case regarding the company’s activities, potentially setting the stage for more restrictive measures against Chinese AI firms operating in global markets.

Category: Hacking & Security

College grads face a chaotic, nearly indiscernible IT job market

Computerworld.com [Hacking News] - 24 June, 2025 - 12:00

College grads are being screened out by AI before humans ever see their resumes, even as overwhelmed recruiters list “entry-level” jobs requiring years of experience. At the same time, hiring slowed sharply in April and May across all industries — including cybersecurity.

In April, employer hiring fell to its slowest pace in more than a decade, excluding the early months of the COVID-19 pandemic in 2020. “It’s happening in every industry,” according to a study by the nonprofit ISC2 (International Information System Security Certification Consortium).

Despite strong hiring intentions in late 2024, the cybersecurity field continues to face growing economic pressure. Hiring alone won’t solve skills shortages; organizations must also focus on retention, especially as the cost of an average US hire is nearly $5,000, according to the ISC2 study.

The study focused on hiring for cybersecurity roles, but it found more broadly that many organizations now prioritize soft skills and diverse backgrounds over technical expertise for a number of IT roles, reflecting a shift in a confusing job market. The research also found that certifications now outrank both education and experience when hiring for junior roles, and that more than half of hiring managers say they’ve passed on candidates because of their social media activity.

ISC2 also found that many organizations are hiring people without technical chops for IT roles, preferring candidates with non-technical skills such as teamwork, problem-solving, and analytical thinking. However, a gap remains between expectations and realistic capabilities; for instance, while cloud security is deemed essential, few believe entry-level workers are ready to handle it.

When they do hire for tech roles, 90% of managers prefer IT experience and 89% prefer certifications over formal education.

ISC2 research shows many managers set unrealistic expectations for entry-level cybersecurity roles, despite this group’s potential to fill key skills gaps with proper support. For example, a third of hiring managers ask for advanced certifications like the CISSP in junior roles, even though they require multiple years of experience. Many of those certifications are intended to support more experienced cybersecurity professionals, not entry- and junior-level positions.

For example, 38% of hiring managers require the CISA (ISACA) certification for entry-level positions, even though the certification demands a minimum of five years of professional experience in information systems auditing, control, assurance or security. Likewise, hiring managers expect around a third of entry-level (34%) and junior-level (33%) candidates to hold the CISSP (ISC2) certification, which also requires a minimum of five years of cumulative, paid experience in cybersecurity.

Internships (55%) and apprenticeships (46%) are increasingly used to source talent, especially in sectors such as education, government, and energy, ISC2 said.

Despite concerns about attrition (58%), most managers have the budget for training (75%) and staffing (73%), seeing early-career development as fast, cost-effective, and strategic. Many also hire from non-tech academic backgrounds, recognizing the value of diverse perspectives.

Category: Hacking & Security

Microsoft and OpenAI: Will they opt for the nuclear option?

Computerworld.com [Hacking News] - 24 June, 2025 - 12:00

The fight between Microsoft and OpenAI over what Microsoft should get for its $13 billion investment in the AI company has gone from nasty to downright toxic, with each of the companies considering strategies against the other that can only be described as their nuclear options. 

The stakes couldn’t be higher. 

Microsoft needs access to OpenAI technologies to keep its worldwide lead in AI and to grow its valuation beyond its current level of more than $3.5 trillion. OpenAI needs Microsoft to sign a deal so the company can go public via an IPO. Without an IPO, the company isn’t likely to keep its highly valued AI researchers; they’ll probably be poached by companies willing to pay hundreds of millions of dollars for the talent.

How did things get so down and dirty? What comes next? To find out, let’s look at what started it all: that potential OpenAI IPO.

Chasing a $300 billion+ IPO

OpenAI was originally created as a non-profit with the sole goal of making sure AI was developed and used in an ethical manner. Its founders, including current CEO Sam Altman and tech entrepreneur Elon Musk, said they wanted the technology to be “used in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

But that was before it became clear that trillions of dollars were at stake. So, now the company wants to restructure itself in a way that would include a for-profit focus and allow it to launch an IPO. If the IPO were launched today, the company would be worth an estimated $300 billion. Given that it won’t be launched until next year, the stakes are likely even higher.

Before OpenAI can go public, it must get approval for its restructuring from California, where the company is based, and Delaware, where the company is incorporated. To gain those approvals, it needs to ink a deal with Microsoft, its earliest and primary investor.

Because of the peculiarities of OpenAI’s non-profit founding, Microsoft and OpenAI never made clear how Microsoft would be paid off if OpenAI went public. They’ve been sparring about it for more than a year. Now, the fight has become a steel-cage deathmatch.

Choosing their nuclear options

For its $13 billion investment in OpenAI, Microsoft received sole rights to OpenAI technologies, which it used to build its line of Copilot generative AI (genAI) tools. But nothing in the agreement specified what might happen if OpenAI were to go public.

Two issues are in play: What percentage of the company Microsoft should get for its investment, and whether Microsoft will get long-term exclusive access to OpenAI technologies.

The Financial Times reports that in negotiations over the past year, the companies have discussed Microsoft getting anywhere from a 20% to 49% equity stake in OpenAI, which means the two might be as much as $100 billion apart on what Microsoft’s share should be worth. They’re also still battling over whether Microsoft will continue to get exclusive rights to certain OpenAI technologies.

For now, things are at an increasingly bitter impasse. Both companies are considering options that could threaten the other’s existence.

According to the Wall Street Journal, OpenAI is considering publicly accusing Microsoft of anticompetitive behavior. It might also ask the US government to review their contract to see whether it violates antitrust laws.

Antitrust suits are Microsoft’s biggest nightmare. One such suit laid the company low in the late 1990s and early 2000s and led to a “lost decade” in which Microsoft became an also-ran in the most important new technologies, including the internet, social media, and mobile computing.

Google, Meta, Apple, and Amazon are all currently embroiled in federal antitrust suits and investigations, while Microsoft has remained — for now — untouched. But the US government is quietly investigating whether Microsoft’s AI, cloud, and productivity suite technologies have been used to violate antitrust laws. OpenAI telling the feds that Microsoft violated antitrust laws in their agreement could go a long way toward turning the investigation into an outright prosecution. And in prosecutions, anything can happen, including Microsoft being broken into pieces, even spinning off its AI capabilities.

Microsoft is mulling a nuclear option of its own: it might walk away from negotiations and, in the words of the Financial Times, “rely on its existing commercial contract to retain access to OpenAI’s technology until 2030.” If that were to happen, OpenAI might not be able to go public. That would endanger a $40 billion investment in the company from SoftBank and other investors.

Hovering over it all is an even bigger wildcard. Microsoft’s and OpenAI’s existing agreement dramatically curtails Microsoft’s rights to OpenAI technologies if those technologies reach what is called artificial general intelligence (AGI), the point at which AI becomes capable of human reasoning. AGI wasn’t defined in that agreement, but Altman has said he believes AGI might be reached as early as this year.

If he declares that OpenAI’s technologies have reached AGI, all bets for what might happen are off. At that point, we’d be in uncharted territory with judges trying to decide whether AGI had really been reached or not.

It’s not likely Altman will do that, because it would hand over OpenAI’s fate to the legal system, something he certainly doesn’t want. On the other hand, given the increasing bitterness of this fight, anything could happen. 

Who will be the winner in all this? My bet is still on Microsoft. It’s got enough cash and revenue to wait things out. OpenAI has more to lose than Microsoft. 

Expect OpenAI to blink first.

Category: Hacking & Security

DuckDuckGo will now protect you from fraudulent e-shops, crypto exchanges, and scareware

Zive.cz - bezpečnost - 24 June, 2025 - 09:45
The DuckDuckGo web browser has announced an improved shield against online threats. Scam Blocker can now detect fraudulent crypto exchanges, e-shops, and scareware. DuckDuckGo does not use Google's Safe Browsing for detection.
Category: Hacking & Security

APT28 hackers use Signal chats to launch new malware attacks on Ukraine

Bleeping Computer - 24 June, 2025 - 00:14
The Russian state-sponsored threat group APT28 is using Signal chats to target government entities in Ukraine with two previously undocumented malware families named BeardShell and SlimAgent. [...]
Category: Hacking & Security

Microsoft’s new genAI model to power agents in Windows 11

Computerworld.com [Hacking News] - 23 June, 2025 - 21:40

Microsoft is laying the groundwork for Windows 11 to morph into a genAI-driven OS.

The company on Monday announced a critical AI technology that will make it possible to run generative AI (genAI) agents on Windows without Internet connectivity.

Microsoft’s small language model, called Mu, is designed to respond to natural language queries within the Windows OS, the company said in a blog post Monday. Mu takes advantage of the neural processing units (NPUs) in Copilot+ PCs, Vivek Pradeep, vice president and distinguished engineer for Windows Applied Sciences, said in the post.

Three chip makers — Intel, AMD and Qualcomm — provide NPUs in Copilot+ PCs prebuilt with Windows 11.

Mu already powers an agent that handles queries in the Settings menus in a preview version of Windows 11 available to early adopters with Copilot+ PCs. The feature is available in Windows 11 preview build 26200.5651, which shipped on June 13.

The model provides a better understanding and context of queries, and “has been designed to operate efficiently, delivering high performance while running locally,” Pradeep wrote.

Microsoft is aggressively pushing genAI features into the core of Windows 11 and Microsoft 365. Last month, the company introduced Windows ML 2.0, a developer stack that lets developers make AI features accessible in software applications.

The company is also developing feature- or application-specific AI models for Microsoft 365 applications.

The 330-million-parameter Mu model is designed to reduce AI computing cycles so it can run locally on Windows 11 PCs. Laptops have limited hardware and battery life, constraints that would normally push AI workloads to a cloud service.

“This involved adjusting model architecture and parameter shapes to better fit the hardware’s parallelism and memory limits,” Pradeep wrote.
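
For a rough sense of why a 330-million-parameter model can plausibly stay resident on a laptop NPU, here is a back-of-the-envelope Python sketch of weight memory at different numeric precisions; Microsoft's post does not say which precision Mu uses, so the figures are assumptions for illustration only.

```python
# Back-of-the-envelope weight-memory estimate for a 330-million-parameter
# model at several numeric precisions. The precision Mu actually uses on
# NPUs is not stated in the post, so these figures are illustrative only.

PARAMS = 330_000_000

for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    size_gib = PARAMS * bytes_per_param / (1024 ** 3)
    print(f"{label}: ~{size_gib:.2f} GiB of weights")

# Output: fp32 ~1.23 GiB, fp16 ~0.61 GiB, int8 ~0.31 GiB, int4 ~0.15 GiB.
# Lower-precision weights are what make it plausible to keep the whole
# model resident within an NPU's limited memory budget.
```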

The model also generates high-quality responses with a better understanding of queries. Microsoft fine-tuned a custom Mu model for the Settings menu that could respond to ambiguous user queries on system settings. For example, the model can handle queries that do not specify whether to raise brightness on a main or secondary monitor.

The Mu encoder-decoder model first breaks a query down into a more compact representation of the input, which is then used to generate a response. That’s different from large language models (LLMs) that are decoder-only and must process all of the text while generating responses.

“By separating the input tokens from output tokens, Mu’s one-time encoding greatly reduces computation and memory overhead,” Pradeep said.
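
As a rough structural illustration of that one-time encoding, here is a hedged Python sketch contrasting the two generation loops; encode(), decode_step(), and lm_step() are invented stand-ins for model calls, not the real interfaces of Mu, Phi-3.5, or any library.

```python
# Toy sketch of the two inference patterns described above. encode(),
# decode_step(), and lm_step() are hypothetical stand-ins for model calls,
# not the actual APIs of Mu, Phi-3.5, or any real library.

def encoder_decoder_generate(encode, decode_step, input_tokens, max_new=32):
    """Encoder-decoder: the input is encoded exactly once, and every decoding
    step reuses that fixed encoding (e.g., via cross-attention)."""
    memory = encode(input_tokens)            # one-time cost over the input
    output = ["<start>"]
    for _ in range(max_new):
        token = decode_step(memory, output)  # looks at cached memory + output so far
        if token == "<end>":
            break
        output.append(token)
    return output[1:]

def decoder_only_generate(lm_step, input_tokens, max_new=32):
    """Decoder-only: input and output share one growing sequence, and every
    step runs self-attention over that whole sequence."""
    sequence = list(input_tokens)
    for _ in range(max_new):
        token = lm_step(sequence)            # looks at input + all generated tokens
        if token == "<end>":
            break
        sequence.append(token)
    return sequence[len(input_tokens):]
```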

The encoder–decoder approach was significantly faster than LLMs such as Microsoft’s Phi-3.5, which is a decoder-only model. “When comparing Mu to a similarly fine-tuned Phi-3.5-mini, we found that Mu is nearly comparable in performance despite being one-tenth of the size,” Pradeep said.

Those gains are crucial for on-device and real-time applications. “Managing the extensive array of Windows settings posed its own challenges, particularly with overlapping functionalities,” Pradeep said.

The response time was under 500 milliseconds, which aligned with “goals for a responsive and reliable agent in Settings that scaled to hundreds of settings,” Pradeep said.

Microsoft has many genAI technologies that include OpenAI’s ChatGPT and its latest homegrown Phi 4 model, which can generate images, video and text.

Category: Hacking & Security