Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

Adobe Express Enterprise is where iWork could boldly go

Computerworld.com [Hacking News] - 13 min 33 sec ago

Apple’s deliberate approach to generative AI (genAI) means other companies have been rolling out solutions that include the quickly evolving technology — and Adobe’s pushing hard to realize its benefits. GenAI already figures in its creative products, and as of today it’s available in Adobe Express for Enterprise.

The idea is that you can use Adobe Express for Enterprise to build branded social media content and create regional marketing presentations, media briefings, internal reports and more. As it’s an enterprise product, companies can equip people across their teams with access to the powerful set of tools for automated content creation.

I can’t help but feel that genAI-driven tools like these would make excellent additions across the Apple iWork suite (Numbers, Pages, Keynote). Think how useful it would be to sketch out ideas on an iPhone and refine them on other Apple devices, or even on a PC via iCloud.com. Shipping powerful creative tools like these with every Mac, iPhone, or iPad makes plenty of sense, and most Apple users surely hope for something close to this at WWDC.

After all, Apple did acquire the iWork.ai domain earlier this year. But what do you get from Adobe Express today?

Adobe Express — an enterprise marketing power tool

Adobe Express has tools for AI content creation of frequently required publicity and marketing materials, including images, banners, social media posts and videos. Express also offers QR code generation, while the LLM support means it is possible to create images, templates, and other assets using word prompts. All of this runs on Adobe’s powerful Firefly model.

Adobe Express for Enterprise builds on this. For example, if you run a brand, you can apply brand kits across teams to nurture a consistent appearance. You can also share pre-created templates, and link assets created in other Adobe applications, such as Photoshop. The idea here is that the professionally designed images created by the design department can then be used on an ad hoc basis by regional offices or remote teams.

Companies including IBM Consulting, Dentsu, Red Hat, and Owen Jones already use Express in their work for tasks including ad hoc creation of branded marketing materials and fast content versioning.

The company claims to have addressed one major concern raised by enterprise users seeking to use genAI for image creation: copyright. Trained on public domain assets and images it owns, Adobe has built Firefly to be a system capable of creating commercially safe (as in copyright free) images. Imagery generated with its solution is IP indemnified.

What it means to business

In brief, Adobe Express for Enterprise aims to enable businesses to generate the sheer quantities of personalized content required for customer communications in a multiplatform, social media-connected age. Govind Balakrishnan, Adobe senior vice president, Express Product Group and Creative Cloud Services, promises it will help “fill the content gap” while maintaining brand standards.

That’s all interesting in its own right, of course, but it did also catch my eye that Adobe is working with Microsoft to develop Adobe Express Extension for Microsoft Copilot. The idea behind that effort is to make it possible for Microsoft 365 users to create various kinds of content from within their applications using Copilot chat.

Adobe’s willingness to work with Microsoft, itself currently riding a new wave of positive sentiment thanks to what seems, at least at present, to be a well-received breed of Copilot+ PCs, is also of interest.

Is Adobe, a company that seemed so very impressed with the introduction of Apple Silicon, also prepared to ally with Apple (or vice versa) to bring its genAI creative Firefly toolbox to iPhones, iPads, and Macs? Or will Apple hew close to its traditional path and attempt to bring LLM-powered creative tools in-house?

More importantly, will an iPhone user working in Pages soon be able to use LLM-generated templates to make their documents look even better across any device? Or will that ability remain the domain of Adobe Express, online or in the app? Either way, for anyone in business, Adobe Express Enterprise might soon become a familiar name.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Category: Hacking & Security

Malware Delivery via Cloud Services Exploits Unicode Trick to Deceive Users

The Hacker News - 2 hours 11 min ago
A new attack campaign dubbed CLOUD#REVERSER has been observed leveraging legitimate cloud storage services like Google Drive and Dropbox to stage malicious payloads. "The VBScript and PowerShell scripts in the CLOUD#REVERSER inherently involve command-and-control-like activities by using Google Drive and Dropbox as staging platforms to manage file uploads and downloads," Securonix said.
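The "Unicode trick" in the headline refers to disguising file names. As a hedged illustration (the snippet above does not say which character this campaign used, so the choice here is an assumption), the classic right-to-left override (U+202E) makes an executable's name render with a benign-looking extension, and it can be screened for in a few lines:

```python
# Sketch: how the Unicode right-to-left override (U+202E) can disguise a
# filename, plus a simple screen for bidi-override characters.
# Illustrative only; not necessarily the exact character from this campaign.
RLO = "\u202e"

# Stored name ends in ".exe", but the RLO makes many file listings render
# the tail reversed, so it *looks like* "invoiceexe.pdf".
disguised = "invoice" + RLO + "fdp.exe"

# Explicit bidirectional override/isolate characters worth flagging.
BIDI_OVERRIDES = {"\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
                  "\u2066", "\u2067", "\u2068", "\u2069"}

def has_bidi_override(name: str) -> bool:
    """Return True if a filename carries bidirectional-override characters."""
    return any(ch in BIDI_OVERRIDES for ch in name)

print(has_bidi_override(disguised))     # True
print(has_bidi_override("report.pdf"))  # False
```

Despite the deceptive display, the real extension is still `.exe`, which is why checks on the stored name (rather than the rendered one) catch the trick.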
Category: Hacking & Security

SolarMarker Malware Evolves to Resist Takedown Attempts with Multi-Tiered Infrastructure

The Hacker News - 3 hours 23 min ago
The persistent threat actors behind the SolarMarker information-stealing malware have established a multi-tiered infrastructure to complicate law enforcement takedown efforts, new findings from Recorded Future show. "The core of SolarMarker's operations is its layered infrastructure, which consists of at least two clusters: a primary one for active operations and a secondary one likely…"
Category: Hacking & Security

Adobe Lightroom gets first Firefly feature — Generative Remove

Computerworld.com [Hacking News] - 3 hours 30 min ago

Adobe is bringing Firefly features to Lightroom for the first time with the addition of Generative Remove.

Adobe has been busy adding Firefly features to its various apps over the past year, including Photoshop and Premiere Pro; now it’s Lightroom’s turn.

Lightroom is Adobe’s app for organizing and processing photographs.

As with the similar Generative Fill feature in Photoshop, Generative Remove lets users remove unwanted elements of a photo — wrinkles on a tablecloth in food photography, for example, or distractions in holiday photos — by selecting them and deleting. It does this non-destructively, meaning any changes can be reversed.

Users also have access to several presets to help them get started. 

Generative Remove, available now in early access, relies on Adobe’s Firefly Image 1 model, an older version of the image generation tool compared to the Generative Fill feature in Photoshop. 

Additionally, the AI-powered Lens Blur announced last year at Adobe’s Max event is now generally available. The feature adds an “aesthetic blur” that can be applied to images, with several bokeh effects. Other updates announced Tuesday include a new Lightroom mobile editing experience that “streamlines the mobile toolbar to prioritize the most popular features,” optimization for HDR displays, and easier access to photo libraries in Lightroom mobile and desktop apps. 

Lightroom subscriptions start at $4.99 per user per month for mobile only, and $9.99 a month for access to the Lightroom ecosystem. 

Category: Hacking & Security

Five Core Tenets Of Highly Effective DevSecOps Practices

The Hacker News - 4 hours 57 min ago
One of the enduring challenges of building modern applications is to make them more secure without disrupting high-velocity DevOps processes or degrading the developer experience. Today’s cyber threat landscape is rife with sophisticated attacks aimed at all different parts of the software supply chain, and the urgency for software-producing organizations to adopt DevSecOps practices that deeply…
Category: Hacking & Security

Microsoft launches AI-powered Copilot+ PCs

Computerworld.com [Hacking News] - 5 hours 42 min ago

Microsoft has announced a new category of Windows PCs, designed to leverage the full power of AI. Christened Copilot+ PCs and developed in collaboration with PC manufacturers such as HP, Dell, Samsung, Asus, Acer, Lenovo, and Microsoft Surface, these devices will boast higher processing power, all-day battery life, and a suite of AI features.

Copilot+ PCs represent the “most significant change to the Windows platform in decades,” Yusuf Mehdi, executive vice president and consumer chief marketing officer at Microsoft said in a blog post. “We have completely reimagined the entirety of the PC – from silicon to the operating system, the application layer to the cloud – with AI at the center.”

The first batch of Copilot+ PCs will come with Qualcomm Snapdragon X series processors and will hit the shelves on June 18, the blog said. PCs with Intel and AMD chips will follow soon after.

Unleashing AI power

Microsoft has designed an “all-new” system architecture combining the power of CPU, GPU, and a high-performance Neural Processing Unit (NPU) to add AI capabilities to the Copilot+ PCs.

Copilot+ PCs will be equipped with advanced silicon capable of performing 40 trillion operations per second (40 TOPS), compared with the 10 TOPS of the Intel Meteor Lake processors that power the AI PCs the company launched recently.

“Connected to and enhanced by the large language models (LLMs) running in our Azure Cloud in concert with small language models (SLMs), Copilot+ PCs can now achieve a level of performance never seen before,” Mehdi said in the blog post. The company claimed that this new line of PCs is “20X” more powerful and up to “100X” more efficient to run AI workloads.

They outperform Apple’s MacBook Air 15” by up to 58% in sustained multithreaded performance, all while delivering all-day battery life, Mehdi added in the blog.

Microsoft also introduced a “Recall” feature in the Copilot+ devices, designed to help users find lost information stored in the device. Recall acts like a form of “photographic memory” for the device, the company said.

“CoPilot+ PC is a step change for the entire PC industry,” said Neil Shah, VP for research and partner at Counterpoint Research. “Adding CoPilot and CoPilot+, on-device AI, and rearchitecting Windows ground up, and further optimized with advanced chipset solutions from Qualcomm, Intel, and AMD will redefine the PC experience. These experiences will warrant advanced configurations from compute to memory to run the generative AI-capable assistants. These assistants or gen AI features part of CoPilot+ such as Recall, Live Captions, and Photos are backed by tens of AI data models always running in the background collecting tons of information real-time right on the device, making it more private, secure, and personal. This is the biggest difference between the earlier AI assistants, which always needed to connect to the cloud.”

Focus on security

Every Copilot+ PC comes secured out of the box, Mehdi said in the blog. “The Microsoft Pluton Security processor will be enabled by default on all Copilot+ PCs and we have introduced several new features, updates, and defaults to Windows 11 that make it easy for users to stay secure.”

This setup, paired with Microsoft’s Azure Cloud and Resource Public Key Infrastructure (RPKI), ensures superior AI performance and robust security measures.

Besides, the Copilot AI assistant also gets a major upgrade in the new devices, offering a streamlined interface and access to advanced models like OpenAI’s GPT-4o, enabling more engaging and natural voice interactions, the blog added.

What’s in it for enterprises?

According to industry experts, Copilot+ could be highly relevant for enterprises, providing powerful tools for productivity, creativity, and communication.

“My view is that they will be much more beneficial for enterprises than individuals,” Faisal Kawoosa, chief analyst and founder of Techarc said. “This is because enterprises will have tons of data to really unleash the power of AI. Also, in enterprises a user today works with a complex maze of apps, that’s sometimes a task for users to even remember and then connect them together, make them talk to each other, etc. That’s where AI through copilot + will take care of all such complexities.”

As Microsoft and its partners expand Copilot+ to enterprise PCs, the AI models running in the background will not only boost productivity across core Microsoft Office, Azure AI, and Dynamics CRM applications, but also within solutions from other partners such as Adobe, Cognizant, IBM, ServiceNow, Amdocs, Dell, Siemens, and more, Shah added.

“The CoPilot+ PCs will demonstrate how enterprise-level AI models can adapt and optimize specific workflows, providing employees with a more intelligent assistant powered by CoPilot+. Trained on internal enterprise data, CoPilot+ acts as an assistant, enabling tasks such as file searching, email summarization, smart scheduling, meeting note management, follow-ups, and efficient cross-collaboration among employees across various projects and locations,” Shah said.

With native support for popular apps such as Microsoft 365, Chrome, Spotify, DaVinci Resolve, Affinity Suite, and Zoom, the Copilot+ PCs offer seamless integration into existing workflows. Slack is also getting added later this year, Mehdi said in the blog.

“I think now users in enterprises will only need to know what and why they want to do a task, how it will be done will be left to AI,” Kawoosa said.

Category: Hacking & Security

Researchers Uncover Flaws in Python Package for AI Models and PDF.js Used by Firefox

The Hacker News - 6 hours 8 min ago
A critical security flaw has been disclosed in the llama_cpp_python Python package that could be exploited by threat actors to achieve arbitrary code execution. Tracked as CVE-2024-34359 (CVSS score: 9.7), the flaw has been codenamed Llama Drama by software supply chain security firm Checkmarx. "If exploited, it could allow attackers to execute arbitrary code on your system…"
Category: Hacking & Security

Streamlining IT Security Compliance Using the Wazuh FIM Capability

The Hacker News - 6 hours 9 min ago
File Integrity Monitoring (FIM) is an IT security control that monitors and detects file changes in computer systems. It helps organizations audit important files and system configurations by routinely scanning and verifying their integrity. Most information security standards mandate the use of FIM for businesses to ensure the integrity of their data. IT security compliance involves adhering to…
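The scan-and-verify loop described above can be sketched in a few lines. This is a minimal illustration of the FIM idea only, not Wazuh's implementation (which adds real-time events, registry monitoring, and alerting): snapshot SHA-256 hashes of the monitored files, then diff a later snapshot against the stored baseline.

```python
# Minimal FIM sketch: hash every file under a directory, then classify
# changes against a previously recorded baseline.
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict:
    """Map each file under `root` to the SHA-256 hash of its contents."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*"))
        if p.is_file()
    }

def diff(baseline: dict, current: dict) -> dict:
    """Classify integrity changes relative to the stored baseline."""
    return {
        "added":    sorted(set(current) - set(baseline)),
        "removed":  sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

In a real deployment the baseline would be stored out of band and the scan scheduled; any non-empty `diff` result becomes an audit event.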
Category: Hacking & Security

What happens when genAI vendors kill off their best sources?

Computerworld.com [Hacking News] - 6 hours 30 min ago

If you think the latest generative AI (genAI) tools such as Google AI Overviews and OpenAI GPT-4o will change the world, you’re right. They will. But will they change it for the better? That’s another question.

I’ve been playing with both tools (and other genAI programs, as well). I’ve found they’re still prone to hallucinations, but sound more convincing than ever. That’s not a good thing.

One of the reasons I’m still making a living as a tech journalist is that I’m very good at discerning fact from fantasy. Part of that skill set comes from being an excellent researcher. The large language models (LLMs) that underpin genAI chatbots…, not so much. Today, and for the foreseeable future, genAI at its best is really just very good at copying and pasting from the work of others.

That means the results they spit out are only as good as their sources. Look at it this way: if I want to know about the latest news, I go to The New York Times, the Washington Post, and the Wall Street Journal. Not only do I trust their reporters, but I know what their biases are. 

For example, I know I can believe what the Journal has to say about financial news, but I take their columnists with a huge grain of salt. (That’s just me; you might love them.)

As for the Times, remember it claims that OpenAI has stolen its stories to train ChatGPT — and if it wins its case, genAI is in trouble, because other publishers will follow in quick succession. When that happens, all the genAI engines will have to steal — uhm, learn — their content from the likes of Reddit; your “private” Slack messages; and Stack Overflow, where users are sabotaging their answers to screw up OpenAI.

That’s not going to go well. There’s a reason genAI engines often spew garbage; it’s what they were trained on. For instance, 80% of OpenAI GPT-3 tokens come from Common Crawl. Like the name says, these petabytes of data are scraped from everywhere and anywhere on the web. As a Mozilla Foundation study found, the result is not trustworthy AI.

Worse still, this will eventually lead to a time when those genAI tools start consuming their own garbage. This is a known problem that will cause model collapse. Or, as neuroscientist Erik Hoel pithily describes the end result: “synthetic garbage.” He’s not alone; many AI engineers think a little bit of AI-generated data can poison their LLMs.
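Model collapse is easy to demonstrate in miniature. The toy sketch below is my own illustration, not from the cited research: each "generation" fits a Gaussian only to samples drawn from the previous generation's fit, and with nothing but synthetic output to learn from, the learned spread decays toward zero.

```python
# Toy model-collapse demo: repeatedly refit a Gaussian to its own samples.
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

def one_generation(mu: float, sigma: float, n: int = 50):
    """Sample n points from the current model, then refit mean and stdev."""
    data = [random.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(data), statistics.pstdev(data)

mu, sigma = 0.0, 1.0          # generation 0: the "real data" distribution
history = [sigma]
for _ in range(300):          # 300 generations trained purely on own output
    mu, sigma = one_generation(mu, sigma)
    history.append(sigma)

print(f"initial stdev: {history[0]:.3f}, final stdev: {history[-1]:.6f}")
```

With a finite sample each round, estimation error compounds generation after generation, so the fitted spread drifts downward and the "model" forgets the tails of the original distribution — a small-scale version of the synthetic-garbage spiral described above.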

At the same time, genAI companies aren’t doing us — or themselves, in the long run — any favors. For example, Google’s AI-powered “Overviews” provides concise AI summaries at the top of search results. This move promises quicker access to information, and Google’s Liz Reid claims it will drive more clicks to websites by piquing users’ interest.

Reid, who oversees search operations, maintains that AI Overviews really will encourage more searches and clicks to websites as users seek to “dig deeper” after getting the initial synthesized summary.

Publishers know better. Who will bother to go to the real story, which might require a subscription or — horrors — seeing an ad?

Danielle Coffee, CEO of the News Media Alliance (it represents more than 2,200 publishers) warns that the change could be “catastrophic” for an industry already struggling with declining ad revenue. “It’s offensive and potentially unlawful for a dominant monopoly like Google to dictate the rules in a way that sacrifices the interests of publishers and creators,” she said.

Google has never been a friend to publishers. Just ask leaders in countries like Spain or Canada, where the government tried to get Google to pay publishers for access to their news sites. 

If Google, Microsoft, and other genAI companies keep all those search visitors (and ad revenues) to themselves, as I expect will be the case, publications will die at an even faster rate. And there goes any authoritative information Google and the other AI services need for their LLMs. 

OpenAI’s co-founder, Sam Altman, recently said, “GPT-4 is the dumbest model any of you will ever have to use again by a lot” and that “GPT-5 is going to be a lot smarter.”

I’m sure it will be. GPT-4o is clearly superior to its predecessor and GPT-5 will continue the trend. But GPT-6 and beyond? Simple greed may ensure that, as reliable human-created stories disappear, AI will only get dumber and dumber.  

In short, we’re looking at a future filled with AI GIGO: Garbage In, Garbage Out. No one wants that. The time to stop it is now. 

Category: Hacking & Security

Windows 11 to Deprecate NTLM, Add AI-Powered App Controls and Security Defenses

The Hacker News - 7 hours 28 min ago
Microsoft on Monday confirmed its plans to deprecate NT LAN Manager (NTLM) in Windows 11 in the second half of the year, as it announced a slew of new security measures to harden the widely-used desktop operating system. "Deprecating NTLM has been a huge ask from our security community as it will strengthen user authentication, and deprecation is planned in the second half of 2024," the company said.
Category: Hacking & Security

NextGen Healthcare Mirth Connect Under Attack - CISA Issues Urgent Warning

The Hacker News - 9 hours 17 min ago
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added a security flaw impacting NextGen Healthcare Mirth Connect to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation. The flaw, tracked as CVE-2023-43208 (CVSS score: N/A), concerns a case of unauthenticated remote code execution arising from an incomplete…
Category: Hacking & Security

"Linguistic Lumberjack" Vulnerability Discovered in Popular Logging Utility Fluent Bit

The Hacker News - 9 hours 47 min ago
Cybersecurity researchers have discovered a critical security flaw in a popular logging and metrics utility called Fluent Bit that could be exploited to achieve denial-of-service (DoS), information disclosure, or remote code execution. The vulnerability, tracked as CVE-2024-4323, has been codenamed Linguistic Lumberjack by Tenable Research. It impacts versions from 2.0.7 through…
Category: Hacking & Security

Slack updates AI ‘privacy principles’ after user backlash

Computerworld.com [Hacking News] - May 20, 2024 - 20:57

Slack has updated its “privacy principles” in response to concerns about the use of customer data to train its generative AI (genAI) models. 

The company said in a blog post Friday that it does not rely on user data — such as Slack messages and files — to develop the large language models (LLMs) powering the genAI features in its collaboration app. But customers still need to opt out of the default use of their data for its machine learning-based recommendations.

Criticism of Slack’s privacy stance apparently started last week, when a Slack user posted on X about the company’s privacy principles, highlighting the use of customer data in its AI models and the requirement to opt out. Others expressed outrage in a Hacker News thread.

On Friday, Slack responded to the frustrations with an update to some of the language of its privacy principles, attempting to differentiate between its machine learning and LLMs. 

Slack uses machine learning techniques for certain features such as emoji and channel recommendations, as well as in search results. While these ML algorithms are indeed trained on user data, they are not built to “learn, memorize, or be able to reproduce any customer data of any kind,” Slack said. These ML models use “de-identified, aggregate data and do not access message content in DMs, private channels, or public channels.”
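As a hypothetical sketch of what "de-identified, aggregate data" can look like in practice (my illustration, not Slack's actual pipeline; the salt and event shape are assumptions), raw user IDs are replaced with salted hashes and only per-pseudonym emoji counts are retained, so message text never enters the feature data:

```python
# Sketch of de-identified aggregation: pseudonymize IDs, keep only counts.
import hashlib
from collections import Counter

SALT = b"rotate-me"  # assumption: a secret salt, rotated periodically

def pseudonymize(user_id: str) -> str:
    """Replace a raw user ID with a salted, truncated hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:12]

def aggregate_emoji(events):
    """Keep per-pseudonym emoji counts only; message text is never read."""
    counts = {}
    for ev in events:
        user = pseudonymize(ev["user_id"])
        counts.setdefault(user, Counter())[ev["emoji"]] += 1
    return counts

events = [
    {"user_id": "U123", "emoji": ":thumbsup:", "text": "quarterly numbers"},
    {"user_id": "U123", "emoji": ":thumbsup:", "text": "ship it"},
    {"user_id": "U456", "emoji": ":tada:",     "text": "launch day"},
]
agg = aggregate_emoji(events)  # no raw IDs, no message content inside
```

The resulting table can train a recommendation model without it ever being able to reproduce a DM or a raw identity, which is the distinction Slack is drawing between its ML features and LLM training.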

No customer data is used to train the third-party LLMs used in its Slack AI tools, the company said.

Slack noted the user concerns and acknowledged that the previous wording of its privacy principles contributed to the situation.  “We value the feedback, and as we looked at the language on our website, we realized that they were right,” Slack said in a blog post Friday. “We could have done a better job of explaining our approach, especially regarding the differences in how data is used for traditional machine-learning (ML) models and in generative AI.”  

“Slack’s privacy principles should help it address concerns that could potentially stall adoption of genAI initiatives,” said Raúl Castañón, senior research analyst at 451 Research, part of S&P Global Market Intelligence.

However, Slack continues to opt customers in by default when it comes to sharing user data with the AI/ML algorithms. To opt out, the Slack admin at a customer organization must email the company to request their data is no longer accessed. 

Castañón said Slack’s stance is unlikely to allay concerns around data privacy as businesses begin to deploy genAI tools. “In a similar way as with consumer privacy issues, while an opt-in approach is considerably less likely to get a response, it typically conveys more trustworthiness,” he said.

A recent survey by analyst firm Metrigy showed that the use of customer data to train AI models is the norm: 73% of organizations polled are training or plan to train AI models on customer data.

“Ideally, training would be opt-in, not opt-out, and companies like Slack/Salesforce would proactively inform customers of the specifics of what data is being used and how it is being used,” said Irwin Lazar, president and principal analyst at Metrigy.  “I think that privacy concerns related to AI training are only going to grow and companies are increasingly going to face backlash if they don’t clearly communicate data use and training methods.”

Category: Hacking & Security

Exploring the Central Role of Linux in Quantum Computing

LinuxSecurity.com - May 20, 2024 - 18:11
The intersection of Linux and quantum computing has become increasingly apparent, emphasizing the importance of Linux-based operating systems in developing and deploying quantum computing technologies. As quantum computing technology advances, there is a growing need for operating systems that can support quantum computing frameworks. This interdisciplinary discussion should be particularly interesting to Linux admins, infosec professionals, internet security enthusiasts, and sysadmins, as the impact on security and infrastructure is significant.
Category: Hacking & Security

Iranian MOIS-Linked Hackers Behind Destructive Attacks on Albania and Israel

The Hacker News - May 20, 2024 - 18:05
An Iranian threat actor affiliated with the Ministry of Intelligence and Security (MOIS) has been linked to destructive wiping attacks targeting Albania and Israel under the personas Homeland Justice and Karma, respectively. Cybersecurity firm Check Point is tracking the activity under the moniker Void Manticore, which is also known as Storm-0842 (formerly DEV-0842) by…
Category: Hacking & Security

Does Apple want to lower genAI expectations for WWDC?

Computerworld.com [Hacking News] - May 20, 2024 - 17:58

There’s been a change in tone concerning what to expect from Apple’s forthcoming AI announcements at WWDC, so perhaps it’s time to moderate the hype.

What we’ve been hoping for is an impressive counterattack from the company, one designed to shrug off speculation that it is falling behind on AI. With thousands of engineers and billions of dollars focused on AI research and development, expectations of something impressive have been building. However, if Apple industry bellwether Mark Gurman has it right, Apple’s planned announcements, while good, might not quite reach the pinnacle of great.

Is better enough?

That doesn’t mean what’s coming won’t be interesting or noteworthy. Gurman seems to expect some impressive highlights, including tools such as voice memo transcription, summaries of notifications and web pages, and generative AI-powered editing tools.

The latter will apparently work in a similar way to how genAI works in Adobe’s creative apps, which presumably means you’ll be able to generate machine-created images and apply edits using voice/text prompts on your devices. Gurman doesn’t seem to think these will impress regular Adobe users, but given that most of the world’s population don’t use Adobe, it’s reasonable to suppose that for many millions of people, Apple’s tools will be their first exposure to the potential of such technologies.

The gap remains

All the same, despite Apple’s advantages in market reach and platform size, Gurman claims Apple executives still think there is a gap between the current pace of Apple’s genAI development and that of its competitors. He even says this gap is unlikely to close soon, which is perhaps why Apple has been speaking to competitors such as OpenAI, Google, and Baidu.

It’s possible we’ll learn of a deal between OpenAI and Apple at or around WWDC 2024, potentially including integration of ChatGPT natively on the iPhone. We might also see interesting new features built on the company’s recently introduced tools for accessibility.

If Apple’s truly not quite there yet, it will not want to disappoint with weak-tea WWDC news — and that makes this a good time to constrain the optimism. This part of the match isn’t over; Apple and AI is a work in progress; and the company’s R&D teams continue to churn out powerful-seeming foundational technologies, including its very own multimodal LLM, called Ferret.

The privacy thing

One area of speculation I don’t think Apple hopes to quash revolves around privacy and edge AI. It seems probable that edge intelligence will guide some features, implying that when you do use genAI on your iPhone, the process/data will be kept confidential. 

That is essential if Apple wants to make using these tools a customary part of daily life, particularly in the enterprise space. With that in mind, it is curious that the tone of Gurman’s comments suggests Apple’s focus on privacy and security is limiting what it can achieve with AI — though that focus does at least protect user information. Apple is planning its own cloud-based genAI services that should deliver functionality and security, and is investing in highly secure data center processors.

There is one more card in play that could work in Apple’s favor in the long game: ChatGPT and Google Gemini are server-based solutions, but their future evolution will be constrained by AI regulation and the need to maintain data sovereignty.

These forces will become a barrier to growth, and it remains possible that by focusing on data privacy today, Apple could hold a winning hand by the end of the game. So, while company insiders may be attempting to guide expectation a little lower as we travel toward the new AI iPhone in fall, the game at this table isn’t over yet. Partnership, or even acquisition, could be the next set of cards in Apple’s deck.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Category: Hacking & Security

Empowering Linux and Open-Source Security with AI: Strategies, Tools and Best Practices

LinuxSecurity.com - May 20, 2024 - 16:22
It's hard to think of a technology more impactful than Artificial Intelligence (AI). While it's been around for a while, it's only recently broken into the mainstream. Now that it has, it's rewriting the playbook for much of the tech industry, especially open-source software (OSS).
Category: Hacking & Security

Research Indicates All Linux Vendor Kernels Are Insecure - But There's a Fix!

LinuxSecurity.com - May 20, 2024 - 15:55
Recent research sheds light on the security vulnerabilities prevalent in Linux vendor kernels due to flawed engineering processes that backport fixes. It emphasizes the importance of using the most up-to-date kernel releases for enhanced security, challenging the traditional vendor-bound kernel model.
Category: Hacking & Security

Foxit PDF Reader Flaw Exploited by Hackers to Deliver Diverse Malware Arsenal

The Hacker News - May 20, 2024 - 14:20
Multiple threat actors are weaponizing a design flaw in Foxit PDF Reader to deliver a variety of malware such as Agent Tesla, AsyncRAT, DCRat, NanoCore RAT, NjRAT, Pony, Remcos RAT, and XWorm. "This exploit triggers security warnings that could deceive unsuspecting users into executing harmful commands," Check Point said in a technical report. "This exploit has been used by multiple…"
Category: Hacking & Security
Syndicate content