Security-Portal.cz is a web portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of useful services and supports its community in interesting projects.

Categories

Ubuntu Linux 24.04 LTS Beta Released with Enhanced Security & Performance

LinuxSecurity.com - 7 hours 44 min ago
Canonical has recently announced the Beta release of Ubuntu Linux 24.04 LTS, codenamed "Noble Numbat." This release aims to continue Ubuntu's legacy of incorporating cutting-edge open-source technologies into a user-friendly, high-quality distribution.
Category: Hacking & Security

Ex-Security Engineer Jailed 3 Years for $12.3 Million Crypto Exchange Thefts

The Hacker News - 13 April 2024 - 16:25
A former security engineer has been sentenced to three years in prison in the U.S. on charges relating to hacking two decentralized cryptocurrency exchanges in July 2022 and stealing over $12.3 million. Shakeeb Ahmed, the defendant in question, pleaded guilty to one count of computer fraud in December 2023 following his arrest in July.
Category: Hacking & Security

U.S. Treasury Sanctions Hamas Spokesperson for Cyber Influence Operations

The Hacker News - 13 April 2024 - 15:58
The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Friday announced sanctions against an official associated with Hamas for his involvement in cyber influence operations. Hudhayfa Samir ‘Abdallah al-Kahlut, 39, also known as Abu Ubaida, has served as the public spokesperson of Izz al-Din al-Qassam Brigades, the military wing of Hamas, since at least 2007.
Category: Hacking & Security

Hackers Deploy Python Backdoor in Palo Alto Zero-Day Attack

The Hacker News - 13 April 2024 - 10:25
Threat actors have been exploiting the newly disclosed zero-day flaw in Palo Alto Networks PAN-OS software since March 26, 2024, nearly three weeks before it came to light yesterday. The network security company's Unit 42 division is tracking the activity under the name Operation MidnightEclipse, attributing it to the work of a single threat actor.
Category: Hacking & Security

Growth in Open Source Use Among Businesses Analyzed

LinuxSecurity.com - 12 April 2024 - 23:27
The open-source movement has come a long way, from its origins in the 1960s and 1970s to becoming an integral part of organizations worldwide. Recently, its adoption across various industries has increased significantly.
Category: Hacking & Security

This month’s Patch Tuesday release is a big one

Computerworld.com [Hacking News] - 12 April 2024 - 21:02

Microsoft released 149 updates in this month’s Patch Tuesday release, though there were no reports of public disclosures or other zero-days for the Microsoft ecosystem (Windows, Office, .NET). This update is very large and complex and will require some testing time, especially for the OLE-, ODBC- and SQL-focused updates and their impact on complex applications. 

Microsoft also moved to make security-related CVE entries much easier to understand by adopting the new CWE vulnerability reporting standard. The team at Application Readiness has provided an infographic detailing the risks associated with the April updates. 

Known issues 

Each month, Microsoft publishes a list of known issues that relate to the operating system and platforms included in the latest update cycle, including these two reported minor issues:

  • After you install KB5034203 or later updates, some Windows devices that use the DHCP Option 235 to discover Microsoft Connected Cache (MCC) nodes in their network might be unable to use those nodes. Microsoft is actively working on this issue, and so we should expect an update soon.
  • Some users of Windows Server 2008 will see messages that say, “Failure to configure Windows updates. Reverting Changes. Do not turn off your computer,” when attempting to update legacy devices. This may be the result of an improperly configured ESU (Extended Security Updates) license. Microsoft has recently updated its guidelines on acquiring and configuring ESU keys, which may help those still struggling.

Major revisions 

This month, Microsoft published these revisions to past updates:

  • CVE-2022-0001: Branch History Injection. Reason for revision: Corrected one or more links in the FAQ. This is an informational change only. No further action required.
  • CVE-2023-24932: Secure Boot Security Feature Bypass Vulnerability: Updated FAQs to include information on how to be protected from this vulnerability for customers running Windows 11 23H2 or Windows Server 2022, 23H2 Edition. No further action required.
  • CVE-2013-3900: WinVerifyTrust Signature Validation Vulnerability.

Microsoft has updated the FAQ documentation to inform customers that EnableCertPaddingCheck is data type REG_SZ (a string value) and not data type DWORD. When you specify "EnableCertPaddingCheck" as in "DataItemName1"="DataType1:DataValue1", do not include the data type value or colon. This will mitigate the impact of this vulnerability.
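As a sketch of that mitigation, the opt-in value can be deployed as a .reg file (key path per Microsoft's CVE-2013-3900 FAQ; 32-bit processes on 64-bit Windows also need the matching Wow6432Node key):

```
Windows Registry Editor Version 5.00

; Opt in to the stricter WinVerifyTrust signature padding check (CVE-2013-3900).
; Note the value is a REG_SZ string "1", not a DWORD.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\Wintrust\Config]
"EnableCertPaddingCheck"="1"
```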

There was a significant update to the Kerberos security system within Windows, too, with a change to an existing patch (CVE-2024-21427). Microsoft has removed all supported versions of Windows 11 from the advisory, as they are no longer affected by the vulnerability. (Looks like another reason to upgrade to the latest Windows desktop.)

Mitigations and workarounds

Microsoft released the following vulnerability-related mitigation:

  • CVE-2024-26232: Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability. Microsoft helpfully notes that the MSMQ feature is rarely needed and can be disabled, reducing exposure to this vulnerability. Yep.
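For those who take that advice, checking for and turning off the optional feature on client Windows looks roughly like this (run from an elevated prompt; Windows Server exposes MSMQ as a role instead, so the feature name may differ by edition):

```
DISM /Online /Get-FeatureInfo /FeatureName:MSMQ-Container
DISM /Online /Disable-Feature /FeatureName:MSMQ-Container /NoRestart
```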

Each month, the Readiness team analyzes the latest updates and provides detailed, actionable testing guidance; the recommendations are based on a large application portfolio and detailed analysis of the patches and their potential impact on Windows and apps.

For this release cycle, we grouped the critical updates and required testing efforts into functional areas, including:

File management
  • Test scenarios involving tar.exe or the native support of archives in Windows.
  • Test end-to-end scenarios involving File Management Tasks and Storage Reports Management.
Crypto (local security mechanisms)
  • Test scenarios that utilize Crypto APIs. Please pay special attention to any operation that relies on CryptDecodeObject or CryptDecodeObjectEx.
  • Test your cryptographic operations and key generation, particularly in VTL1 environments.
  • Test out variations of replications on different types and sizes of files and folders. 
Networking (DHCP and DNS)
  • Test functional scenarios where Client DUID is a required parameter. 
  • Send Message with VendorOption of DomainName. 
  • Check whether the client UID is provided to the RPC API.
  • Test DNS virtual instance and zone management scenarios.
Remote desktop and connections
  • Test out point-to-point connections and RRAS servers using the MPRAPI protocols. 
  • Test your VPN connections with a connect/disconnect, delete and repeat test cycle.
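Several of the scenarios above lend themselves to automation. As one hedged illustration, the archive-handling test in the file-management group reduces to a round-trip check; Python's tarfile module stands in here for tar.exe or Explorer's native archive support:

```python
import os
import tarfile
import tempfile

# Minimal archive round-trip check: write a payload, pack it into a tar
# archive, extract it, and verify the bytes survived unchanged.
def roundtrip_ok(payload: bytes) -> bool:
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "data.bin")
        with open(src, "wb") as f:
            f.write(payload)
        archive = os.path.join(tmp, "data.tar")
        with tarfile.open(archive, "w") as tar:
            tar.add(src, arcname="data.bin")
        out = os.path.join(tmp, "out")
        with tarfile.open(archive) as tar:
            tar.extractall(out)
        with open(os.path.join(out, "data.bin"), "rb") as f:
            return f.read() == payload

print(roundtrip_ok(b"\x00\x01" * 1024))  # True
```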

Automated testing will help with these scenarios (especially a testing platform that offers a “delta” for comparison between builds). However, for your line-of-business apps, getting the application owner (doing UAT) to test and approve the results is absolutely essential. 
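As an illustration of such a build-to-build delta comparison (all metric names here are hypothetical), a minimal sketch:

```python
# Minimal sketch of a build-to-build "delta" report: compare metrics captured
# before and after an update and flag regressions beyond a tolerance.
def delta_report(baseline: dict, patched: dict, tolerance: float = 0.10) -> dict:
    """Return metrics whose post-patch value regressed by more than `tolerance`."""
    regressions = {}
    for metric, before in baseline.items():
        after = patched.get(metric)
        if after is None:
            continue  # metric not captured on the patched build
        change = (after - before) / before
        if change > tolerance:  # higher is worse here (e.g. latency in ms)
            regressions[metric] = round(change, 3)
    return regressions

before = {"odbc_connect_ms": 120.0, "sql_query_ms": 45.0}
after = {"odbc_connect_ms": 150.0, "sql_query_ms": 46.0}
print(delta_report(before, after))  # only the 25% ODBC regression is flagged
```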

There have been a large number (24 of this month’s total of 164) of updates to Microsoft SQL components in Windows and to how OLE operates with other Windows features. Applications that require these kinds of “cooperative” interactions are generally complex line-of-business applications. Troubleshooting these update scenarios requires specialist application expertise and can be very time-consuming. 

To prevent downtime, expensive faults, and potentially damaging compliance issues, we strongly recommend an audit of your application portfolio to identify SQLOLE, OLEDB, and ODBC dependencies, with an assessment and testing plan before general deployment of this month’s patches.
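A first pass at such an audit can be automated. A minimal, hypothetical sketch that flags OLE DB and ODBC connection strings in configuration text (provider and driver names are common examples, not an exhaustive list):

```python
import re

# Hypothetical sketch: flag connection strings that depend on the OLE DB/ODBC
# stack touched by this month's SQL Server updates.
PATTERNS = {
    "OLEDB": re.compile(r"Provider\s*=\s*(SQLOLEDB|MSOLEDBSQL)", re.I),
    "ODBC": re.compile(r"Driver\s*=\s*\{[^}]*ODBC[^}]*\}", re.I),
}

def audit_config(text: str) -> list[str]:
    """Return which patched components a config/connection string depends on."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

conn = "Provider=MSOLEDBSQL;Server=db1;Database=orders"
print(audit_config(conn))  # ['OLEDB']
```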

Windows lifecycle update 

This section contains important changes to servicing (and most security updates) for Windows desktop and server platforms.

  • Windows 10 21H2 (E) ends in June 2024.
  • Microsoft .NET 7.0.18 (support ends this month).
  • Microsoft Visual Studio (2022 – 17.4 LTSC) support ends this month.
  • PowerShell 7.3 support ends May 8, 2024.

Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings: 

  • Browsers (Microsoft IE and Edge);
  • Microsoft Windows (both desktop and server);
  • Microsoft Office;
  • Microsoft SQL Server (not Exchange Server);
  • Microsoft Development platforms (ASP.NET Core, .NET Core and Chakra Core);
  • Adobe (if you get this far).

Browsers

Microsoft released just five updates to its Chromium-based browser, all rated important. Note that the next release for this browser platform is the week of April 18. Chromium releases are now out of sync with Microsoft Patch Tuesday updates. Add these updates to your standard patch release schedule. 

Windows

For this (mammoth) release to the Windows platform, the following broad areas have been updated.

  • Windows RAS, ICS, RRAS.
  • Windows Message Queuing.
  • Windows Cryptographic Services, BitLocker, Kerberos and LSASS.
  • Windows Distributed File System (DFS).
  • Windows DHCP Server.
  • Microsoft WDAC OLE DB provider for SQL.
  • Windows Telephony Server.

This month we do not see any reports of publicly disclosed vulnerabilities or exploits in the wild, and if you are on a modern platform (Windows 10/11), all these reported security vulnerabilities are difficult to exploit. Please add this update to your standard Windows release schedule. 

Microsoft Office

Microsoft released only two patches (CVE-2024-26251 and CVE-2024-26257) for the Microsoft Office suite affecting Excel and SharePoint. Both updates are rated important by Microsoft and should be included in your standard Office update schedule.

Microsoft SQL Server (not Exchange Server)

In place (and instead) of Microsoft Exchange Server, we have a special guest this month: Microsoft SQL Server. Microsoft released 38 patches for its database platform, making it one of the largest, most complex, and technically challenging updates in recent memory. 

The important thing to note here is that these updates affect how OLE (object linking and embedding), ODBC and SQL Server operate. As a critical middle layer for most business applications, this update will require significant attention from your in-house development, testing and deployment teams. It is not just a big update. It’s the multiplicative, interdependent nature of multiple cooperating systems that are being updated. Really, really. 

Microsoft development platforms 

Microsoft released 11 updates to the development platform, with 10 focused on Microsoft SQL ODBC issues within Microsoft Visual Studio and the other impacting Microsoft .NET (CVE-2024-21409). This month’s .NET vulnerability has “remote” in the name, but it requires a local account (and permissions) and so can be added to your standard developer release schedule. The other 10, affecting SQL and ODBC? Your in-house development team will need to take an in-depth look at these updates. It could be really messy, so take your time.

Adobe Reader (if you get this far) 

No Adobe updates from Microsoft this month. And (lucky us) there are no other updates to third-party tools or platforms included in this update cycle.

Microsoft, Security, Windows, Windows 10, Windows 11, Windows Security
Category: Hacking & Security

After cloud providers, UK antitrust regulator takes aim at AI

Computerworld.com [Hacking News] - 12 April 2024 - 18:16

The UK’s antitrust regulator has put tech giants on notice after expressing concern that developments in the AI market could stifle innovation.

Sarah Cardell, CEO of the UK’s Competition and Markets Authority (CMA), delivered a speech on the regulation of artificial intelligence in Washington DC on Thursday, highlighting new AI-specific elements of a previously announced investigation into cloud service providers.

The CMA will also investigate how Microsoft’s partnership with OpenAI might be affecting competition in the wider AI ecosystem. Another strand of the probe will look into the competitive landscape in AI accelerator chips, a market segment where Nvidia holds sway.

While praising the rapid pace of development in AI and numerous recent innovations, Cardell expressed concern that existing tech giants are exerting undue control.

“We believe the growing presence across the foundation models value chain of a small number of incumbent technology firms, which already hold positions of market power in many of today’s most important digital markets, could profoundly shape these new markets to the detriment of fair, open and effective competition,” Cardell said in a speech to the Antitrust Law Spring Meeting conference.

Vendor lock-in fears

Anti-competitive tying or bundling of products and services is making life harder for new entrants. Partnerships and investments — including in the supply of critical inputs such as data, compute power and technical expertise — also pose a competitive threat, according to Cardell.

She criticised the “winner-take-all dynamics” that have resulted in the domination of a “small number of powerful platforms” in the emerging market for AI-based technologies and services.

“We have seen instances of those incumbent firms leveraging their core market power to obstruct new entrants and smaller players from competing effectively, stymying the innovation and growth that free and open markets can deliver for our societies and our economies,” she said.

The UK’s pending Digital Markets, Competition and Consumers Bill, alongside the CMA’s existing powers, could give the authority the ability to promote diversity and choice in the AI market.

Amazon and Nvidia declined to comment on Cardell’s speech, while the other vendors name-checked in the speech (Google, Microsoft, and OpenAI) did not immediately reply.

Dan Shellard, a partner at European venture capital firm Breega and a former Google employee, said the CMA was right to be concerned about how the AI market was developing.

“Owing to the large amounts of compute, talent, data, and ultimately capital needed to build foundational models, by its nature AI centralises to big tech,” Shellard said.

“Of course, we’ve seen a few European players successfully raise the capital needed to compete, including Mistral, but the reality is that the underlying models powering AI technologies remain owned by an exclusive group.”

The recently approved EU AI Act and the potential for US regulation of the AI marketplace make for a shifting picture, in which the CMA is just one actor in a growing movement. The implications of regulation and oversight of AI tooling by entities such as the CMA are significant, according to industry experts.

“Future regulations may impose stricter rules around the ‘key inputs’ in the development, use, and sale of AI components such as data, expertise and compute resources,” said Jeff Watkins, chief product and technology officer at xDesign, a UK-based digital design consultancy.

Risk mitigation

It remains to be seen how regulation to prevent market power concentration will influence the existing concentrations — of code and of data — around AI.

James Poulter, CEO of AI tools developer Vixen Labs, suggested that businesses looking to develop their own AI tools should look to utilise open source technologies in order to minimise risks.

“If the CMA and other regulatory bodies begin to impose restrictions on how foundation models are trained — and more importantly, hold the creators liable for the output of such models — we may see an increase in companies looking to take an open-source approach to limit their liability,” Poulter said.

While financial service firms, retailers, and others should take time to assess the models they choose to deploy as part of an AI strategy, regulators are “usually predisposed to holding the companies who create such models to account — more than clamping down on users,” he said.

Data privacy is more of an issue for businesses looking to deploy AI, according to Poulter.

Poulter concluded: “We need to see a regulatory model which encourages users of AI tools to take personal responsibility for how they use them — including what data they provide to model creators, as well as ensuring foundation model providers take an ethical approach to model training and development.”

Developing AI market regulations might introduce stricter data governance practices, creating additional compliance headaches.

“Companies using AI for tasks like customer profiling or sentiment analysis could face audits to ensure user consent is obtained for data collection and that responsible data usage principles are followed,” Mayur Upadhyaya, CEO of APIContext said. “Additionally, stricter API security and authorisation standards could be implemented.”

Dr Kjell Carlsson, head of AI strategy at Domino Data Lab, said: “Generative AI increases data privacy risks because it makes it easier for customers and employees to engage directly with AI models, for example via enhanced chatbots, which in turn makes it easy for people to divulge sensitive information, which an organisation is then on the hook to protect. Unfortunately, traditional mechanisms for data governance do not help when it comes to minimising the risk of falling afoul of GDPR when using AI because they are disconnected from the AI model lifecycle.”

APIContext’s Upadhyaya suggested that integrating user consent mechanisms directly into interactions with AI chatbots and similar tools offers a way to mitigate the risk of falling out of compliance with regulations such as GDPR.
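As a purely illustrative sketch of that idea (class and function names are invented, and the model call is a stub), consent can be recorded and checked before any prompt leaves the application:

```python
# Illustrative sketch (hypothetical API): gate what reaches a third-party
# model behind an explicit, recorded user consent check.
class ConsentGate:
    def __init__(self):
        self._consented: set[str] = set()

    def record_consent(self, user_id: str) -> None:
        """Record that this user agreed to have their input processed."""
        self._consented.add(user_id)

    def forward(self, user_id: str, prompt: str) -> str:
        """Forward the prompt only if consent is on record."""
        if user_id not in self._consented:
            return "Consent required before this message can be processed."
        return call_model(prompt)  # stand-in for the real model call

def call_model(prompt: str) -> str:
    return f"[model reply to: {prompt}]"

gate = ConsentGate()
print(gate.forward("u1", "hi"))  # blocked until consent is recorded
gate.record_consent("u1")
print(gate.forward("u1", "hi"))
```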

Generative AI, Regulation
Category: Hacking & Security

Will AI end Apple’s existential crisis?

Computerworld.com [Hacking News] - 12 April 2024 - 17:43

Consider this: Apple has been working with artificial intelligence (AI) in specific domains for many years. Then OpenAI’s ChatGPT emerged and made Apple look bad. Today as WWDC approaches, the company is expected to deliver souped-up AI across all its devices — and as competitors struggle to catch up in processor design, we expect fresh M4 Macs to appear this fall.

What this means is that Apple may soon offer computationally advanced mass market computers in a range of configurations (iPhone, iPad, Mac, Vision Pro), software with built-in AI to run on those devices, and the integration between hardware, software, and operating systems it needs to make everything work pretty well.

Survivalism

Apple needs to succeed in this gamble. Stung by claims it has fallen behind in AI development, the company wants to regain lost face and restore its reputation at the leading edge of tech. 

That’s not the only reason. With Apple’s former chief designer, Jony Ive, allegedly working with OpenAI’s Sam Altman to design and build what is already being called “the iPhone of AI” and new devices such as Humane’s AI Pin generating interest, the iPhone maker must urgently also seek to consolidate its existing reputation for cutting-edge consumer products. 

Together, both challenges add up to more than the sum of their parts; they also emerge within the framework of multiple existential challenges at the company. Not only is it pressed by the need to burnish its reputation as a tech powerhouse, but it is also enduring heavy-handed regulation as governments seek to break the hold of Big Tech firms over the industry.

Move faster

This even extends to AI. In the UK, the Competition and Markets Authority has already begun monitoring Big Tech and its place in the evolving AI market, which will prompt further evolution in the space as companies seek to build solid presences there.

Apple also faces the same existential challenges as everyone else, including the impact of climate change and its already visible effect on crop yields, economic weakness in many markets, and increasing international tension eroding what has been a happy and mutually profitable relationship with China.

Any of these many problems is challenging in its own right, but together they represent a range of long-term threats to the future of the company.

Apple is no stranger to existential threat. Surviving these is core to the company’s own history, and the track record of triumph in adversity it possesses is second to few. But all these threats need a response, and once again Apple Silicon could turn out to be the wind beneath the company’s wings.

Move fast, make things

That Apple already plans M4 Macs isn’t terribly surprising. The cadence of its Mac processor upgrades seems to be around 12 to 18 months across the four processors in any M range (M-, M- Pro, M- Max, and M- Ultra). With each processor being around 20% improved on the previous generation, the company is making huge strides, setting industry expectations for computational performance and energy requirements for the chip price.

The processors also boast on-chip GPUs and Neural Engines, meaning that all existing Macs already have plenty of computational capability to pump into AI.

Apple Silicon isn’t just inside Macs, either. You also find it inside iPhones. We already anticipate Apple will field the world’s biggest personal AI ecosystem once it ships iOS 18 this fall, and there are claims the next iPhone will also deliver a big bump in computational performance. 

Playing its hand

With WWDC weeks away, it’s becoming clear how Apple is going to approach its next big release cycle. First, it will woo users back to that loving feeling with new and hopefully powerful AI features in its operating systems.

Second, it will introduce iPhones, iPads, and Macs that are faster than any other devices in their class and built to be perfectly capable of demanding generative AI (genAI) tasks on the device itself. We may even see an App Store for AI, where Apple device users can pick and choose between third-party solutions as they seek the perfect smart companion. 

If Apple gets this right, it will convince its already loyal audiences to stick with its hardware, enabling it to continue building sales of additional products and services to a happy user audience. Burnished by the rich patina of AI, iPhones and Macs will remain seriously attractive tools for work and play, and even as economic challenges continue Apple will be able to maintain a strong bottom line.

But if Apple doesn’t make the grade, it will find itself with limited time to turn the Cupertino spaceship around, though it should be more than adequately cushioned for a soft landing.

Probably.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Artificial Intelligence, Generative AI, iOS, Mac, Vendors and Providers
Category: Hacking & Security

Popular Rust Crate liblzma-sys Compromised with XZ Utils Backdoor Files

The Hacker News - 12 April 2024 - 16:55
"Test files" associated with the XZ Utils backdoor have made their way to a Rust crate known as liblzma-sys, new findings from Phylum reveal. liblzma-sys, which has been downloaded over 21,000 times to date, provides Rust developers with bindings to the liblzma implementation, an underlying library that is part of the XZ Utils data compression software.
Category: Hacking & Security

Rust-Based Edera: Locking Down Container Security Once and For All

LinuxSecurity.com - 12 April 2024 - 14:50
The Rust-based Edera project demonstrates a unique approach to container security that addresses cloud-native computing challenges. Let's examine this new, innovative approach to container security, which could be a game-changer in the industry!
Category: Hacking & Security

Code Keepers: Mastering Non-Human Identity Management

The Hacker News - 12 April 2024 - 13:13
Identities now transcend human boundaries. Within each line of code and every API call lies a non-human identity. These entities act as programmatic access keys, enabling authentication and facilitating interactions among systems and services, which are essential for every API call, database query, or storage account access.
Category: Hacking & Security

On older PCs, Microsoft conspicuously warns about the end of Windows 10 support

Zive.cz - bezpečnost - 12 April 2024 - 12:45
**Microsoft is starting to warn users about the end of Windows 10 support **A full-screen message notes that the PC cannot be upgraded **There is no button to permanently dismiss the information screen
Category: Hacking & Security

USB-C explained: How to get the most from it (and why it keeps on getting better)

Computerworld.com [Hacking News] - 12 April 2024 - 12:00

Now that you’re used to seeing co-workers, family, and strangers at coffee shops, offices, and planes using the oblong USB-C connector, it’s time to see just what this promising standard can do today and tomorrow. As we approach its 10th birthday, the USB-C plug is now part and parcel of just about every new laptop, phone, and tablet made. Even MacBooks, iPads, iPhones, and Chromebooks now have USB-C ports, at least living up to the first part of its full name: Universal Serial Bus.

In other words, the older rectangular USB Type-A plugs we are so used to are slowly going the way of the dinosaur. This evolution is happening faster in some arenas than others. For example, the latest Mac Pro desktop has no fewer than eight USB-C ports for anything from sending video to a display to charging a phone.

The Acer Swift 1’s right side holds two USB-A ports and one USB-C port, a full HDMI port, plus an audio jack. (Image: Melissa Riofrio/IDG)

What is USB-C?

Without a doubt, USB Type-C, commonly referred to as USB-C, is becoming the standard connector for moving data and power to and from a wide variety of computing devices. Its symmetrical design means it can be inserted either way — up or down — eliminating many of the frustrations of earlier USB ports.

This alone makes it a hit for me. No more fumbling with plugs that always seem to be upside down.

Because it is a connector specification and not a data transfer protocol, USB-C has been a constant as the underlying technology for moving data and powering devices has evolved. It’s closely linked to several powerful new technologies, including Thunderbolt and Power Delivery, that have the potential to change how we think about our gear and how we work in the office, on the road, and at home.

What is USB Type-C used for?

Like USB Type-A connectors, Type-C USB ports and cables are used to transfer power and data between devices, from charging a phone to backing up data on an external drive. But USB-C’s support for newer USB protocols and other technologies makes it much more powerful, capable of charging larger devices and delivering up to 8K video to an external display.

USB protocols and what they mean

It’s when we start talking about protocols that things get messy. The five main USB protocols in use today are confusing, to say the least, creating an alphabet soup of standards that could muddle the most technical among us.

Here is a breakdown of the USB specifications, where it’s best to concentrate on the data flow levels:

  • Today, the most popular USB spec is the USB 3.2 Gen 1 protocol that allows a maximum throughput of 5Gbps to travel over a single lane of data. It can use an old-school Type-A rectangular plug or the oblong USB-C connector.
  • The next step up has two alternatives: two lanes of data that each abide by the 5Gbps speed limit (USB 3.2 Gen 1×2), and a single-lane variant that operates at twice the speed (USB 3.2 Gen 2×1). Generally compatible with each other, both yield a peak throughput of 10Gbps.
  • The USB 3.2 Gen 2×2 protocol uses two lanes of double-speed data traffic to top out at 20Gbps.
  • USB4 (no space between “USB” and “4”) is the newest protocol and is based on Intel’s Thunderbolt 3 specification. Within USB4, there are several variants that provide 5, 10, 20 and 40Gbps of peak flow.
What’s in a name? USB specs and speeds

  Spec name          Top speed   Single- or dual-lane flow
  USB 3.2 Gen 1      5Gbps       Single
  USB 3.2 Gen 1×2    10Gbps      Dual
  USB 3.2 Gen 2×1    10Gbps      Single
  USB 3.2 Gen 2×2    20Gbps      Dual
  USB4               40Gbps      Dual
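Those peak rates translate into best-case transfer times. A quick back-of-the-envelope sketch in Python (real-world throughput is always lower because of encoding and protocol overhead, so treat these as floors):

```python
# Back-of-the-envelope: time to move a 100GB file at each spec's peak rate.
SPECS_GBPS = {
    "USB 3.2 Gen 1": 5,
    "USB 3.2 Gen 2x1": 10,
    "USB 3.2 Gen 2x2": 20,
    "USB4": 40,
}

def transfer_seconds(size_gb: float, rate_gbps: float) -> float:
    """Seconds to move size_gb gigabytes at rate_gbps gigabits per second."""
    return size_gb * 8 / rate_gbps  # 8 bits per byte

for spec, rate in SPECS_GBPS.items():
    print(f"{spec}: {transfer_seconds(100, rate):.0f} s")
```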

The final contemplated step up is USB4 v2, which takes data transfer speeds to new heights by using PAM-3 pulse amplitude modulation technology. Derived from 10Gbps Ethernet wired networking, PAM-3 tops out at 80Gbps in symmetric mode and gets to the spec’s top speed of 120Gbps in asymmetric mode. Unfortunately, these speed upgrades are off in the future.

Next up: Thunderbolt 5

Using USB4 v2 as a starting point, the next stage of USB-C’s development will incorporate Thunderbolt 5, which Intel debuted last fall. Under normal circumstances, it can move a maximum of 80Gbps, double the rate of Thunderbolt 4 and USB4. This will help with everything from moving data onto and off flash drives to running backups of company data and multipurpose docking stations.

But if more throughput is needed — such as for 8K video, which can require 50Gbps — it uses a clever technique known as Bandwidth Boost. This pushes its speed limit to 120Gbps when needed. It can also be useful for feeding video to a screen at a refresh rate of up to 544Hz, which might find a home with a company’s CAD designers, traders, or video editors.

It’s another case of hurry up and wait. With Thunderbolt 4 gear just coming to market, expect to see computers with TB5 in 2024 and the first round of accessories in 2025. At the moment, there is no corresponding USB5 spec.

Gear up for new USB-C capabilities

Despite the confusing name-game, older devices continue to work with the newer specs. In other words, that two-year-old USB-C flash storage key will work with your newest laptop, although not always at top speed.

To take full advantage of USB-C today, though, you’ll need to get some new gear. Be careful, because not all USB-C devices on the market support all the latest USB specs. For instance, just about every USB-C flash drive sold today supports the earlier USB 3.2 Gen 1 protocol, and some tablets and phones don’t support Alt Mode video (more on that in a moment). It’s best to read the spec sheet carefully so you know what you’re getting before you buy.

I tried out some newer USB-C accessories to see the latest capabilities for myself. Here’s what to expect.

Docking station

In the here and now, the first USB4 devices flooding the market are docking stations that can make a laptop feel right at home on a desktop, moving data while charging the system. The $290 Plugable TBT4-UDX1 dock is connection central, with 11 ports and the ability to stream up to 96 watts to charge a laptop. It includes four USB 3.2 Gen 2 Type-A ports capable of 10Gbps, two USB-C connections that can push 40Gbps, and a 2.5Gbps networking port. There are also more mundane amenities like an SD card slot, a headphone jack, and HDMI for video.

The Plugable TBT4-UDX1 docking station. (Image: Plugable)

By using a combination of the USB-C and HDMI ports, the UDX1 can drive up to two 4K monitors or a single 8K screen — that is, if you have the right cables and adapters.

Getting it set up on my desk was a snap, because it didn’t require any extra software. I plugged in the dock’s power adapter, connected it with the included Thunderbolt 4 cable to my Acer Swift Edge 16 notebook, and it immediately started charging my system as fast as its included AC adapter. The dock worked smoothly with my keyboard, mouse, and wired Ethernet connection, as well as an Epson PowerLite L260F projector and my Logitech game controller, because 9 to 5 only lasts until 5PM.

Fast data storage

The UDX1 came into its own with the Kingston XS2000 USB 3.2 Gen 2x2 external SSD plugged in and connected to the Acer Swift Edge 16 laptop. Somewhat larger and heavier than the typical flash drive, the XS2000 measures 2.7 x 1.3 x 0.5 inches and weighs 1 ounce. It fits into a pocket but requires a USB-C cable to connect.

The Kingston XS2000 external SSD.

Kingston

The XS2000 read data at 7.90Gbps, as measured by the CrystalDiskMark benchmark software — that’s less than half the spec’s 20Gbps speed limit but a huge increase from the 1.23Gbps that I got using a SanDisk USB 3.1 flash drive. Kingston sells XS2000 drives for $86 (500GB), $140 (1TB), $246 (2TB) and $450 (4TB).
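As a sanity check on numbers like these, it helps to convert decimal gigabit figures into the megabytes per second that benchmark tools such as CrystalDiskMark actually report, and into a fraction of the spec's ceiling. A minimal Python sketch; the drive figures are taken from this story, and the conversion assumes decimal (vendor-style) units:

```python
def gbps_to_mbs(gbps: float) -> float:
    """Convert decimal gigabits per second to megabytes per second."""
    return gbps * 1000 / 8

def fraction_of_spec(measured_gbps: float, spec_gbps: float) -> float:
    """How much of the interface's rated speed a measurement achieves."""
    return measured_gbps / spec_gbps

# Figures from the benchmarks in this story
xs2000_gbps = 7.90   # Kingston XS2000 measured read speed
spec_gbps = 20.0     # USB 3.2 Gen 2x2 ceiling

print(f"{gbps_to_mbs(xs2000_gbps):.0f} MB/s, "
      f"{fraction_of_spec(xs2000_gbps, spec_gbps):.0%} of spec")
```

The same two helpers show the USB4 enclosure's 29.5Gbps landing at roughly three-quarters of its 40Gbps ceiling.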

Unfortunately, USB4 is so new that there weren’t any external drives available for my tests. So, I made one myself. Using the $120 Satechi USB4 NVMe SSD Pro drive enclosure, I plugged in a Crucial P3 Plus 500GB SSD module. It upped the data reading rate to an exceptional 29.5Gbps, about three-quarters of the 40Gbps spec and one of the fastest drives available anywhere. Stay tuned: I’ll show you how to make the drive later in the story.

Power Delivery

While a USB 2.0 port could deliver just 2.5 watts of power, about enough to slowly charge a phone, USB 3.1 upped this to about 4.5 watts, and the initial uses of USB-C topped out at 15 watts of power. Today, a single USB-C cable can handle both video and power using USB’s Power Delivery spec.

Happily, USB4 increases this output to 100 watts for the base protocol and as much as 240 watts with the Extended Power Range specification. For practical reasons, most devices limit this to between 96 and 100 watts. Still, this opens up a brave new world of laptop-powered projectors based on USB-C.
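To see what those wattage tiers mean in practice, here is a rough back-of-the-envelope sketch of charge times; the 60 Wh battery size and 90% charging efficiency are illustrative assumptions, not figures from any particular laptop:

```python
def charge_hours(battery_wh: float, charger_watts: float,
                 efficiency: float = 0.9) -> float:
    """Rough time to fill a battery from empty at a given charger wattage,
    allowing for conversion losses."""
    return battery_wh / (charger_watts * efficiency)

# Hypothetical 60 Wh laptop battery at the Power Delivery levels above
for watts in (15, 100, 240):
    print(f"{watts} W -> about {charge_hours(60, watts):.1f} hours")
```

The jump from early USB-C's 15 watts to Power Delivery's 100-plus watts is what turns a data cable into a practical laptop charger.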

Today, though, Power Delivery is being used mostly for chargers, external battery packs, and small displays such as Ricoh’s Portable Monitor 150. Built around a 15.6-inch OLED screen, the $575 monitor not only shows 1920 x 1080 resolution but adds the convenience of 10-point touch control and can be powered by a laptop or phone via the same USB-C cable that delivers video. It weighs just 1.2 pounds, has a fold-out stand, and comes with a slipcase; Ricoh sells an $80 stylus as well. 

The Ricoh Portable Monitor 150.

Brian Nadel / IDG

It displayed everything from web pages and emails to memos and Word documents from a Windows 11 notebook, but it really came into its own with a Samsung Galaxy Note 20 phone, allowing me to leave the laptop behind for a day trip. When it was time to present, I plugged the PM 150 into a USB adapter and to my phone with USB-C cables and pointed the display at the small group. The monitor mirrored my phone’s content, allowing me to give the full presentation while maintaining eye contact with the audience. Later, we huddled over the touchscreen to modify a design using our fingers.

Alt Mode displays

The newest USB-C cables are capable of delivering video using USB-C’s Alternate Mode, or “Alt Mode.” At the moment, a Thunderbolt 4/USB4 cable can push 8K video or supply several 4K displays. This breakthrough can neaten a desk by getting rid of at least one cable.

For instance, Samsung’s 43-inch M70B display can use a USB-C cable to not only send video from a laptop to the screen but also send power the other way to charge the system. The $430 model I looked at has a resolution of 3840 x 2160 pixels and was able to charge my Acer Swift Edge 16 and my Google Pixel 7 phone.

Samsung’s 43-inch M70B display uses USB-C to receive video from and charge my laptop.

Brian Nadel / IDG

Cables

To get the most out of the new specs and the gear, you’ll need the right cables. Happily, after a proliferation of cable types, there’s a convergence going on. Thunderbolt 4 cables will get the most out of USB4 devices, as well as devices built to all the specs that came before. In fact, it’s so much of a no-brainer that all I buy these days are TB4 cables. They work well for anything from moving data off my phone to feeding video to a display or backing up data to a drive.

The reason they work with all specs is that each of these cables has an identification chip inside that senses the hardware’s capability and sets the speed and power levels accordingly. Called an e-marker, the integrated circuit sits at both ends of the cable so that the USB device can query the cable’s capabilities and adjust the top speed to suit it. Older USB-C cables will generally still work, just not always at top speed, and some may not work with the newest equipment.

Most of these cables are available in up to 2-meter lengths (about 6.6 feet), which is more than twice the standard 0.8-meter (31-inch) length of earlier USB-C cables. That said, there are also one-meter cables from Satechi and Plugable for $30 and $29. By contrast, Apple pushes Thunderbolt 4 cabling to 3 meters (9.8 feet), but its Thunderbolt 4 Pro cable is pricey at $159.

Apple Thunderbolt 4 Pro Cable.

Apple

One of my favorite USB-C cables is the Baseus Free2Draw Mini Retractable USB-C Cable 100W. Inside the Free2Draw’s small circular cable winder is a 3.3-foot USB 2.0 cable that can be spooled out at 1.1-, 1.9-, 2.7- or 3.3-foot lengths without getting tangled. Capable of delivering 100 watts to charge a phone, tablet, or laptop, it tops out at only 480Mbps of data.

Making USB-C work for you

To get the most out of these new specs, I’ve had to make some changes and buy some accessories. My older USB flash drives, keyboards and mice still work, though, even if they can’t take advantage of the new speeds.

Here are some tools, tips, and DIY projects that will help make USB-C work for you.

Make a USB-C travel kit

The good news is that USB-C ports can be used with older USB 2.0, 3.0, and 3.1 accessories. The bad news is that you’ll need a drawer full of adapters and cables. So far, I haven’t seen anything close to a complete ready-made kit. So, I’ve made my own USB-C survival kit with key cables and adapters that fits into an old Dopp bag.

Here’s what’s inside:

  • A small male USB-C to female USB Type-A and a male USB Type A to female USB-C adapter.
  • Short and long adapter cables with a USB Type-A male plug on one end and a male USB-C on the other.
  • A USB-C AC adapter that’s capable of delivering 30 watts.
  • A Thunderbolt 4 cable with USB-C male plugs at each end for using accessories.
  • A USB-C Ethernet adapter and short Ethernet jumper cable for when a wired connection is available.
  • One HDMI cable.
  • A small microfiber cloth for screen cleaning.

The center of attention is Satechi’s Thunderbolt 4 Slim Hub, which squeezes lots of connections into a small and light device. It delivers three 40Gbps Thunderbolt 4/USB4 connections as well as a 10Gbps USB 3.2 Gen 2 port, while supplying up to 15 watts of power to charge my phone. It can run an 8K screen or a pair of 4K ones but needs a fairly large and heavy 20-volt AC adapter that makes it a tight fit.

The essential travel companion: my homemade USB-C adapter kit.

Brian Nadel / IDG

There’s one additional adapter I’ve found essential on the road because, sadly, many phones and tablets now lack a headphone jack. I have USB-C earbuds but usually can’t find them when I need them. When that happens, I use a headphone jack adapter so I can use any inexpensive wired headphones with my Pixel 7 phone. They cost about $10 each.

Make an inexpensive (and fast) homemade SSD drive

USB4 may yield fast data speeds, but flash drives that support the spec have been slow to market. Using Satechi’s USB4 NVMe SSD Pro Enclosure, I made my own. It tops out at 40Gbps and is compatible with all the older USB-C and Thunderbolt specs, although at 4.4 x 2.7 x 1.0 inches and 7 ounces, it is larger and heavier than the typical flash drive.

The Satechi USB4 NVME SSD Pro Enclosure.

Satechi

The Satechi enclosure accepts an M-key or B+M-key M.2 NVMe SSD of up to 4TB. In addition to the $120 enclosure, I used a 500GB Crucial P3 Plus module that cost me $30.

The best part is that it doesn’t require tools to put together.

I started by sliding the enclosure’s lock open, freeing the lid.

Brian Nadel / IDG

Next, I slid the NVMe module into the connector and locked it in place with the soft silicone plug; it allows the use of three different-sized cards.

Brian Nadel / IDG

I finished by applying the included thermal pad and replacing the lid.

Brian Nadel / IDG

I snapped the case shut and was done.

Brian Nadel / IDG

It was worth the effort, because after I plugged it into the Plugable dock, it was able to move data at nearly 30Gbps. Not bad for a few minutes of work.

Be a power traveler

I travel a lot for work and pleasure, and it always seems as though my phone’s battery is at 20%. My latest trick to keep from being cut off from the world is the $30 Anker Nano Power Bank. It’s small, only adds 3.5 ounces to the weight of the phone, and has a unique swiveling USB-C connector that has worked with every phone I’ve tried. There’s also a port on the side for charging it and as an alternate way to charge a device.

The Anker Nano Power Bank can juice up my phone anywhere.

Brian Nadel / IDG

Available in five colors, the Nano Power Bank’s 5,000 milliamp-hour battery can put out 18 watts, about what the typical AC adapter delivers. It can provide hours of extra juice, and a five-dot LED gauge shows how much charge is left.
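For a rough sense of what a pack that size holds, milliamp-hours convert to watt-hours via the cells' nominal voltage. The 3.7 V nominal figure, the 80% conversion efficiency, and the phone battery size below are all illustrative assumptions:

```python
def pack_wh(mah: float, nominal_volts: float = 3.7) -> float:
    """Energy stored in a battery pack, in watt-hours."""
    return mah / 1000 * nominal_volts

def phone_charges(pack_mah: float, phone_mah: float,
                  efficiency: float = 0.8) -> float:
    """Approximate number of full phone charges, allowing for losses."""
    return pack_mah * efficiency / phone_mah

print(f"{pack_wh(5000):.1f} Wh stored")      # 5,000 mAh at 3.7 V nominal
print(f"{phone_charges(5000, 4355):.1f} charges for a ~4,355 mAh phone")
```

In other words, a 5,000 mAh pack is good for roughly one full top-up of a modern phone, which matches the "hours of extra juice" experience.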

There’s one more USB-C power trick I use every day with my Android work tablet that makes connecting and disconnecting much easier. The iSkey Magnetic USB C Adapter is a knock-off of Apple’s MagSafe design, where one part plugs into the tablet and the other into a USB cable. Inside, these two parts have powerful magnets that snap together to make a physical and electrical connection when they’re within a couple inches of each other. Later, when it’s time to move around the office, I pull the two apart. The best part is that it costs about $20.

iSkey’s Magnetic USB C Adapter imitates Apple’s MagSafe connector.

Brian Nadel/IDG

Troubleshooting USB-C

The fact that there isn’t much to adjust or configure with USB (C or otherwise) is a testament to its technological success. New or old, in almost all cases, it just works. That is, until it doesn’t. At that point, there are several angles of attack for troubleshooting.

My first step is to take a look at what the cable is doing, or not. For instance, I was having problems with my MacBook Air not reliably charging. To see what was going on with the USB-C charging, I inserted Plugable’s USBC-METER3-1MF diagnostic cable ($20) between the AC adapter and the notebook.

The cable meets the USB 3.1 Gen 2 10Gbps spec and can handle up to 240 watts of power; its built-in OLED screen shows how much electricity is flowing. In my case, it was 1 or 2 watts, not the 20 watts it should be. After jiggling the cord to see the power flow jump to a more normal level, I concluded that the charging cable had an intermittent short. Replacing it did the trick and I haven’t had any problems since.

The Plugable USB-C cable with multimeter tester.

Brian Nadel / IDG
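That judgment call, deciding from a handful of meter readings whether a link is healthy, consistently starved, or intermittent, can be sketched as a tiny helper. The thresholds below are illustrative assumptions, not part of any Plugable tool:

```python
def diagnose(samples_watts, expected_watts, low_frac=0.25):
    """Classify inline power-meter readings.

    Returns 'ok' when all readings are near expectations, 'low' when they
    are consistently under expectations, or 'intermittent' when they swing
    between low and normal, which suggests a bad cable or connector.
    """
    low = [w < expected_watts * low_frac for w in samples_watts]
    if not any(low):
        return "ok"
    if all(low):
        return "low"
    return "intermittent"

# Readings like the ones described above: mostly 1-2 W, jumping to ~20 W
print(diagnose([1, 2, 1, 20, 2], expected_watts=20))  # intermittent
```

An intermittent result is the classic signature of the kind of shorted charging cable described above, and replacement is the fix.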

My second step is to use Windows’ built-in USB tools. In addition to notifying you of a problem, Windows 10 and 11 offer a way to bring unresponsive USB devices back to life. If you’re having USB problems on a Windows 10 or 11 device, try these tips:

  1. Go to the Device Manager by right-clicking This PC in File Explorer and then clicking Properties. Under “Related settings,” click Device Manager near the bottom to bring up a list of devices. In the Device Manager, double-click Universal Serial Bus controllers in the list to reveal the actual controller. It should read something like “USB 3.0 eXtensible Host Controller.” Give that a right-click, then choose Properties. In the Power Management tab, uncheck the box next to Allow the computer to turn off this device to save power to keep the port powered up. But be warned: your battery might drain faster because of this change.

Keep the USB port powered up by unchecking the box.

Brian Nadel / IDG

  2. While there, updating the USB drivers couldn’t hurt. You can do this by choosing the USB device that’s not working, right-clicking, and choosing Update driver from the drop-down list.
  3. Finally, check the specs of the computer, device, and cable to make sure they all match.

With Thunderbolt as the underlying transfer technology for USB4, the Thunderbolt Control Center can provide insight. The app, which generally appears in the Windows Start menu apps list (and can also be downloaded from the Microsoft Store), interrogates the system’s Thunderbolt controller chip to maximize throughput and shows what Thunderbolt devices are online. At the bottom are details on whether it’s connected and how it’s powered. Click on the About section on the left to see a deeper level of detail. This includes the Thunderbolt version the controller supports.

Get details about connected Thunderbolt devices via the Thunderbolt Control Center.

Brian Nadel / IDG

Finally, when all else fails, try cleaning the physical USB-C port, because dust, dirt, and debris might be preventing an electrical connection. Try using compressed air to blow out the loose stuff and then gently clean the port with a soft plastic toothpick. I use the Oral-B Expert Interdental Brushes, which are the perfect size for extricating everything from pet hair to pocket lint. At $3 for 20, you can’t go wrong.

Give a malfunctioning USB-C port a good spring (or winter) cleaning.

Brian Nadel / IDG

You’d be surprised at what comes out. Hopefully you now have a clean machine, ready for work.

This article was originally published in August 2014 and most recently updated in April 2024.

Computers, Computers and Peripherals, Mobile, Small and Medium Business, Smartphones
Kategorie: Hacking & Security

The desktop processor market is suddenly hot again

Computerworld.com [Hacking News] - 12 Duben, 2024 - 12:00

The desktop/laptop market has been pretty quiet for several years. Windows carved out its dominant space, and despite repeated claims that it would happen, Linux never really emerged as a challenger on the desktop. The Apple Mac proved to be a solid if pricier alternative, popular in certain markets and industries and seeing a surge of interest in recent years with the introduction of powerful M-series Apple processors, which boosted Apple’s market share to 16% by the end of 2023. Chromebooks found their niche as well, primarily in education.

Nevertheless, Windows still claims around 72% of the desktop OS market share worldwide, according to Statista.

For decades, Windows PCs have been powered by processors built on Intel’s x86 architecture, giving rise to the term “Wintel” to describe Windows machines running on x86 chips, either from Intel or its sole x86 rival AMD. According to Mercury Research, which follows the CPU market, Intel has about 80% of the desktop and notebook x86 market, while AMD claims the remaining 20%.

Not that others haven’t tried to break the Wintel stranglehold. For example, Qualcomm, a leading manufacturer of mobile chips, entered the desktop fray back in 2016, partnering with Microsoft to run Windows on Qualcomm Snapdragon chips based on the Arm processor architecture. But those chips required an x86 emulator to run traditional Windows apps, resulting in poor performance.

Performance has improved over time, but so far, at least, Arm-based PCs have not posed a serious threat to Wintel dominance. The biggest challenge Wintel has faced from Arm so far is from the new Macs powered by Apple’s M-series custom silicon.

A new battle emerges

And yet, a battle is about to break out on both the hardware and software sides, driven by the generative AI boom. For starters, Qualcomm is once again fixing its sights on the PC market with a push to begin later this year, according to the company’s president and CEO Cristiano Amon, who discussed the initiative on the most recent earnings call with Wall Street analysts.

Amon disclosed that Windows 11 laptops with Qualcomm’s Arm-based Snapdragon X Elite System-on-a-Chip (SoC) will debut in mid-2024. The processor was launched last year and promises long battery life while providing enough CPU horsepower to run AI workloads at competitive speeds with x86 and Apple custom silicon architectures. “Products with this chipset [are] tied with the next version of Microsoft Windows that has a lot of the Windows AI capabilities,” Amon told analysts.

“Qualcomm is looking to expand into other markets besides mobile, because frankly, mobile is not growing at the same rate that it was years ago,” said Jack Gold, president of J. Gold Associates consultancy. “So they’re looking for peripheral markets to increase their market share.”

For its part, Microsoft seems to be hedging its bets, talking up the Snapdragon X Elite chips but also encouraging other chip makers to get into the Windows on Arm game. Both AMD and Nvidia, the market leader in the graphics processing units (GPUs) that power most AI workloads today, are said to be developing Arm-based CPUs for Windows PCs, according to Reuters.

One way or another, 2024 is shaping up to be a big year for Microsoft. It is expected to ship a significant update to Windows 11, possibly renaming it Windows 12, in the second half of the year. The new OS is expected to greatly expand on its AI processing capabilities. What’s more, Microsoft has ported Windows to native Arm platforms. More details are likely to be revealed at a special media event next month at which CEO Satya Nadella will outline the company’s “AI vision across hardware and software.”

Intel, of course, is fiercely defending its territory. At its Vision 2024 conference earlier this week, the company announced that the second generation of its Core Ultra processors meant to power AI workloads on Windows PCs will arrive later this year.

“Intel’s on a mission to bring AI everywhere,” said CEO Pat Gelsinger at the keynote. “Before competitors shipped their first [AI] chips, we’re launching our second.”

On top of all this comes word that PC maker Lenovo is looking to develop its own AI-oriented operating system, to be bundled with its hardware. Details are sketchy, including whether or not the new OS would be based on Linux. Lenovo declined to comment on the rumors.

Seeds of change?

Disrupting an established market is difficult, time-consuming, and expensive. A company wouldn’t make the move to challenge a dominant player unless they smelled blood in the water — but the Wintel (including AMD) partnership is still rock solid and in no danger of splintering, Gold said.

There is, however, an opening for Arm-based systems, particularly from Qualcomm, in machines for users who want maximum battery life, Gold noted. “But it’s still going to be a relatively small portion of the market going forward. I can’t give you a number, I don’t know what it’s going to be, but my guess would be well under 10%,” he added.

Mika Kitagawa, senior analyst with Gartner, notes that Qualcomm has been in the PC market for some time, with little to show for it. “The question is, will this new chip be the game changer in the market?” she said. “They have not been really successful so far, but we think that is going to change going forward.”

Her optimism stems from seeing benchmarks for the Snapdragon processor that showed great performance when compared to the best from Intel and Apple. “It is that great performance that will make Qualcomm get into the PC market in a way they couldn’t do in the past,” she said.

Both Gold and Kitagawa point out that Qualcomm is targeting the consumer market and not the enterprise. Uprooting x86-based PCs from the enterprise will be a significant challenge for Qualcomm, said Gold.

“The number one issue is that any machine [an organization] buys has to be able to run all their software, all their apps, and especially their legacy apps. And in the past, Arm-based PCs had issues with running legacy apps, because they’re not running them natively. They’re running them through translators, basically, so that’s a challenge from a performance perspective,” he said.

Kitagawa’s experience a few years back with x86 emulation “was horrible. I couldn’t really use it. But I think things are really improved,” she said.

Kitagawa declined to speculate on what Lenovo might be thinking with a proposed AI OS strategy, but Gold thinks it might be a part of a strategy for the company’s native China.

“Regular enterprises and users outside of China are unlikely to adopt any one-off, proprietary AI OS. But the Chinese government could mandate it in China for some uses. It’s hard to see Lenovo doing something in the short term that would compete with Microsoft or Linux in the general marketplace,” he said.

CPUs and Processors, Generative AI, Intel, Microsoft, Qualcomm, Windows
Kategorie: Hacking & Security

How Intel’s ‘AI everywhere’ strategy could challenge Nvidia’s dominance

Computerworld.com [Hacking News] - 12 Duben, 2024 - 12:00

At its annual Intel Vision conference, CEO Pat Gelsinger laid out an ambitious roadmap that includes generative artificial intelligence (genAI) at every turn.

Intel’s hardware strategy is centered around its new Gaudi 3 GPU, which was purpose built for training and running massive large language models (LLMs) that underpin genAI in data centers. Intel’s also taking aim with its new line of Xeon 6 processors — some of which will have onboard neural processing units (NPUs or “AI accelerators”) for use in workstations, PCs and edge devices. Intel also claims its Xeon 6 processors will be good enough to run smaller, more customized LLMs, which are expected to grow in adoption.

Intel’s pitch: Its chips will cost less and use a friendlier ecosystem than Nvidia’s.

Gelsinger’s keynote speech called out Nvidia’s popular H100 GPU, saying the Gaudi 3 AI accelerator delivers, on average, 50% better inference and 40% better power efficiency “at a fraction of the cost.” Intel also claims Gaudi 3 outperforms the H100 at training different types of LLMs, and can do so up to 50% faster.

The server and storage infrastructure needed for training extremely large LLMs will take up an increasing portion of the AI infrastructure market due to the LLMs’ insatiable hunger for compute and data, according to IDC Research. IDC projects that the worldwide AI hardware market (server and storage), including for running generative AI, will grow from $18.8 billion in 2021 to $41.8 billion in 2026, representing close to 20% of the total server and storage infrastructure market.

Along with its rapidly growing use in data center servers, genAI is expected to drive on-device AI chipsets for PCs and other mobile devices to more than 1.8 billion units by 2030. That’s because laptops, smartphones, and other form factors will increasingly ship with on-device AI capabilities, according to ABI Research. In layman’s terms, Intel wants its Xeon chips (and NPUs) to power those desktop, mobile and edge devices. Intel’s next generation Core Ultra processor — Lunar Lake — is expected to launch later this year, and it will have more than 100 platform tera operations per second (TOPS) and more than 45 NPU TOPS aimed at a new generation of PCs enabled for genAI use.

While NPUs have been around for decades for machine-learning systems, the emergence of OpenAI’s ChatGPT in November 2022 started an arms race among chipmakers to supply the fastest and most capable accelerators to handle rapid genAI adoption.

Intel CEO Pat Gelsinger describes the company’s “AI Everywhere” strategy at its Vision 2024 conference this week. 

Intel Corp.

Nvidia started with a leg up on competitors. Originally designed for computer games, Nvidia’s AI chips — graphics processing units (GPUs) — are its own form of accelerators, but they’re costly compared to standard CPUs. Because its GPUs positioned Nvidia to take advantage of the genAI gold rush, the company quickly became the third-most valuable company in the US. Only Microsoft and Apple surpass it in market valuation.

Industry analysts agree that Intel’s competitive plan is solid, but it has a steep hill to climb to catch Nvidia, a fabless chipmaker that boasts about 90% of the data center AI GPU market and 80% of the entire AI chip market.

Over time, more than half of Nvidia’s data center business will come from AI services run in the cloud, according to Raj Joshi, senior vice president for Moody’s Investors Service. “The lesson has not been lost on cloud providers such as Google and Amazon, each of which have their own GPUs to support AI-centric workloads,” he said.

“Essentially, there’s only one player that’s providing Nvidia and AMD GPUs, and that’s TSMC in Taiwan, which is the leading developer of semiconductors today, both in terms of its technology and its market share,” Joshi said.

Intel is not fabless. It has long dominated the design and manufacture of high-performance CPUs, though recent challenges due to genAI reflect fundamental changes in the computing landscape.

Ironically, Intel’s Gaudi 3 chip is manufactured by TSMC using its 5 nanometer (nm) process technology versus the previous 7nm process.

GenAI in data centers today, edge tomorrow

Data centers will continue to deploy CPUs in large numbers to support Internet services and cloud computing, but they are increasingly deploying GPUs to support AI — and Intel has struggled to design competitive GPUs, according to Benjamin Lee, a professor at the University of Pennsylvania’s School of Engineering and Applied Science.

Intel’s Gaudi 3 GPU and Xeon 6 CPU come at a lower cost with lower power needs than Nvidia’s H100 and H200 GPUs, according to Forrester Research Senior Analyst Alvin Nguyen. A cheaper, more efficient chip will help mitigate the insatiable power demands of genAI tools while still being “performant,” he said.

Accelerator microprocessors serve two primary purposes for genAI: training and inference. Chips that handle AI training use vast amounts of data to train neural network algorithms that are then expected to make accurate predictions, such as the next word or phrase in a sentence or the next image. So, chips are also required to speedily infer what the answer to a prompt (query) will be.

But LLMs must be trained before they can begin to infer a useful answer to a query. The most popular LLMs provide answers based on massive data sets ingested from the Internet, but they can sometimes be inaccurate or downright bizarre, as is the case with genAI hallucinations, when the tech goes right off the rails.

Gartner Research Vice President Analyst Alan Priestley said while today’s GPUs primarily support the compute-intensive training of massive LLMs, in the future businesses will want smaller genAI LLMs based on proprietary datasets — not information from an ocean outside of a company.

Nvidia’s pricing for now is based on a high-performance product that does an excellent job handling the intensive needs of training up an LLM, Priestley said. And, Nvidia can charge what it wants for the product, but that means it’s relatively easy for rivals to undercut it in the market.

RAG to the rescue

To that end, Intel’s Gelsinger called out Intel’s Xeon 6 processors, which can run retrieval augmented generation processes, or “RAG” for short. RAG optimizes the output of an LLM by referencing (accessing) an external knowledge base outside of the massive online data sets on which genAI LLMs are traditionally trained. Using RAG software, an LLM could access a specific organization’s databases or document sets in real time.

For example, a RAG-enabled LLM can provide healthcare system patients with medication advice, appointment scheduling, prescription refills and help in finding physicians and hospital services. RAG can also be used to ingest customer records in support of more accurate and contextually appropriate genAI-powered chatbot responses. RAG also continuously searches for and includes updates from those external sources, meaning the information used is current.
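The RAG flow described above can be reduced to a toy sketch: retrieve the most relevant private document, then prepend it as context to the prompt sent to the model. Real systems use vector embeddings and an actual LLM call; the keyword-overlap retriever and the sample documents here are illustrative stand-ins:

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.
    Production RAG uses vector embeddings, but the flow is the same."""
    q = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model answers from current,
    organization-specific data instead of only its training set."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical healthcare-style records, echoing the examples above
docs = [
    "Refills for prescription 1234 are approved through June.",
    "Dr. Lee has openings on Tuesday afternoons.",
]
print(build_prompt("When can I see Dr. Lee?", docs))
```

Because retrieval happens at query time, updating the document store immediately updates what the model can cite, which is why RAG output stays current.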

The push for RAG and more narrowly tailored LLMs ties into Intel’s confidential computing and Trust Domain security efforts, which are aimed at enabling enterprises to utilize their data while also protecting it.

“And for those models, Intel’s story is that you can run them on a much smaller system — a Xeon processor. Or you could run those models on a processor augmented by an NPU,” Priestley said. “Either way, you know you can do it without investing in billions of dollars in huge arrays of hardware infrastructure.”

“Gaudi 3, Granite Rapids or Sierra Forest Xeon processors can run large language models for the type of things that a business will need,” Priestley said.

Intel is also betting on its use of industry standard Ethernet, pitting it against Nvidia’s reliance on the more proprietary InfiniBand high-performance networking interconnect.

Ethernet or InfiniBand?

During a media call this week, Intel’s vice president of Xeon software, Das Kamhout, said he expects the Gaudi 3 chips to be “highly competitive” on pricing, the company’s open standards, and because of its integrated on-chip network, which uses data center friendly Ethernet. The Gaudi 3 has 24 Ethernet ports, which it uses to communicate between other Gaudi chips, and then to communicate between servers.

In contrast, Nvidia uses InfiniBand for networking and a proprietary software platform called Compute Unified Device Architecture (CUDA); the programming model provides an API that lets developers leverage GPU resources without requiring specialized knowledge of GPU hardware. The CUDA platform has become the industry standard for genAI accelerated computing and only works with Nvidia hardware.

Instead of a proprietary platform, Intel is working on creating an open Ethernet networking model for genAI fabrics, and introduced an array of AI-optimized Ethernet solutions at its Vision conference. The company is working through the Ultra Ethernet Consortium (UEC) to design large scale-up and scale-out AI fabrics.

“Increasingly, AI developers…want to get away from using CUDA, which makes the models a lot more transportable,” Gartner’s Priestley said.

A new chip arms race

Neither Intel nor Nvidia has been able to keep up with demand caused by a firestorm of genAI deployments. Nvidia’s GPUs were already popular, which helped the company’s share price surge by almost 450% since January 2023. And it continues to push ahead: at its GTC AI Conference last month, Nvidia unveiled the successor to its H100, the Blackwell B200, which delivers up to 20 petaflops of compute power.

Meanwhile, Intel at its Vision conference called out its sixth generation of Xeon processors, which includes the Sierra Forest, the first “E-Core” Xeon 6 processor that will be delivered to customers with 144 cores per socket, “demonstrating enhanced efficiency,” according to IDC Research Vice President Peter Rutten. Intel claims it has received positive feedback from cloud service providers who’ve tested the Sierra Forest chip.

Intel’s newest line of Xeon 6 processors is being targeted for use in data center, cloud and edge devices, but those chips will handle small to mid-sized LLMs.

Intel also plans to release its Granite Rapids processor in the second quarter of the year. “The product, which is being built on Intel 3nm process, shares the same base architecture as that of Sierra Forest, enabling easy portability in addition to the increased core and performance per watt and better memory speed,” Rutten wrote in a report. Intel claims the Granite Rapids processor can run Llama-2 models with up to 70 billion parameters.

Intel’s next-gen Xeon 6 and Core Ultra processors will be key to the company’s ability to provide AI solutions across a variety of use cases, including training, tuning, and inference, in a variety of locations (i.e., end user, edge, and data center), according to Forrester’s Nguyen. But the Xeon and Core Ultra processors are aimed at small to mid-sized LLMs. Intel’s new Gaudi 3 processor is purpose-built for genAI use and will be targeted at LLMs with 176 billion parameters or more, according to an Intel spokesperson.

“The continued AI [chip] supply chain shortages means Intel products will be in demand, guaranteeing work for both Intel products and Intel foundry,” Nguyen said. “Intel’s stated willingness to have other companies use their foundry services and share intellectual property — licensing technology they develop — means their reach may grow” into markets they do not currently address, such as mobile.

CPUs and Processors, Generative AI, Intel, Vendors and Providers
Kategorie: Hacking & Security

Iranian MuddyWater Hackers Adopt New C2 Tool 'DarkBeatC2' in Latest Campaign

The Hacker News - 12 Duben, 2024 - 11:49
The Iranian threat actor known as MuddyWater has been attributed to a new command-and-control (C2) infrastructure called DarkBeatC2, becoming the latest such tool in its arsenal after SimpleHarm, MuddyC3, PhonyC2, and MuddyC2Go. "While occasionally switching to a new remote administration tool or changing their C2 framework, MuddyWater’s methods remain constant,"
Kategorie: Hacking & Security

5 advanced tricks for Google’s Circle to Search on Android

Computerworld.com [Hacking News] - 12 Duben, 2024 - 11:45

One of my favorite Android features right now is something that’s simultaneously new and familiar.

It’s Circle to Search — a clever concept that came out for Google’s Pixel 8 and Pixel 8 Pro phones along with the Galaxy S24 earlier this year and is now in the midst of rolling out to even more Android devices.

Circle to Search is brilliant in both its power and its simplicity: On any device where it’s available, you just press and hold your finger to the bottom-center of the screen to summon it and search for anything you see on your screen at that moment.

The “Circle” part comes into play because after activating the system, you use your finger to circle the specific area of your screen you want to explore — be it an image you want to gain extra context around, a graphic with typically unselectable text that you want to copy, or a word or phrase you want to define or research further.

Google’s Circle to Search system in action on Android.

JR Raphael, IDG

It’s almost exactly like the powers Google gave us, and then soon took away, with a feature called Google Now on Tap way back in 2015. The technology behind the system has grown more advanced in the time since Now on Tap’s debut and subsequent demise, but the core concept is shockingly similar.

And now more than ever, the system is packed with productivity-pushing potential. That’s especially true if you know about some impressive yet completely invisible tricks within it.

[Love learning little-known tech tricks? Check out my free Android Intelligence newsletter and get three new things to try in your inbox every Friday!]

Lemme show ya some of the best Circle to Search magic I’ve uncovered over these past several weeks — and if you’re using a phone that doesn’t have Circle to Search available yet, don’t despair: I’ve got a crafty workaround that’ll let you experience much of the same goodness on any Android device, even if Circle to Search itself isn’t present.

Android Circle to Search trick #1: Zippity zooming

Up first, ever find it tricky to circle or highlight small-sized text on your screen after activating Circle to Search?

Take note: Once the Circle to Search system is present, you can zoom in or out of the frozen area beneath it by pinching two fingers apart or together on the screen.

Zoom-a-zoom-zoom zoomin’, Circle to Search style.

JR Raphael, IDG

Good to know, right?!

Android Circle to Search trick #2: Bar bumpin’

The telltale sign of Circle to Search being active is the Google search bar at the bottom of the screen. But what if the area you want to circle and search is beneath that bar and impossible to access?

You’d never know it, but that Circle to Search bar is actually completely fluid and moveable. Just tap your finger onto it and swipe or flick upward to send it up to the top of the screen instead.

The Circle to Search bar can shift around the screen as needed.

JR Raphael, IDG

Whee!

Android Circle to Search trick #3: Easy adjusting

Here’s a neat one: If you ever find yourself wanting to shift the focus of Circle to Search after activating it and drawing your initial circle, you don’t have to close out your current session and start all over again.

Instead, just tap your finger anywhere on the screen to select another area — or use your finger to draw another circle. It’ll work, and it’ll instantly replace your original focus with whatever new one you select.

It’s simple to change your selection once Circle to Search is active.

JR Raphael, IDG

And speaking of after-the-fact adjustments…

Android Circle to Search trick #4: Fast follow-ups

The next time Circle to Search shows you info around something on your screen and you want to dive even deeper into that same subject, remember this: You can ask follow-up questions related to your selection to seek out even more specifics.

This trick works when you’ve selected a box-outlined area of the screen with Circle to Search — not just highlighted text. If you’ve highlighted text, you’ll need to tap on an open area of the screen without words on it to summon the box tool and then drag it over the appropriate area first.

Once you have an area selected with a box, though, you can simply tap the Google search bar in the panel at the bottom of the screen or tap the microphone icon within the bar to ask a conversational question about whatever Circle to Search is showing you.

See?

Asking a follow-up question in Circle to Search on Android.

JR Raphael, IDG

And finally…

Android Circle to Search trick #5: On-demand translation

Translating languages on Android has always been relatively easy to do, but it gets even faster with Circle to Search in the mix.

Just fire up Circle to Search while viewing the words you want to translate. Now, next to the search bar at the bottom of the screen, see that circular icon — the one with an “A” inside of it?

The Circle to Search translation button, hiding in plain sight.

JR Raphael, IDG

Tap that. And in the blink of an eye, your phone will pop up a prompt asking what languages you want to use for the translation.

Circle to Search translation lets you select your languages.

JR Raphael, IDG

Select what you want, and bam: Before you can even utter the words “bonjour, pamplemousse,” you’ll have your translation in front of your purty peepers and ready to be read.

A completed translation, by Circle to Search. Facile, non?

JR Raphael, IDG

Pas mal, pamplemousse. Pas mal du tout.

Get even more Googley knowledge with my free Android Intelligence newsletter — three things to know and three things to try every Friday!

Android, Google, Google Search, Mobile
Kategorie: Hacking & Security

Zero-Day Alert: Critical Palo Alto Networks PAN-OS Flaw Under Active Attack

The Hacker News - 12 Duben, 2024 - 10:56
Palo Alto Networks is warning that a critical flaw impacting PAN-OS software used in its GlobalProtect gateways is being actively exploited in the wild. Tracked as CVE-2024-3400, the issue has a CVSS score of 10.0, indicating maximum severity. "A command injection vulnerability in the GlobalProtect feature of Palo Alto Networks PAN-OS software for specific PAN-OS versions and distinct
Kategorie: Hacking & Security

XZ backdoor story – Initial analysis

Kaspersky Securelist - 12 Duben, 2024 - 10:00

On March 29, 2024, a single message on the Openwall OSS-security mailing list marked an important discovery for the information security, open source and Linux communities: the discovery of a malicious backdoor in XZ. XZ is a compression utility integrated into many popular distributions of Linux.

The particular danger of the backdoored library lies in its use by the OpenSSH server process sshd. On several systemd-based distributions, including Ubuntu, Debian and RedHat/Fedora Linux, OpenSSH is patched to use systemd features, and as a result has a dependency on this library (note that Arch Linux and Gentoo are unaffected). The ultimate goal of the attackers was most likely to introduce a remote code execution capability to sshd that no one else could use.

Unlike other supply chain attacks we have seen in Node.js, PyPI, FDroid, and the Linux Kernel, which mostly consisted of atomic malicious patches, fake packages and typosquatted package names, this incident was a multi-stage operation that almost succeeded in compromising SSH servers on a global scale.

The backdoor in the liblzma library was introduced at two levels. The source code of the build infrastructure that generated the final packages was slightly modified (by introducing an additional file build-to-host.m4) to extract the next stage script that was hidden in a test case file (bad-3-corrupt_lzma2.xz). This script in turn extracted a malicious binary component from another test case file (good-large_compressed.lzma) that was linked with the legitimate library during the compilation process to be shipped to Linux repositories. Major vendors in turn shipped the malicious component in beta and experimental builds. The compromise of XZ Utils is assigned CVE-2024-3094 with the maximum severity score of 10.

The timeline of events

2024.01.19 XZ website moved to GitHub pages by a new maintainer (jiaT75)
2024.02.15 “build-to-host.m4” is added to .gitignore
2024.02.23 two “test files” that contained the stages of the malicious script are introduced
2024.02.24 XZ 5.6.0 is released
2024.02.26 commit in CMakeLists.txt that sabotages the Landlock security feature
2024.03.04 the backdoor leads to issues with Valgrind
2024.03.09 two “test files” are updated, CRC functions are modified, Valgrind issue is “fixed”
2024.03.09 XZ 5.6.1 is released
2024.03.28 bug is discovered, Debian and RedHat notified
2024.03.28 Debian rolls back XZ 5.6.1 to 5.4.5-0.2 version
2024.03.29 an email is published on the OSS-security mailing list
2024.03.29 RedHat confirms backdoored XZ was shipped in Fedora Rawhide and Fedora Linux 40 beta
2024.03.30 Debian shuts down builds and starts process to rebuild it
2024.04.02 XZ main developer recognizes the backdoor incident

Backdoored source distributions

xz-5.6.0

MD5: c518d573a716b2b2bc2413e6c9b5dbde
SHA1: e7bbec6f99b6b06c46420d4b6e5b6daa86948d3b
SHA256: 0f5c81f14171b74fcc9777d302304d964e63ffc2d7b634ef023a7249d9b5d875

xz-5.6.1

MD5: 5aeddab53ee2cbd694f901a080f84bf1
SHA1: 675fd58f48dba5eceaf8bfc259d0ea1aab7ad0a7
SHA256: 2398f4a8e53345325f44bdd9f0cc7401bd9025d736c6d43b372f4dea77bf75b8

Initial infection analysis

The XZ git repository contains a set of test files that are used when testing the compressor/decompressor code to verify that it’s working properly. The account named Jia Tan ("jiaT75") committed two test files that initially appeared harmless, but served as the bootstrap to implant the backdoor.

The associated files were:

  • bad-3-corrupt_lzma2.xz
  • good-large_compressed.lzma

These files were intended to contain shell scripts and the backdoor binary object itself. However, they were hidden within the malformed data, and the attacker knew how to properly extract them when needed.

Stage 1 – The modified build-to-host script

When an XZ release is ready, the official GitHub repository distributes the project’s source files. Initially, these releases, aside from containing the malicious test files, were harmless because the files never got a chance to execute. However, the attacker appears to have added the malicious code that bootstraps the infection only when the releases were sourced from https://xz[.]tukaani.org, which was under the control of Jia Tan.

This URL is used by most distributions, and, when downloaded, it comes with a file named build-to-host.m4 that contains malicious code.

build-to-host.m4 (c86c8f8a69c07fbec8dd650c6604bf0c9876261f) is executed during the build process and executes a line of code that fixes and decompresses the first file added to the tests folder:

Deobfuscated line of code in build-to-host.m4

This line of code replaces the “broken” data from bad-3-corrupt_lzma2.xz using the tr command, and pipes the output to the xz -d command, which decompresses the data. The decompressed data contains a shell script that will be executed later using /bin/bash, triggered by this .m4 file.
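
The effect of this repair step can be modeled in Python. The character mapping below (tab swapped with space, hyphen with underscore) is the one widely reported for this sample, but treat it as illustrative:

```python
# Model of a tr-style byte substitution used to "repair" deliberately
# corrupted data before it is piped to a decompressor. The mapping
# (tab<->space, '-'<->'_') is illustrative.
def tr(data: bytes, src: bytes, dst: bytes) -> bytes:
    """Equivalent of `tr src dst` for equal-length character sets."""
    return data.translate(bytes.maketrans(src, dst))

corrupted = b"\tthis-was_deliberately \tbroken"
repaired = tr(corrupted, b"\t -_", b" \t_-")
# The mapping swaps pairs, so applying it twice restores the input
assert tr(repaired, b"\t -_", b" \t_-") == corrupted
```

In the real chain the repaired bytes were piped straight into xz -d rather than written to disk.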

Stage 2 – The injected shell script

The malicious script injected by the malicious .m4 file verifies that it’s running on a Linux machine and also that it’s running inside the intended build process.

Injected script contents

To execute the next stage, it uses good-large_compressed.lzma, which is indeed compressed correctly with XZ, but contains junk data inside the decompressed data.

The junk data removal procedure is as follows: the eval function executes the head pipeline, with each head command either ignoring the next 1024 bytes or extracting the next 2048 or 724 bytes.

In total, these commands extracted 33,492 bytes (2048*16 + 724 bytes). The tail command then retains the final 31,265 bytes of the file and ignores the rest.
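
The skip/take arithmetic can be replicated in Python. The byte counts below come from the script itself; the synthetic stream layout (a 1,024-byte junk block before every payload chunk) is an assumption for illustration:

```python
import io

def carve(blob: bytes) -> bytes:
    """Mimic the chained head calls: alternately discard 1024 junk bytes
    and keep a payload chunk (16 chunks of 2048 bytes plus a final
    724-byte chunk), then keep only the final 31,265 bytes, like tail."""
    s = io.BytesIO(blob)
    out = bytearray()
    for _ in range(16):
        s.read(1024)              # junk, ignored
        out += s.read(2048)       # payload fragment
    s.read(1024)                  # last junk block (size assumed)
    out += s.read(724)            # final fragment: 2048 * 16 + 724 = 33,492
    return bytes(out[-31265:])    # tail keeps the final 31,265 bytes

# Synthetic stream matching the assumed layout
blob = (b"J" * 1024 + b"P" * 2048) * 16 + b"J" * 1024 + b"P" * 724
assert len(carve(blob)) == 31265
```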

Then, the tr command applies a basic substitution to the output to deobfuscate it. The second XZ command decompresses the transformed bytes as a raw lzma stream, after which the result is piped into a shell.
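
Python's standard lzma module can decompress such headerless streams as well. The round-trip sketch below uses an LZMA2 raw filter chain, which is what the stdlib supports for FORMAT_RAW; the actual payload was an LZMA1 stream:

```python
import lzma

# A raw (headerless) LZMA stream carries no container metadata, so the
# decompressor must be told the filter chain out of band.
filters = [{"id": lzma.FILTER_LZMA2, "preset": 6}]

payload = b"echo this stands in for the stage-3 script\n" * 10
raw = lzma.compress(payload, format=lzma.FORMAT_RAW, filters=filters)
assert lzma.decompress(raw, format=lzma.FORMAT_RAW, filters=filters) == payload
```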

Stage 3 – Backdoor extraction

The last stage shell script performs many checks to ensure that it is running in the expected environment, such as whether the project is configured to use IFUNC (which will be discussed in the next sections).

Many of the other checks performed by this stage include determining whether GCC is used for compilation or if the project contains specific files that will be used by the script later on.

In this stage, it extracts the backdoor binary code itself, an object file that is currently hidden in the same good-large_compressed.lzma file, but at a different offset.

The following code handles this:

Partial command used by the last script stage

The extraction process operates through a sequence of commands, with the result of each command serving as the input for the next one. The formatted one-liner code is shown below:

Formatted backdoor extraction one-liner

Initially, the file good-large_compressed.lzma is extracted using the XZ tool itself. The subsequent steps involve calling a chain of head commands via "eval $i" (the same technique as the stage 2 extraction).

Then a custom RC4-like algorithm is used to decrypt the binary data, which contains another compressed file. This compressed file is also extracted using the XZ utility. The script then removes some bytes from the beginning of the decompressed data using predefined values and saves the result to disk as liblzma_la-crc64-fast.o, which is the backdoor file used in the linking process.
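
The cipher is described only as RC4-like, i.e. a modified variant. For reference, unmodified textbook RC4 looks like this; it is not the attackers' exact algorithm:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key scheduling, then keystream XOR. The backdoor
    used a modified variant; this is the standard algorithm for
    reference only."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"key", b"compressed backdoor bytes")
assert rc4(b"key", ct) == b"compressed backdoor bytes"  # symmetric
```

Because RC4 is a pure XOR stream cipher, the same function both encrypts and decrypts.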

Finally, the script modifies the function is_arch_extension_supported from the crc_x86_clmul.h file in liblzma, to replace the call to the __get_cpuid function with _get_cpuid, removing one underscore character.

This modification allows it to be linked into the library (we’ll discuss this in more detail in the next section). The whole build infection chain can be summarized in the following scheme:

Binary backdoor analysis

A stealth loading scenario

In the original XZ code, there are two special functions used to calculate the CRC of the given data: lzma_crc32 and lzma_crc64. Both of these functions are stored in the ELF symbol table with type IFUNC, a feature provided by the GNU C Library (GLIBC). IFUNC allows developers to dynamically select the correct function to use. This selection takes place when the dynamic linker loads the shared library.

The reason XZ uses this is that it allows for determining whether an optimized version of the lzma_crcX function should be used or not. The optimized version requires special features from modern processors (CLMUL, SSSE3, SSE4.1). These special features need to be verified by issuing the cpuid instruction, which is called using the __get_cpuid wrapper/intrinsic provided by GLIBC, and it’s at this point the backdoor takes advantage to load itself.

The backdoor is stored as an object file, and its primary goal is to be linked to the main executable during compilation. The object file contains the _get_cpuid symbol, as the injected shell scripts remove one underscore symbol from the original source code, which means that when the code calls _get_cpuid, it actually calls the backdoor’s version of it.

Backdoor code entry point

Backdoor code analysis

The initial backdoor code is invoked twice, as both lzma_crc32 and lzma_crc64 use the same modified function (_get_cpuid). To keep control over this, a simple counter is created to verify whether the code has already been executed. The actual malicious activity starts when the lzma_crc64 IFUNC invokes _get_cpuid, sees the counter value of 1 indicating that the function has already been accessed, and initiates one final step to redirect to the true entry point of this malware.

Backdoor initialization

To initialize the malicious code, the backdoor first initializes a couple of structures that hold core information about the current running process. Primarily, it locates the Global Offset Table (GOT) address using hardcoded offsets, and uses this information to find the cpuid pointer inside it.

GOT modification code

The GOT contains the offsets of symbols, including the cpuid wrapper. The backdoor then swaps the pointers to the main malware function, and calls it as if it were calling cpuid.

Core behavior

The main goal of the backdoor is to successfully hook specific functions that will allow it to monitor every connection to the infected machine. The targeted functions include:

  • RSA_public_decrypt: used by libcrypto to decrypt a ciphertext signed by a private key
  • EVP_PKEY_set1_RSA: used by libcrypto to set the RSA key of a given context
  • RSA_get0_key: used by libcrypto to retrieve an RSA key

However, hooking all these functions is not a simple and straightforward task for the backdoor. It involves accessing many internal structures and manipulating process-specific structures until it achieves its goal, and it also needs to be as stealthy as possible, which means that the malicious code contains multiple checks to avoid being analyzed.

Execution environment check

It first parses ld-linux (dynamic linker), which is primarily responsible for dynamically resolving symbol addresses in memory. It also extracts various information about the environment and verifies that the current process the backdoor is running in matches its execution criteria, which are:

  • The current process is /usr/bin/sshd
  • The kill switch is not present in the process environment

It extracts the name of the current process from argv[0] and also parses all the process environment variables to see if any of them are present in its internal string table. If a match is found, or the process is not sshd, the malware halts its execution.

Process environment check

The only variable inside its own table is yolAbejyiejuvnup=Evjtgvsh5okmkAvj, which in this context serves as a kill switch.
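
A simplified model of this gate is sketched below; the kill-switch string comes from the analysis above, while the function shape and the exact matching rule are assumptions:

```python
# Simplified model of the backdoor's execution gate. The kill-switch
# string is the one reported above; splitting it into name=value and
# comparing against the environment is an assumed simplification.
KILL_SWITCH = "yolAbejyiejuvnup=Evjtgvsh5okmkAvj"

def should_run(argv0: str, environ: dict) -> bool:
    """Run only inside sshd, and only if the kill switch is absent."""
    if argv0 != "/usr/bin/sshd":
        return False
    name, _, value = KILL_SWITCH.partition("=")
    return environ.get(name) != value

assert should_run("/usr/bin/sshd", {}) is True
assert should_run("/usr/bin/bash", {}) is False
assert should_run("/usr/bin/sshd",
                  {"yolAbejyiejuvnup": "Evjtgvsh5okmkAvj"}) is False
```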

The trie structure

One of the distinctive features of the backdoor is the use of a single trie structure for string operations. Instead of directly comparing strings or using string hashes to match a particular constant (for example, the name of a library function), the code performs a trie lookup and checks if the result is equal to a certain constant number. For example, the magic value for the ELF header results in the trie returning 0x300, and the name of the system function is matched with a return value of 0x9F8. The trie is not used just for comparisons: certain functions that use pointers to strings (for example, ssh-2.0) search for these strings in the host binary using the trie, so there is no suspicious data in the backdoor’s body.

The implementation of the trie uses 16-byte bitmasks, each half corresponding to the byte input ranges 0x00-0x3F and 0x40-0x7F, and 2-byte trie leaf nodes, 3 bits of which are flags (direction, termination) and the rest is reserved for the value (or the location of the next node).
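
The real implementation packs nodes into those bitmasks and 2-byte leaves; the dict-based sketch below models only the idea (look up a string, compare the returned constant, keep no plaintext string table). The two mappings shown, ELF magic to 0x300 and "system" to 0x9F8, come from the analysis above:

```python
# Simplified model of the backdoor's string trie: instead of comparing
# strings, code walks the trie and compares the returned constant.
# The 16-byte-bitmask node packing is omitted here.
def trie_insert(root: dict, key: bytes, value: int) -> None:
    node = root
    for b in key:
        node = node.setdefault(b, {})
    node["leaf"] = value

def trie_lookup(root: dict, data: bytes) -> int:
    node = root
    for b in data:
        if "leaf" in node:        # reached a terminal node
            break
        if b not in node:
            return 0              # no match
        node = node[b]
    return node.get("leaf", 0)

root: dict = {}
trie_insert(root, b"\x7fELF", 0x300)   # ELF magic  -> 0x300
trie_insert(root, b"system", 0x9F8)    # "system"   -> 0x9F8

assert trie_lookup(root, b"\x7fELF\x02\x01") == 0x300
assert trie_lookup(root, b"system") == 0x9F8
```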

Part of the trie lookup function that performs the bitmap match

Symbol resolver

There are at least three symbol resolver-related routines used by the backdoor to locate the ELF Symbol structure, which holds information such as the symbol name and its offset. All symbol resolver functions receive a key to be searched in the trie.

Symbol resolver example

One of the backdoor resolver functions iterates through all symbols and verifies which one has the desired key. If it is found, it returns the Elf64_Sym structure, which will later be used to populate an internal structure of the backdoor that holds all the necessary function pointers. This process is similar to that commonly seen in Windows threats with API hashing routines.

The backdoor searches many functions from the libcrypto (OpenSSL) library, as these will be used in later encryption routines. It also keeps track of how many functions it was able to find and resolve; this determines whether it is executing properly or should stop.

Another interesting symbol resolver abuses the lzma_alloc function, which is part of the liblzma library itself. This function serves as a helper for developers to allocate memory efficiently using the default allocator (malloc) or a custom one. In the case of the XZ backdoor, this function is abused to make use of a fake allocator. In reality, it functions as another symbol resolver. The parameter intended for “allocation size” is, in fact, the symbol key inside the trie. This trick is meant to complicate backdoor analysis.

Symbol resolver using a fake allocator structure

The backdoor dynamically resolves its symbols while executing; it doesn’t necessarily do so all at once or only when it needs to use them. The resolved symbols/functions range from legitimate OpenSSL functions to functions such as system, which is used to execute commands on the machine.

The Symbind hook

As mentioned earlier, the primary objective of the backdoor initialization is to successfully hook functions. To do so, the backdoor makes use of rtld-audit, a feature of the dynamic linker that enables the creation of custom shared libraries to be notified when certain events occur within the linker, such as symbol resolution. In a typical scenario, a developer would create a shared library following the rtld-audit manual. However, the XZ backdoor opts to perform a runtime patch on the already registered (default) interfaces loaded in memory, thereby hijacking the symbol-resolving routine.

dl-audit runtime patch

The maliciously crafted audit_iface structure, stored in the dl_audit global variable within the dynamic linker’s memory area, contains the symbind64 callback address, which is invoked by the dynamic linker. The callback hands all symbol information to the backdoor, which substitutes a malicious address for the target functions, thus achieving hooking.

Hooking placement inside the Symbind modified callback

The addresses for dl_audit and dl_naudit, which holds the number of audit interfaces available, are obtained by disassembling both the dl_main and dl_audit_symbind_alt functions. The backdoor contains an internal minimalistic disassembler used for instruction decoding. It makes extensive use of it, especially when hunting for specific values like the *audit addresses.

dl_naudit hunting code

The dl_naudit address is found via one of the mov instructions within the dl_main function code that accesses it: the backdoor hunts for an instruction accessing that memory address and saves the address.

It also verifies if the memory address acquired is the same address as the one accessed by the dl_audit_symbind_alt function on a given offset. This allows it to safely assume that it has indeed found the correct address. After it finds the dl_naudit address, it can easily calculate where dl_audit is, since the two are stored next to each other in memory.

Conclusion

In this article, we covered the entire process of backdooring liblzma (XZ), and delved into a detailed analysis of the binary backdoor code, up to achieving its principal goal: hooking.

It’s evident that this backdoor is highly complex and employs sophisticated methods to evade detection. These include the multi-stage implantation in the XZ repository, as well as the complex code contained within the binary itself.

There is still much more to explore about the backdoor’s internals, which is why we have decided to present this as Part I of the XZ backdoor series.

Kaspersky products detect malicious objects related to the attack as HEUR:Trojan.Script.XZ and Trojan.Shell.XZ. In addition, Kaspersky Endpoint Security for Linux detects malicious code in SSHD process memory as MEM:Trojan.Linux.XZ (as part of the Critical Areas Scan task).

Indicators of compromise

Yara rules

rule liblzma_get_cpuid_function {
    meta:
        description = "Rule to find the malicious get_cpuid function CVE-2024-3094"
        author = "Kaspersky Lab"
    strings:
        $a = { F3 0F 1E FA 55 48 89 F5 4C 89 CE 53 89 FB 81 E7 00 00 00 80 48 83 EC 28 48 89 54 24 18 48 89 4C 24 10 4C 89 44 24 08 E8 ?? ?? ?? ?? 85 C0 74 27 39 D8 72 23 4C 8B 44 24 08 48 8B 4C 24 10 45 31 C9 48 89 EE 48 8B 54 24 18 89 DF E8 ?? ?? ?? ?? B8 01 00 00 00 EB 02 31 C0 48 83 C4 28 5B 5D C3 }
    condition:
        $a
}

Known backdoored libraries

Debian Sid liblzma.so.5.6.0
4f0cf1d2a2d44b75079b3ea5ed28fe54
72e8163734d586b6360b24167a3aff2a3c961efb
319feb5a9cddd81955d915b5632b4a5f8f9080281fb46e2f6d69d53f693c23ae

Debian Sid liblzma.so.5.6.1
53d82bb511b71a5d4794cf2d8a2072c1
8a75968834fc11ba774d7bbdc566d272ff45476c
605861f833fc181c7cdcabd5577ddb8989bea332648a8f498b4eef89b8f85ad4

Related files
d302c6cb2fa1c03c710fa5285651530f, liblzma.so.5
4f0cf1d2a2d44b75079b3ea5ed28fe54, liblzma.so.5.6.0
153df9727a2729879a26c1995007ffbc, liblzma.so.5.6.0.patch
53d82bb511b71a5d4794cf2d8a2072c1, liblzma.so.5.6.1
212ffa0b24bb7d749532425a46764433, liblzma_la-crc64-fast.o

Analyzed artefacts
35028f4b5c6673d6f2e1a80f02944fb2, bad-3-corrupt_lzma2.xz
b4dd2661a7c69e85f19216a6dbbb1664, build-to-host.m4
540c665dfcd4e5cfba5b72b4787fec4f, good-large_compressed.lzma
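
To compare a local liblzma against the hashes listed above, a short sketch (the library path in the usage comment is illustrative):

```python
import hashlib

def file_hashes(path: str) -> dict:
    """Compute the MD5/SHA1/SHA256 digests of a file for IOC matching."""
    digests = {"md5": hashlib.md5(), "sha1": hashlib.sha1(),
               "sha256": hashlib.sha256()}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}

# MD5 values from the IOC list above (backdoored liblzma 5.6.0 / 5.6.1)
BACKDOORED_MD5 = {"4f0cf1d2a2d44b75079b3ea5ed28fe54",
                  "53d82bb511b71a5d4794cf2d8a2072c1"}

# Usage (path is illustrative):
#   if file_hashes("/usr/lib/x86_64-linux-gnu/liblzma.so.5")["md5"] in BACKDOORED_MD5:
#       print("backdoored liblzma detected")
```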

Sneaky Credit Card Skimmer Disguised as Harmless Facebook Tracker

The Hacker News - 12 Duben, 2024 - 07:09
Cybersecurity researchers have discovered a credit card skimmer that's concealed within a fake Meta Pixel tracker script in an attempt to evade detection. Sucuri said that the malware is injected into websites through tools that allow for custom code, such as WordPress plugins like Simple Custom CSS and JS or the "Miscellaneous Scripts" section of the Magento admin panel.
Kategorie: Hacking & Security
Syndikovat obsah