RSS Aggregator
The book Kryptologie, šifrování a tajná písma (Cryptology, Ciphers and Secret Scripts) is now on sale!
The earlier edition in the OKO series sold out completely, and the book could not be obtained.
It can now be purchased in the KYBERCENTRUM e-shop. Note, however, that only a limited run of 200 copies has been released for sale through this channel.
Looking for the book Kryptologie, šifrování a tajná písma?
The book was published in 2006 in a print run of 8,000 copies and quickly sold out.
It is now being reissued through crowdfunding as part of a project by Centrum kybernetické bezpečnosti, z. ú. (KyberCentrum).
Support the project and become an owner of this book.
Kryptologie, šifrování a tajná písma
The book can be obtained through the KyberCentrum crowdfunding project.
Update on NIST's Post-Quantum Cryptography Program
"I have deciphered the world's most mysterious text," claims a scientist. The Voynich manuscript is said to be a women's manual for a queen
Police arrested foreigners who had been fitting ATMs with card skimmers
How to create and remember passwords (2019)
An overview of quality free security programs
An overview of free, strong antivirus and other security programs that can help keep your sensitive information safe.
How to encrypt email (Gmail, Outlook iOS, OSX, Android, Webmail)
So you want to start encrypting your email? Well, let’s start by saying that setting up email encryption yourself is not the most convenient process. You don’t need a degree in cryptography or anything, but it will take a dash of tech savvy. We’ll walk you through the process later on in this article.
Alternatively, you can use an off-the-shelf encrypted email client. Tutanota is one such secure email service, with apps for mobile and a web mail client. It even encrypts your attachments and contact lists. Tutanota is open-source, so it can be audited by third parties to ensure it’s safe. All encryption takes place in the background. While we can vouch for Tutanota, it’s worth mentioning that there are a lot of email apps out there that claim to offer end-to-end encryption, but many contain security vulnerabilities and other shortcomings. Do your research before choosing an off-the-shelf secure email app.
If you’d prefer to configure your own email encryption, keep reading.
Google: Security Keys Neutralized Employee Phishing
Security Keys are inexpensive USB-based devices that offer an alternative approach to two-factor authentication (2FA), which requires the user to log in to a Web site using something they know (the password) and something they have (e.g., a mobile device).
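Security keys themselves use a FIDO/U2F public-key challenge-response rather than one-time codes, but the "something you have" factor is easiest to see in the older OTP scheme that authenticator apps implement. As a contrast to hardware keys, here is a minimal sketch of RFC 4226 HOTP (the counter-based primitive underlying TOTP) in plain Python:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Secret from the RFC 4226 Appendix D test vectors
secret = b"12345678901234567890"
print(hotp(secret, 0))  # → 755224
print(hotp(secret, 1))  # → 287082
```

TOTP, as used by authenticator apps, is just `hotp(secret, int(time.time()) // 30)`. The phishing resistance Google credits to security keys comes precisely from what this sketch lacks: a U2F key signs a challenge bound to the site's origin, so the code cannot be relayed to an attacker's page.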
Crypto gripes, election security, and mandatory cybersec school: Uncle Sam's cyber task force emits to-do list for govt
The report [PDF], compiled by 34 people from six different government agencies, examines the challenges facing Uncle Sam's agencies in enforcing the law and protecting the public from hackers. It also lays out what the government needs to do to thwart looming threats to its computer networks.
Let's (not) Encrypt
If you've been following the news for the last few years, it will come as no surprise that the Justice Department is not a fan of the common man having access to encryption.
The report bemoans the current state of encryption and its ability to keep the government from gathering and analyzing traffic for criminal investigations. The word 'encryption' comes up 17 times in the report, not once in a favorable light.
"In the past several years, the Department has seen the proliferation of default encryption where the only person who can access the unencrypted information is the end user," the report reads.
"The advent of such widespread and increasingly sophisticated encryption technologies that prevent lawful access poses a significant impediment to the investigation of most types of criminal activity."
Quantum computing revenue to hit $15 billion in 2028 due to AI, R&D, cybersecurity
Cracking the Crypto War
Zimmermann and friends: 'Are you listening? PGP is not broken'
However, PGP's creator Phil Zimmermann, ProtonMail's Andy Yen, Enigmail's Patrick Brunschwig, and Mailvelope's Thomas Oberndörfer are still concerned that misinformation about the bug remains in the wild.
Yen tried to refute the EFAIL "don't use PGP" advice on May 25, and the four have followed up with this joint post.
Softwarová sklizeň (8 May 2024): QR code scanning and text-to-speech conversion
Watch out for rogue DHCP servers decloaking your VPN connections
A newly discovered vulnerability undermines countless VPN clients: their traffic can be quietly routed away from their encrypted tunnels and intercepted by snoops on the network.…
CISA's early-warning system helped critical orgs close 852 ransomware holes
Interview As ransomware gangs step up their attacks against healthcare, schools, and other US critical infrastructure, CISA is ramping up a program to help these organizations fix flaws exploited by extortionists in the first place.…
TikTok sues America to undo divest-or-die law
TikTok and its China-based parent ByteDance sued the US government today to prevent the forced sale or shutdown of the video-sharing giant.…
AI chip shortages continue, but there may be an end in sight
As the adoption of generative artificial intelligence (genAI) continues to soar, the infrastructure to support that growth is currently running into a supply and demand bottleneck.
Sixty-six percent of enterprises worldwide said they would be investing in genAI over the next 18 months, according to IDC research. Among organizations indicating genAI will see increased IT spending in 2024, infrastructure will account for 46% of the total spend. The problem: a key piece of hardware needed to build out that AI infrastructure is in short supply.
The breakneck pace of AI adoption over the past two years has strained the industry’s ability to supply the special high-performance chips needed to run the process-intensive operations of genAI and AI in general. Most of the focus on processor shortages has been on the exploding demand for Nvidia GPUs and alternatives from various chip designers such as AMD, Intel, and the hyperscale datacenter operators, according to Benjamin Lee, a professor in the Department of Computer and Information Science at the University of Pennsylvania.
“There has been much less attention focused on exploding demand for high-bandwidth memory chips, which are fabricated in Korea-based foundries run by SK Hynix,” Lee said.
Last week, SK Hynix said its high-bandwidth memory (HBM) products, which are needed in combination with high-performance GPUs to handle AI processing requirements, are almost fully booked through 2025 because of high demand. The price of HBMs has also recently increased by 5% to 10%, driven by significant premiums and increased capacity needs for AI chips, according to market research firm TrendForce.
SK Hynix's HBM3 product, with the industry's largest 24 GB memory capacity, delivers high capacity and high performance by stacking 12 DRAM chips.
SK Hynix
HBM chips are expected to account for more than 20% of the total DRAM market value starting in 2024, potentially exceeding 30% by 2025, according to TrendForce Senior Research Vice President Avril Wu. “Not all major suppliers have passed customer qualifications for [high-performance HBM], leading buyers to accept higher prices to secure stable and quality supplies,” Wu said in a research report.
Why GPUs need high-bandwidth memory
Without HBM chips, a data center server's memory system would be unable to keep up with a high-performance processor, such as a GPU, according to Lee. HBMs are what supply GPUs with the data they process. "Anyone who purchases a GPU for AI computation will also need high-bandwidth memory," Lee said.
“In other words, high-performance GPUs would be poorly utilized and often sit idle waiting for data transfers. In summary, high demand for SK Hynix memory chips is caused by high demand for Nvidia GPU chips and, to a lesser extent, associated with demand for alternative AI chips such as those from AMD, Intel, and others,” he said.
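Lee's point about GPUs idling on data transfers is the roofline model in miniature: a kernel's attainable throughput is capped by either peak compute or memory bandwidth, whichever binds first. A back-of-the-envelope sketch, using illustrative figures only (not vendor-quoted specs):

```python
def ridge_point(peak_flops: float, mem_bw: float) -> float:
    """Arithmetic intensity (FLOP/byte) above which a kernel is
    compute-bound rather than memory-bound (roofline model)."""
    return peak_flops / mem_bw

def attainable_flops(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Best-case throughput for a kernel with the given FLOP/byte ratio."""
    return min(peak_flops, mem_bw * intensity)

# Illustrative numbers: ~1 PFLOP/s peak compute, ~3 TB/s of HBM bandwidth.
PEAK, BW = 1.0e15, 3.0e12

# A kernel needs ~333 FLOP per byte moved before compute becomes the limit.
print(ridge_point(PEAK, BW))

# Elementwise fp32 add: 1 FLOP per 12 bytes (two reads, one write).
# Deeply memory-bound: the GPU delivers a tiny fraction of its peak.
print(attainable_flops(PEAK, BW, 1 / 12))
```

With these numbers an elementwise kernel reaches roughly 2.5e11 FLOP/s, about 0.025% of peak, which is why buying faster GPUs without more memory bandwidth yields hardware that "sits idle waiting for data transfers."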
“HBM is relatively new and picking up a strong momentum because of what HBM offers — more bandwidth and capacity,” said Gartner analyst Gaurav Gupta. “It is different than what Nvidia and Intel sell. Other than SK Hynix, the situation for HBM is similar for other memory players. For Nvidia, I believe there are constraints, but more associated with packaging capacity for their chips with foundries.”
While SK Hynix is reaching its supply limits, Samsung and Micron are ramping up HBM production and should be able to support the demand as the market becomes more distributed, according to Lee.
The current HBM shortages are primarily in the packaging from TSMC (i.e., chip-on-wafer-on-substrate or CoWoS), which is the exclusive supplier of the technology. According to Lee, TSMC is more than doubling its SOIC capacity and boosting capacity for CoWoS by more than 60%. “I expect the shortages to ease by the end of this year,” he said.
At the same time, more packaging and foundry suppliers are coming online and qualifying their technology to support NVIDIA, AMD, Broadcom, Amazon, and others using TSMC’s chip packaging technology, according to Lee.
Nvidia, whose production represents about 70% of the global supply of AI server chips, is expected to generate $40 billion in revenue from GPU sales this year, according to Bloomberg analysts. By comparison, competitors Intel and AMD are expected to generate $500 million and $3.5 billion, respectively. But all three are ramping production as quickly as possible.
Nvidia is tackling the GPU supply shortage by increasing its CoWoS and HBM production capacities, according to TrendForce. "This proactive approach is expected to cut the current average delivery time of 40 weeks in half by the second quarter [of 2024], as new capacities start to come online," TrendForce said in its report. "This expansion aims to alleviate the supply chain bottlenecks that have hindered AI server availability due to GPU shortages."
Shane Rau, IDC’s research vice president for computing semiconductors, said that while demand for AI chip capacity is very high, markets are adapting. “In the case of server-class GPUs, they’re increasing supply of wafers, packaging, and memories. The increased supply is key because, due to their performance and programmability, server-class GPUs will remain the platform of choice for training and running large AI models.”
Chipmakers scramble to meet the demand for AI
Global spending on AI-focused chips is expected to hit $53 billion this year and to more than double over the next four years, according to Gartner Research. So it's no surprise that chipmakers are rolling out new processors as quickly as they can.
Intel has announced its plans for chips aimed at powering AI functions with its Gaudi 3 processors, and has said its Xeon 6 processors, which can run retrieval augmented generation (RAG) processes, will also be key. The Gaudi 3 GPU was purpose-built for training and running massive large language models (LLMs) that underpin genAI in data centers.
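The core idea of RAG is simple: fetch the most relevant documents first, then hand them to the model as context alongside the question. Here is a toy sketch of the retrieval step using bag-of-words cosine similarity; production systems use learned vector embeddings and a vector database, and the documents below are made up for illustration:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

# Illustrative corpus; a real deployment indexes proprietary documents.
docs = [
    "HBM stacks DRAM dies to raise memory bandwidth for GPUs.",
    "The Xeon 6 line targets data center workloads.",
]
context = retrieve("what is HBM memory bandwidth", docs)[0]
prompt = f"Context: {context}\nQuestion: what is HBM memory bandwidth?"
```

Because the relevant knowledge arrives in the prompt rather than in the model's weights, the generation step can run on a much smaller model, which is what makes RAG attractive on CPU-class hardware like the Xeon 6.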
Meanwhile, AMD in its most recent earnings call, touted its MI300 GPU for AI data center workloads, which also has good market traction, according to IDC Group Vice President Mario Morales, adding that the research firm is tracking over 80 semiconductor vendors developing specialized chips for AI.
On the software side of the equation, LLM creators are also developing smaller models tailored for specific tasks; they require fewer processing resources and rely on local, proprietary data — unlike the massive, amorphous algorithms that boast hundreds of billions or even more than a trillion parameters.
Intel’s strategy going forward is similar: it wants to enable genAI on every type of computing device, from laptops to smart phones. Intel’s Xeon 6 processors will include some versions with onboard neural processing units (NPUs or “AI accelerators”) for use in workstations, PCs and edge devices. Intel also claims its Xeon 6 processors will be good enough to run smaller, more customized LLMs.
Even so, without HBMs, those processors would likely struggle to keep up with genAI’s high performance demands.
CPUs and Processors, Generative AI, Technology Industry