Security-Portal.cz je internetový portál zaměřený na počítačovou bezpečnost, hacking, anonymitu, počítačové sítě, programování, šifrování, exploity, Linux a BSD systémy. Provozuje spoustu zajímavých služeb a podporuje příznivce v zajímavých projektech.

Kategorie

AI, cybersecurity, and quantum computing shape UK’s new 10-year economic plan

Computerworld.com [Hacking News] - 23 Červen, 2025 - 17:43

Artificial intelligence, quantum computing and cybersecurity are “frontier technologies” the UK government plans to prioritize as part of its blueprint to overhaul the nation’s economy and industries over the next decade.

That’s according to its long-awaited industrial strategy policy paper and a separate plan going into more detail on digital and other technologies.

It would perhaps have been bigger news if the government hadn’t put AI, cybersecurity and quantum computing at the heart of its plans, given that it has already trailed this heavily in a sequence of reports, including January’s AI Opportunities Action Plan.

But the hope for the tech sector expressed in the paper, titled The UK’s Modern Industrial Strategy, is still ambitious, including that by 2035 the UK should aim to be one of the world’s top three R&D superpowers and home to a tech business worth a trillion dollars.

All that has to happen in a mere decade in a country unaccustomed to talking about its future more than one four-year election cycle ahead. It’s also interventionist in tone, an idea at odds with half a century of thinking in Britain which assumed technology should be left to its own devices.

AI and quantum computing will build the companies of the future. However, because this infrastructure will be vulnerable to disruption, it will need cybersecurity innovation to ensure its operation.

“We will enable new sectors to establish themselves e.g., our rapidly growing AI sector. […] Driving investment into our internationally renowned cybersecurity sector and supporting cutting-edge innovation to address the challenges that prevent widespread technology adoption,” the government wrote in the the Digital Technologies Sector Plan.

With a combined value of £1 trillion ($1.4 trillion) the UK’s tech sector was currently the world’s third most valuable behind only the US and China, it calculated.

The Plan’s focus on AI in particular sets out ambitious uptake goals. As soon as 2030, the UK should have several AI growth zones, with 7.5 million people upskilled to use the technology while the country’s AI research capacity will grow twentyfold, the plan projects. By the same date, the CyberASAP accelerator program should be supporting 250 cybersecurity companies and 28 spinouts.

Big interventions

Some optimism is probably justified — the country is home to a good collection of AI expertise for example — but it wouldn’t be Britain if there weren’t doubts.

The first is that while the UK has a reasonable track record at creating AI, cybersecurity, and technology companies, its record of keeping them British is less positive. Two examples are Google getting its hands on AI specialist DeepMind at a bargain-basement price in 2014, and Softbank’s purchase of chip designer Arm two years later. Both are still based in the UK, but with their profits flowing elsewhere.

That’s not always an issue but, without a core of sovereign businesses, it’s debatable whether a country is really in charge of its technology ecosystem in the long run.

A second issue is the size of the government interventions necessary to fuel local technology businesses today from startup to unicorn and beyond. In a sector that thinks in the hundreds of billions, the UK Government’s budget, doled out in tens of millions in a variety of programs, remains more constrained.

There’s also doubt about whether the rest of the UK economy will be able to profit from AI developments.

“According to Cisco’s latest UK AI Readiness Index, only 10% of UK organisations are fully prepared to harness AI’s potential,” said Cico’s UK and Ireland chief executive, Sarah Walker.

Cisco collaborated with the development of the Government’s plan, but Walker pointed out that its success still depended on overcoming deeper workforce challenges:

“AI adoption and implementation is primarily a people challenge. From traditional IT roles to marketing and supply chain management, almost every job will require AI literacy in the very near future,” she said.

In some parts of the UK, this would be easier than in others. “We need to ensure up-skilling is addressed with equality, to avoid exacerbating economic gaps that already exist across demographics and regions.”

Kategorie: Hacking & Security

Canada says Salt Typhoon hacked telecom firm via Cisco flaw

Bleeping Computer - 23 Červen, 2025 - 17:23
The Canadian Centre for Cyber Security and the FBI confirm that the Chinese state-sponsored 'Salt Typhoon' hacking group is also targeting Canadian telecommunication firms, breaching a telecom provider in February. [...]
Kategorie: Hacking & Security

Revil ransomware members released after time served on carding charges

Bleeping Computer - 23 Červen, 2025 - 17:12
Four REvil ransomware members arrested in January 2022 were released by Russia on time served after they pleaded guilty to carding and malware distribution charges. [...]
Kategorie: Hacking & Security

McLaren Health Care says data breach impacts 743,000 patients

Bleeping Computer - 23 Červen, 2025 - 16:28
McLaren Health Care is warning 743,000 patients that the health system suffered a data breach caused by a July 2024 attack by the INC ransomware gang. [...]
Kategorie: Hacking & Security

GitHub’s AI billing shift signals the end of free enterprise tools era

Computerworld.com [Hacking News] - 23 Červen, 2025 - 15:11

GitHub began enforcing monthly limits on its most powerful AI coding models this week, marking the latest example of AI companies transitioning users from free or unlimited services to paid subscription tiers once adoption takes hold.

“Monthly premium request allowances for paid GitHub Copilot users are now in effect,” the company said in its update to the Copilot consumptive billing experience, confirming that billing for additional requests now starts at $0.04 each. The enforcement represents the activation of restrictions first announced by GitHub CEO Thomas Dohmke in April.

Kategorie: Hacking & Security

Steel giant Nucor confirms hackers stole data in recent breach

Bleeping Computer - 23 Červen, 2025 - 14:28
Nucor, North America's largest steel producer and recycler, has confirmed that attackers behind a recent cybersecurity incident have also stolen data from the company's network. [...]
Kategorie: Hacking & Security

TikTok-style bite-sized videos are invading enterprises

Computerworld.com [Hacking News] - 23 Červen, 2025 - 13:21

The TikTok-ification of the corporate world is well under way, as more companies turn to video snippets to communicate information to employees and customers. But when it comes to user- and AI-generated content, the rules are different for companies than for casual TikTok or Instagram users — and enterprises need to move cautiously when implementing video-generation tools, analysts said.

“There is a definite rise in the use of short form, digestible video in the corporate workplace,” said Forest Conner, senior director and analyst at Gartner. That’s because video is a more engaging way share corporate information with employees and a better way to manage time.

“As the storytelling axiom goes, ‘Show, don’t tell,’” Connor said. “Video provides a medium for showing in practice what may be hard to relay in writing.”

Many employees would rather view short videos that summarize a document or meeting, analysts said. As a result, employees themselves are becoming digital creators using new AI-driven video generation and editing tools.

Software from companies like Atlassian, Google, and Synthesia can dynamically create videos for use in presentations, to bolster communications with employees, or to train workers. The tools can create avatars, generate quick scripts, and draw insights using internal AI systems and can sometimes be better than email for sharing quick insights on collaborative projects. (Atlassian just last week introduced new creation tools in its own Loom software that include AI-powered script editing to make videos look better; the new feature doesn’t require re-recording a video.)

In part, the rising use of these video-creation tools is “a reaction to over-meeting,” said Will McKeon-White, senior analyst for infrastructure and operations at Forrester Research. Many employees feel meetings are a waste of time and hinder productivity. As an alternative, they can record short contextual snippets in Loom for use in workflow documents or to send to colleagues — allowing them to get up to speed on projects at their own pace.

“I’ve seen this more in developer environments where teams are building complex applications in a distributed environment without spending huge amounts of time in meetings,” McKeon-White said.

HR departments are finding Loom useful for dynamically creating personalized videos while onboarding new employees, said Sanchan Saxena, head of product for Teamwork Foundations at Atlassian. The quickly generated personalized videos — which Saxena called “Looms” — can include a welcome message with the employee’s name and position and can complement written materials such as employee handbooks and codes of conduct.

“We can all agree there is a faster, richer form of communication when the written document is also accompanied by a visual video that attaches to it,” Saxena said.

AI video generation company Synthesia made its name with a tool where users select an avatar, type in a script, add text or images and can produce a video in a few minutes. Over time, the company has expanded its offerings and is seeing more business uptake, said Alexandru Voica, head of corporate affairs and policy at Synthesia.

Its offerings now include an AI video assistant to convert documents into video summaries and an AI dubbing tool that localizes videos in more than 30 languages. “These products come together to form an AI video platform that covers the entire lifecycle of video production and distribution,” said Voica.

Voica noted how one Synthesia customer, Wise, has seen efficiency gains using the software for compliance and people training, creating “engaging learning experiences across their global workforce.”

Looking ahead, video content as a tool for corporate communications will likely be adopted at a team level, said McKeon-White. “It’s going to come down to the team or the department as for what they want to do in a given scenario,” he said.

Enterprises need to keep many things in mind when including videos in the corporate workflow. Managers, for instance, shouldn’t force videos on employees or create a blanket policy to adopt more video.

They can be useful, but videos are not for everyone, said Jeff Kagan, a technology analyst. “One big mistake companies make is following the preferences of the workers or executives…rather than considering different opinions. Not everyone is cutting edge,” he said.

Companies shouldn’t jump on the video bandwagon too soon, McKeon-White said. If they do, they run the risk of overwhelming employees.

“You don’t want to suddenly have work scrolling through 30 hours of video,” he said. “If you are throwing videos onto a shared repository and saying, ‘Hey, go look at that!’ That sucks. That’s not good for anybody.”

There are also many security and compliance issues to keep in mind.

AI can now detect sensitive information, such as license plate numbers, addresses, or confidential documentation, without manually reviewing the video, Conner said. “Organizations need to ensure that any content making it out the door is scrubbed for sensitive information in advance of publication.”

And with the rise of generative AI, the problem of deepfakes remains a major concern.

The uncanny accuracy of AI video avatars creates risks for executives, where their likeness could be cloned from their available video content and then used in damaging ways, Conner said.

“This has yet to happen in practice, but my guess is it’s only a matter of time,” Conner said.

Kategorie: Hacking & Security

Despite its ubiquity, RAG-enhanced AI still poses accuracy and safety risks

Computerworld.com [Hacking News] - 23 Červen, 2025 - 12:00

Retrieval-Augmented Generation (RAG) — a method used by genAI tools like Open AI’s ChatGP) to provide more accurate and informed answers — is becoming a cornerstone for generative AI (genAI) tools, “providing implementation flexibility, enhanced explainability and composability with LLMs,” according to a recent study by Gartner Research.

And by 2028, 80% of genAI business apps will be developed on existing data management platforms, with RAG a key part of future deployments.

There’s only one problem: RAG isn’t always effective. In fact, RAG, which assists genAI technologies by looking up information instead of relying only on memory, could actually be making genAI models less safe and reliable, according to recent research.

Alan Nichol, CTO at conversational AI vendor Rasa, called RAG “just a buzzword” that just means “adding a loop around large language models” and data retrieval. The hype is overblown, he said, adding that the use of “while” or “if” statements by RAG is treated like a breakthrough.

(RAG systems typically include logic that might resemble “if” or “while” conditions, such as “if” a query requires external knowledge, retrieve documents from a knowledge base, and “while” an answer might be inaccurate re-query the database or refine the result.) 

“…Top web [RAG] agents still only succeed 25% of the time, which is unacceptable in real software,” Nichol said in an earlier interview with Computerworld. “Instead, developers should focus on writing clear business logic and use LLMs to structure user input and polish search results. It’s not going to solve your problem, but it is going to feel like it is.”

Two studies, one by Bloomberg and another by The Association for Computational Linguistics (ACL) found that using RAG with large language models (LLMs) can reduce their safety, even when both the LLMs and the documents it accesses are sound. The study highlighted the need for safety research and red-teaming designed for RAG settings.

Both studies found that “unsafe” outputs such as misinformation or privacy risks increased under RAG, prompting a closer look at whether retrieved documents were to blame. The key takeaway: RAG needs strong guardrails and researchers who are actively trying to find flaws, vulnerabilities, or weaknesses in a system — often by thinking like an adversary.

How RAG works — and causes security risks

One way to think about RAG and how it works is to compare a typical genAI model to a student answering questions just from memory. The student might sometimes answer the questions from memory — but the information could also be outdated or incomplete.

A RAG system is like a student who says, “Wait, let me check my textbook or notes first,” then gives you an answer based on what they found, plus their own understanding.

Iris Zarecki, CEO of data integration services provider K2view, said most organizations now using RAG augment their genAI models with internal unstructured data such as manuals, knowledge bases, and websites. But enterprises also need to include fragmented structured data, such as customer information, to fully unlock RAG’s potential.

“For example, when customer data like customer statements, payments, and past email and call interactions with the company are retrieved by the RAG framework and fed to the LLM, it can generate a much more personalized and accurate response,” Zarecki said.

Because RAG can increase security risks involving unverified info and prompt injection, Zarecki said, enterprises should vet sources, sanitize documents, enforce retrieval limits, and validate outputs.

RAG can also create a gateway through firewalls, allowing for data leakage, according to Ram Palaniappan, CTO at TEKsystems Global Services, a tech consulting firm. “This opens a huge number of challenges in enabling secure access and ensuring the data doesn’t end up in the public domain,” Palaniappan said. “RAG poses data leakage challenges, model manipulation and poisoning challenges, securing vector DB, etc. Hence, security and data governance become very critical with RAG architecture.”

(Vector databases are commonly used in applications involving RAG, semantic search, AI agents, and recommendation systems.)

Palaniappan expects the RAG space to rapidly evolve, with improvements in security and governance through tools like the Model Context Protocol and Agent-to-Agent Protocol (A2A). “As with any emerging tech, we’ll see ongoing changes in usage, regulation, and standards,” he said. “Key areas advancing include real-time AI monitoring, threat detection, and evolving approaches to ethics and bias.”

Large Reasoning Models are also highly flawed

Apple recently published a research paper evaluating Large Reasoning Models (LRMs) such as Gemini flash thinking, Claude 3.7 Sonnet thinking and OpenAI’s o3-mini using logical puzzles of varying difficulty. Like RAG, LRMs are designed to provide better responses by incorporating a level of step-by-step reasoning in its task.

Apple’s “Illusion of Thinking” study found that as the complexity of tasks increased, both standard LLMs and LRMs saw a significant decline in accuracy — eventually reaching near-zero performance. Notably, LRMs often reduced their reasoning efforts as tasks got more difficult, indicating a tendency to “quit” rather than persist through challenges.

Even when given explicit algorithms, LRMs didn’t improve, indicating they rely on pattern recognition rather than true understanding, challenging assumptions about AI’s path to “true intelligence.”

While LRMs perform well on benchmarks, their actual reasoning abilities and limitations are not well understood. Study results show LRMs break down on complex tasks, sometimes performing worse than standard models. Their reasoning effort increases with complexity only up to a point, then unexpectedly drops.

LRMs also struggle with consistent logical reasoning and exact computation, raising questions about their true reasoning capabilities, the study found. “The fundamental benefits and limitations of LRMs remain insufficiently understood,” Apple said. “Critical questions still persist: Are these models capable of generalizable reasoning or are they leveraging different forms of pattern matching.”

Reverse RAG can improve accuracy

A newer approach, Reverse RAG (RRAG), aims to improve accuracy by adding verification and better document handling, Gartner Senior Director Analyst Prasad Pore said. Unlike typical RAG, which uses a workflow that retrieves data and then generates a response, Reverse RAG flips it to generate an answer, retrieve data to verify that answer and then regenerate that answer to be passed along to the user.

First, the model drafts potential facts or queries, then fetches supporting documents and rigorously checks each claim against those sources. Reverse RAG emphasizes fact-level verification and traceability, making outputs more reliable and auditable.

RRAG represents a significant evolution in how LLMs access, verify and generate information, Pore said. “Although traditional RAG has transformed AI reliability by connecting models to external knowledge sources and making completions contextual, RRAG offers novel approaches of verification and document handling that address challenges in genAI applications related to fact checking and truthfulness of completions.”

The bottom line is that RAG and LRM alone aren’t silver bullets, according to Zarecki. Additional methods to improve genAI output quality must include:

  • Structured grounding: Fragmented structured data, such as customer info, in RAG.
  • Fine-tuned guardrails: Zero-shot or few-shot prompts with constraints, using control tokens or instruction tuning.
  • Human-in-the-loop oversight: Especially important for high-risk domains such as healthcare, finance, or legal.
  • Multi-stage reasoning: Breaking tasks into retrieval → reasoning → generation improves factuality and reduces errors, especially when combined with tool use or function calling.

Organizations must also organize enterprise data for GenAI and RAG by ensuring privacy, real-time access, quality, scalability, and instant availability to meet chatbot latency needs.

“This means that data must address requirements like data guardrails for privacy and security, real-time integration and retrieval, data quality, and scalability at controlled costs,” Zarecki said. “Another critical requirement is the freshness of the data, and the ability of the data to be available to the LLM in split seconds, because of the conversational latency required for a chatbot.”

Kategorie: Hacking & Security

How to spot AI washing in vendor marketing

Computerworld.com [Hacking News] - 23 Červen, 2025 - 08:47
This agent is a robot

Agentic AI and AI agents are hotter than lava-fried chicken right now, and this week CIO defined how the two differ from each other. We reported that the two related technologies can work together, but CIOs should understand the difference to protect against vendor hype and obfuscation.  

And it is vendor hype that is exercising the readers of CIO, who wanted to know from Smart Answers how to spot vendor AI washing. Smart Answers may be an AI-infused chatbot, but it’s fueled by human intelligence, allowing it to know its own limitations. 

It defines AI washing as misrepresentation of basic automation or traditional algorithms as fully autonomous AI agents. Such false agents don’t possess true independent decision-making capabilities and cannot reason through multiple steps and act independently. 

Find out: What is agent washing in AI marketing? 

Windows 10: not dead yet

The imminent demise of support for Windows 10 is causing much consternation in enterprise IT. But is Microsoft really axing Windows 10? This week Computerworld reported the definitive need to know on the subject. This prompted readers to ask many questions of Smart Answers, all related to the end of Windows 10. Most often queried was the future of Microsoft 365 apps on Windows 10 after support ends.  

It’s good news and bad news. While the apps will continue to function and receive security updates until Oct. 10, 2028, users may encounter performance issues and limited support. Microsoft encourages users to upgrade to Windows 11 to avoid these potential problems. (Well, it would.) 

Find out: What happens using Microsoft 365 apps on Windows 10 after 2025?  

You say IT, we say OT

The convergence of IT and operational technology (OT) can improve security, optimize processes, and reduce costs. This week CIO reported on how some how large companies do it

Not surprisingly this prompted readers to ask Smart Answers how IT/OT collaboration can drive digital transformation. Within the answer lies one very salient point: some leaders believe that in certain sectors, rapid IT/OT convergence is critical to achieve transformation.  

Find out: How is IT/OT convergence enabling digital transformation in different industries?  

About Smart Answers 

Smart Answers is an AI-based chatbot tool designed to help you discover content, answer questions, and go deep on the topics that matter to you. Each week we send you the three most popular questions asked by our readers, and the answers Smart Answers provides. 

Kategorie: Hacking & Security

CoinMarketCap briefly hacked to drain crypto wallets via fake Web3 popup

Bleeping Computer - 22 Červen, 2025 - 23:47
CoinMarketCap, the popular cryptocurrency price tracking site, suffered a website supply chain attack that exposed site visitors to a wallet drainer campaign to steal visitors' crypto. [...]
Kategorie: Hacking & Security

Oxford City Council suffers breach exposing two decades of data

Bleeping Computer - 22 Červen, 2025 - 17:17
Oxford City Council warns it suffered a data breach where attackers accessed personally identifiable information from legacy systems. [...]
Kategorie: Hacking & Security

Rakouská vláda chce koupit spyware a prolomit šifrovanou komunikaci. Má sledovat 30 lidí ročně

Zive.cz - bezpečnost - 22 Červen, 2025 - 16:45
**Rakouská vláda se 18. června dohodla na možnosti sledování podezřelých osob. **Chce koupit spyware, který prolomí koncové šifrování. **Vládní návrh nyní poputuje do parlamentu.
Kategorie: Hacking & Security

Windows Snipping Tool now lets you create animated GIF recordings

Bleeping Computer - 22 Červen, 2025 - 16:11
​Microsoft announced that the Windows screenshot and screencast Snipping Tool utility is getting support for exporting animated GIF recordings. [...]
Kategorie: Hacking & Security

Secure RHEL Clones Chart Diverging Paths

LinuxSecurity.com - 22 Červen, 2025 - 14:27
When you're architecting a secure Linux environment, understanding where your operating system stands''both in terms of hardware compatibility and security features''isn't optional. It's critical. With RHEL 10 redefining what enterprise Linux should look like and Rocky Linux 10 and AlmaLinux 10 adapting to meet the demands of downstream users, the landscape has shifted.
Kategorie: Hacking & Security

Russian hackers bypass Gmail MFA using stolen app passwords

Bleeping Computer - 21 Červen, 2025 - 17:13
Russian hackers bypass multi-factor authentication and access Gmail accounts by leveraging app-specific passwords in advanced social engineering attacks that impersonate U.S. Department of State officials. [...]
Kategorie: Hacking & Security

WordPress Motors theme flaw mass-exploited to hijack admin accounts

Bleeping Computer - 21 Červen, 2025 - 16:09
Hackers are exploiting a critical privilege escalation vulnerability in the WordPress theme "Motors" to hijack administrator accounts and gain complete control of a targeted site. [...]
Kategorie: Hacking & Security

Jak zjistit, jestli vás hackli? Have I Been Pwned má nový web a v databázi již 15 miliard účtů

Zive.cz - bezpečnost - 21 Červen, 2025 - 12:35
** Služba Have I Been Pwned prošla redesignem a získala nové funkce. ** Každý únik má informační stránku s detaily a radami, co dělat. ** Přibyl také Dashboard, ve kterém můžete hlídat svou e-mailovou adresu.
Kategorie: Hacking & Security

GenAI — friend or foe?

Computerworld.com [Hacking News] - 21 Červen, 2025 - 12:00

Generative AI (genAI) could help people live longer and healthier lives, transform education, solve climate change, help protect endangered animals, speed up disaster response, and make work more creative, all while making daily life safer and more humane for billions worldwide. 

Or the technology could lead to massive job losses, boost cybercrime, empower rogue states, arm terrorists, enable scams, spread deepfakes and election manipulation, end democracy, and possibly lead to human extinction. 

Well, humanity? What’s it going to be?

California’s dreamin’

Last year, the California State Legislature passed a bill that would have required companies based in the state to perform expensive safety tests for large genAI models and also build in “kill switches” that could stop the technology from going rogue. 

If this kind of thing doesn’t sound like a job for state government, consider that California’s genAI companies include OpenAI, Google, Meta, Apple, Nvidia, Salesforce, Oracle, Anthropic, Anduril, Tesla, and Intel. 

The biggest genAI company outside California is Amazon; it’s based in Washington state, but has its AI division in California.

Anyway, California Gov. Gavin Newsom vetoed the bill. Instead, he asked AI experts, including Fei-Fei Li of Stanford, to recommend a policy less onerous to the industry. The resulting Joint California Policy Working Group on AI Frontier Models released a 52-page report this past week. 

The report focused on transparency, rather than testing mandates, as the solution to preventing genAI harms. The recommendation also included third-party risk assessments, whistleblower protections, and flexible rules based on real-world risk, much of which was also in the original bill. 

It’s unclear whether the legislature will incorporate the recommendations into a new bill. In general, the legislators have reacted favorably to the report, but AI companies have expressed concern about the transparency part, fearing they’ll have to reveal their secrets to competitors. 

Two kinds of risk

There are three fundamental ways that emerging AI systems could create problems and even catastrophes to people: 

1. Misalignment. Some experts fear that misaligned AI, acting creatively and automatically, will operate in its own self-interest and against the interests of people. Research and media reports show that advanced AI systems can lie, cheat, and engage in deceptive behavior. GenAI models have been caught faking compliance, hiding their true intentions, and even strategically misleading their human overseers when it serves their goals; that was seen in experiments with models like Anthropic’s Claude and Meta’s CICERO, which lied and betrayed allies in the game Diplomacy despite being trained for honesty.

2. Misuse. Malicious people, organizations, and governments could use genAI tools to launch highly effective cyberattacks, create convincing deepfakes, manipulate public opinion, automate large-scale surveillance, and control autonomous weapons or vehicles for destructive purposes. These capabilities could enable mass disruption, undermine trust, destabilize societies, and threaten lives on an unprecedented scale.

3. The collective acting on bad incentives. AI risk isn’t a simple story of rogue algorithms or evil hackers. Harms could result from collective self-interest combined with incompetence or regulatory failure. For example, when genAI-driven machines replace human workers, it’s not just the tech companies chasing efficiency. It’s also the policymakers who didn’t adopt labor laws, the business leaders who made the call, and consumers demanding ever-cheaper products. 

What’s interesting about this list of ways AI could cause harm is that all are nearly certain to happen. We know that because it’s already happening at scale, and the only certain change coming in the future is the rapidly growing power of AI. 

So, how shall we proceed? 

We can all agree that genAI is a powerful tool that is becoming more capable all the time. We want to maximize its benefit to people and minimize its threat. 

So, here’s what I believe is the question of the decade: What do we do to promote this outcome? By “we,” I mean the technology professionals, buyers, leaders, and thought leaders reading this column. 

What should we be doing, advocating, supporting, or opposing? 

I asked Andrew Rogoyski, director of Innovation and Partnerships at the UK’s Surrey Institute for People-Centred Artificial Intelligence, that question. Rogoyski works full-time to maximize AI’s benefits and minimize its harms. 

One concern with genAI systems, according to Rogoyski, is that we’re entering a realm where nobody knows how they work — even when they benefit people. As AI gets more capable, “new products appear, new materials, new medicines, we cure cancer. But actually, we won’t have any idea how it’s done,” he said. 

“One of the challenges is these decisions are being made by a few companies and a few individuals within those companies,” he said. Decisions made by a few people “will have enormous impact on…global society as a whole. And that doesn’t feel right.” He pointed out that companies like Amazon, OpenAI, and Google have far more money to devote to AI than entire governments. 

Rogoyski pointed out the conundrum exposed by solutions like the one California is trying to arrive at. At the core of the California Policy Working Group’s proposal is transparency, treating AI functionality as a kind of open-source project. On the one hand, outside experts can help flag dangers. On the other, transparency opens the technology to malicious actors. He gave the example of AI designed for biotech, something designed to engineer life-saving drugs. In the wrong hands, that same tool might be used to engineer a catastrophic bio-weapon.

According to Rogoyski, the solution won’t be found solely in some grand legislation or the spontaneous emergence of ethics in the hearts of Silicon Valley billionaires. The solution will involve broad-scale collective action by just about everyone.

It’s up to us

At the grass-roots level, we need to advocate the practice of basing our purchasing, use, and investment in AI systems that are serious about and capable with ethical practices, strong safety policies, and deep concern about alignment. 

We all need to favor companies that “do the right thing in the sense of sharing information about how they trained [their AI], what measures they put in place to stop it misbehaving and so on,” said Rogoyski.

Beyond that, we need stronger regulation based more on expert input and less on Silicon Valley businesses’ trillion-dollar aspirations. We need broad cooperation between companies and universities. 

We also need to support, in any way we can, the application of AI to our most pressing problems, including medicine, energy, climate change, income inequality, and others.

Rogoyski offers general advice for anyone worried about losing their job to AI: Look to the young. 

While older professionals might look at AI and feel threatened by it, younger people often see opportunity. “If you talk to some young creative who’s just gone to college [and] come out with a [degree in] photography, graphics, whatever it is,” he said, “They’re tremendously excited about these tools because they’re now able to do things that might have taken a $10 million budget.”

In other words, look for opportunities in AI to accelerate, enhance, and empower your own work.

And that’s generally the mindset we should all embrace: We are not powerless. We are powerful. AI is here to stay, and it’s up to all of us to make it work better for ourselves, our communities, our nations, and our world. 

Kategorie: Hacking & Security

BitoPro exchange links Lazarus hackers to $11 million crypto heist

Bleeping Computer - 20 Červen, 2025 - 19:54
The Taiwanese cryptocurrency exchange BitoPro claims the North Korean hacking group Lazarus is behind a cyberattack that led to the theft of $11,000,000 worth of cryptocurrency on May 8, 2025. [...]
Kategorie: Hacking & Security

Microsoft investigates OneDrive bug that breaks file search

Bleeping Computer - 20 Červen, 2025 - 18:39
​Microsoft is investigating a known OneDrive issue that is causing searches to appear blank for some users or return no results even when searching for files they know they've already uploaded. [...]
Kategorie: Hacking & Security
Syndikovat obsah