Security-Portal.cz is an online portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

Apple is intensely focused on its global AI efforts

Computerworld.com [Hacking News] - 30 April 2024 - 17:19

Not so long ago, I can remember how Apple’s “failures” in AI made critics smile. Those smiles now seem to have faded. Instead, Apple is accelerating at speed in its own AI efforts.

How do we know the company is moving fast? With more than 160,000 direct employees globally and hundreds of thousands more across partner firms, suppliers, and the currently beleaguered App economy, when the ship that is Apple moves in a direction, the rumor mill usually indicates the destination. Along those lines, we’ve heard a lot of talk over the last week.

Apple’s top secret AI labs

Apple has reportedly created a top-secret AI research lab in Zurich, Switzerland. The Financial Times also claims the company has hired hundreds of leading AI researchers over the last couple of years, many of them from Google. 

These teams are focused on developing highly advanced AI models. What kind of models? In essence, these seem to be super-lightweight, highly focused neural networks capable of delivering really useful tools that function on the device.

To get a sense of what these might do, Apple researchers recently released a wave of eight new AI models capable of running on the device. The company calls them OpenELM (“Open-source Efficient Language Models”). 

Model behavior

These are small models trained on public data that work on the device to solve focused tasks. The aim is to make it possible to run generative AI (genAI) tools on the device itself, rather than using servers, which preserves privacy, improves efficiency, and safeguards information. These solutions promise truly mobile AI devices that will work offline, with the code being made available to researchers on GitHub.
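For a rough sense of why such small models can live on a phone, here is a back-of-envelope sketch. The parameter counts are those reported for Apple's OpenELM releases; the fp16 assumption and the helper function are my illustration, not Apple's figures:

```python
# Back-of-envelope memory math for small on-device models.
# The parameter counts below are the sizes reported for Apple's
# OpenELM releases; 2 bytes/parameter assumes fp16 weights and
# ignores runtime overhead (KV cache, activations), so these are
# lower bounds, not measurements.

def fp16_footprint_gb(num_params: int) -> float:
    """Approximate size of fp16 weights in gigabytes."""
    return num_params * 2 / 1e9

model_sizes = {
    "270M": 270_000_000,
    "450M": 450_000_000,
    "1.1B": 1_100_000_000,
    "3B": 3_000_000_000,
}

for name, params in model_sizes.items():
    print(f"{name}: ~{fp16_footprint_gb(params):.2f} GB of weights")
```

Even the largest 3B variant comes in around 6 GB of weights at fp16, and quantization shrinks that further, which is what makes phone-resident inference plausible.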

These aren’t the first AI models to slip out of Apple’s research labs. Earlier this year, the company published AI models that can edit photos through written prompts, and another to help people optimize their use of an iPhone. Interestingly, six of the researchers named as authors of a paper describing the latter technology were former Google employees hired in the last two years. 

Making friends

Apple also seems to be exploring potential partnerships. In recent months, we’ve heard it has spoken with both Google and Baidu about making their AI models available on iPhones; last week, we heard it has recommenced discussions with OpenAI. 

This has led both to speculation about an AI-dedicated App Store, from which users could access bespoke selections of third-party AI solutions, and to rumors that Apple seeks to license third-party models to power AI features on its devices.

Apple also seems focused on augmenting its existing apps with AI. AppleInsider claims the company is testing a version of Safari with a built-in AI-powered intelligent search agent capable of providing summaries of websites.

Think ethically

Throughout all of this, Apple has maintained a tight silence about the totality of its AI strategy. Critically, however, it’s important to understand that the company is not interested in building solutions that provide incorrect or inappropriate responses, and would rather be cautious than introduce a flawed AI. It seeks to develop ethical, useful AI that provides real benefits to users while preserving privacy. 

This also extends to how it trains its AI models; if you look at its published research papers, you’ll find many of those it has revealed have been trained using publicly available data, rather than breaching copyright.

Apple is also investing in AI infrastructure

Apple will announce its financial results on Thursday. These aren’t expected to impress, but it seems likely much of the disappointment is already baked in. But for those of us curious about the extent to which Apple is preparing the ground for AI, it will be interesting to track how much the company is investing in capital expenditure. 

We know such spending is taking place:

  • Just over a week ago, the company announced an expansion to its Singapore campus to provide space for “new roles in AI and other key functions”, and is making similar investments in Indonesia.
  • Apple also recently acquired French AI firm Datakalab. That company specializes in on-device processing, algorithm compression, and embedded AI.
  • Hints that Apple will have some reliance on AI in the cloud are also visible in news that the company has appointed former Google executive Sumit Gupta as director of products, Apple Cloud. Gupta has years of experience in AI infrastructure, including a previous six-year stint as chief AI strategy officer and CTO of AI at IBM.

All of this suggests that sizable investments in the infrastructure required to power AI on two billion actively used Apple devices are already taking place.

Securing the servers with Apple Silicon

Investment extends to R&D for infrastructure. After all, it means something that Apple is allegedly considering building servers powered by Apple Silicon chips. Those servers could go some way toward providing the kind of computational power required to drive AI services in the cloud, while also mitigating the enormous energy consumption such services require.

These data points should provide some color as we accelerate toward introduction of new M4(?)-powered, AI-capable iPads at an online Apple keynote next week, followed by a little more insight at WWDC 2024 in June — and culminating with the big AI iPhone 16 reveal in fall.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Artificial Intelligence
Category: Hacking & Security

Millions of Malicious 'Imageless' Containers Planted on Docker Hub Over 5 Years

The Hacker News - 30 April 2024 - 15:36
Cybersecurity researchers have discovered multiple campaigns targeting Docker Hub by planting millions of malicious "imageless" containers over the past five years, once again underscoring how open-source registries could pave the way for supply chain attacks. "Over four million of the repositories in Docker Hub are imageless and have no content except for the repository
Category: Hacking & Security

Linux Kernel Vulnerability Exposes Unauthorized Data to Hackers

LinuxSecurity.com - 30 April 2024 - 14:47
A critical vulnerability was discovered in the Linux kernel's netfilter subsystem, specifically within the nf_tables component, posing potential risks to systems worldwide. The vulnerability, CVE-2024-26925, arises from a mutex being released improperly within the garbage collection (GC) sequence of nf_tables, which could lead to race conditions and compromise the stability and security of the Linux kernel.
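For admins wanting a quick triage, a minimal kernel-version check can be sketched as below. The fixed-version threshold in the example is a placeholder, not the actual patched release; consult your distribution's advisory for CVE-2024-26925:

```python
# Minimal sketch: compare the running kernel version against the fixed
# version listed in your distribution's advisory for CVE-2024-26925.
# The "6.9.0" used below is a placeholder threshold, not the real fix
# version -- check your vendor's security tracker before acting on it.
import platform
import re

def parse_kernel(release: str) -> tuple:
    """Extract the numeric x.y.z prefix from a kernel release string."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        raise ValueError(f"unrecognized kernel release: {release!r}")
    return tuple(int(p or 0) for p in m.groups())

def is_older_than(release: str, fixed: str) -> bool:
    return parse_kernel(release) < parse_kernel(fixed)

if __name__ == "__main__":
    running = platform.release()          # e.g. "6.5.0-28-generic"
    if is_older_than(running, "6.9.0"):   # placeholder threshold
        print(f"kernel {running} may predate the fix; verify via your distro advisory")
```

Distributions backport fixes into older version strings, so a version check alone is a heuristic; the authoritative answer is always the vendor advisory.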
Category: Hacking & Security

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

The Hacker News - 30 April 2024 - 12:36
The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS)…
Category: Hacking & Security

Considerations for Operational Technology Cybersecurity

The Hacker News - 30 April 2024 - 12:24
Operational Technology (OT) refers to the hardware and software used to change, monitor, or control the enterprise's physical devices, processes, and events. Unlike traditional Information Technology (IT) systems, OT systems directly impact the physical world. This unique characteristic of OT brings additional cybersecurity considerations not typically present in conventional IT security
Category: Hacking & Security

What Capgemini software chief learned about AI-generated code: highly usable, ‘too many unknowns’ for production

Computerworld.com [Hacking News] - 30 April 2024 - 12:00

Capgemini Engineering is made up of more than 62,000 engineers and scientists across the globe whose job it is to create products for a myriad of clients, from industrial companies building cars, trains, and planes to independent software vendors.

So, when AI-assisted code generation tools began flooding the marketplace in 2022, the global innovation and engineering consultancy took notice. After all, one-fifth of Capgemini’s business involves producing software products for a global clientele facing the demands of digital transformation initiatives.

According to Capgemini’s own survey data, seven in 10 organizations will be using generative AI (genAI) for software engineering in the next 12 months. Today, 30% of organizations are experimenting with it for software engineering, and an additional 42% plan to use it within a year. Only 28% of organizations are steering completely clear of the technology.

In fact, genAI already assists in writing nearly one in every eight lines of code, and that ratio is expected to hit one in every five lines of code over the next 12 months, according to Capgemini.

Jiani Zhang took over as the company’s chief software officer three years ago. In that time, she’s seen the explosion of genAI’s use to increase efficiencies and productivity among software development teams. But as good as it is at producing usable software, Zhang cautioned that genAI’s output isn’t yet ready for production — or even for creating a citizen developer workforce. There remain a number of issues developers and engineers will face when piloting its use, including security concerns, intellectual property rights issues, and the threat of malware.

Jiani Zhang, Chief Software Officer at Capgemini Engineering

That said, Zhang has embraced AI-generated software tools for a number of lower-risk tasks, and it has created significant efficiencies for her team. Computerworld spoke with Zhang about Capgemini Engineering’s use of AI; the following are excerpts from that interview.

What’s your responsibility at Capgemini? “I look after software that’s in products. The software is so pervasive that you actually need different categories of software and different ways it’s developed. And, you can imagine that there’s a huge push right now in terms of moving software [out the door].”

How did your journey with AI-generated software begin? “Originally, we thought about generative AI with a big focus on sort of creative elements. So, a lot of people were talking about building software, writing stories, building websites, generating pictures and the creation of new things in general. If you can generate pictures, why can’t you generate code? If you can write stories, why not write user stories or requirements that go into building software? That’s the mindset of the shift going on, and I think the reality is it’s a combination of a market-driven dynamic. Everyone’s kind of moving toward wanting to build a digital business. You’re effectively now competing with a lot of tech companies to hire developers to build these new digital platforms.

“So, many companies are thinking, ‘I can’t hire against these large tech companies out here in the Bay Area, for example. So, what do I do?’ They turn to AI…to deal with the fact that [they] don’t have the talent pool or the resources to actually build these digital things. That’s why I think it’s just a perfect storm, right now. There’s a lack of resources, and people really want to build digital businesses, and suddenly the idea of using generative AI to produce code can actually compensate for [a] lack of talent. Therefore, [they] can push ahead on those projects. I think that’s why there’s so much emphasis on [genAI software augmentation] and wanting to build towards that.”

How have you been using AI to create efficiencies in software development and engineering? “I would break out the software development life cycle almost into stages. There is a pre-coding phase. This is the phase where you’re writing the requirements. You’re generating the user stories, and you create epics. Your team does a lot of the planning on what they’re going to build in this area. We can see generative AI having an additive benefit there just generating a story for you. You can generate requirements using it. So, it’s helping you write things, which is what generative AI is good at doing, right? You can give it some prompts of where you want to go and it can generate these stories for you.

“The second element is that [software] building phase, which is coding. This is the phase people are very nervous about it and for very good reason, because the code generation aspect of generative AI is still almost like a little bit of wizardry. We’re not quite sure how it gets generated. And then there’s a lot of concerns regarding security, like where did this get generated from? Because, as we know, AI is still learning from something else. And you have to ask [whether] my generated code is going to be used by somebody else? So there’s a lot of interest in using it, but then there’s a lot of hesitancy in actually doing the generation side of it.

“And then you have the post-coding phase, which is everything from deployment, and testing, and all that. For that phase, I think there’s a lot of opportunity for not just generative AI, but AI in general, which is all focused around intelligent testing. So, for instance, how do you generate the right test cases? How do you know that you’re testing against the right things? We often see from a lot of clients that effectively, over the years, they’ve just added more and more tests to that phase, and so it got bigger and bigger and bigger. But nobody’s actually gone in and cleaned up that phase. So, you’re running a gazillion tests. Then you still have a bunch of defects because no one’s actually cleaned up the tests against the defects they are trying to detect. So, a lot of this works better with generative AI. Specifically, it can perform a lot of test prioritization. You can look at patterns of which tests are being used and not used. And, there’s less of a concern about something going wrong with that. I think AI tools make a very big impact in that area.

“You can see AI playing different roles in different areas. And I think that the front part has less risk and is easier to do. Maybe it doesn’t do as much as the whole code generation element, but again there’s so much hesitancy around being comfortable with the generated code.”
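The test-prioritization idea Zhang describes can be sketched in a few lines; the data shape, threshold, and failure-rate ranking below are illustrative assumptions, not Capgemini's actual tooling:

```python
# A minimal sketch of test prioritization: rank tests by how often
# they have actually caught defects in past CI runs, so never-failing
# tests can be surfaced for review and possible pruning.
from collections import Counter

def prioritize(history: list, min_runs: int = 5) -> list:
    """history: (test_name, failed) pairs from past CI runs.
    Returns test names ordered by failure rate, highest first,
    skipping tests with too few runs to judge."""
    runs, fails = Counter(), Counter()
    for name, failed in history:
        runs[name] += 1
        if failed:
            fails[name] += 1
    return sorted(
        (name for name in runs if runs[name] >= min_runs),
        key=lambda n: fails[n] / runs[n],
        reverse=True,
    )
```

Tests that never fail aren't necessarily useless, but surfacing them is the first step toward the cleanup of bloated test suites described above.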

How important is it to make sure that your existing code base is clean or error free before using AI code generation tools? “I think it depends on what you’re starting from. With any type of AI technology, you’re starting with some sort of structure, some sort of data. You have some labeled data, you have some unlabeled data, and an AI engine is just trying to determine patterns and probabilities. So, when you say you want to generate code, well, what are you basing that new code off of?

“If you were to create a large language model or any type of model, what’s your starting point? If your starting point is your code base only, then yes, all of the residual problems that you have will most likely be inherited, because it’s training on bad data. Thinking about that is how you should code. A lot of people think, ‘I’m not going to be so arrogant as to think that my code is the best.’

“The more generic method would be to leverage larger models with more code sets. But then the more code you have, the deeper you get into a security problem. Like, where does all that code come from? And am I contributing to someone else’s larger code set? And what’s really scary is, if you don’t know the code set well, is there a Trojan horse in there? So, there’s a lot of dynamics to it.

“A lot of the clients that we face love these technologies. It’s so good, because it presents an opportunity to solve a problem, which is the shortage of talent, so as to actually build a digital business without it. But then they’re really challenged. Do I trust the results of the AI? And do I have a large enough code base that I’m comfortable using, and not just imagining that some model will come from the ether to do this?”

How have you addressed the previous issue — going with a massive LLM codebase or sticking to smaller, more proprietary in-house code and data? “I think it depends on the sensitivity of the client. I think a lot of people are playing with the code generation element. I don’t think a lot of them are taking [AI] code generation to production because, like I said, there’s just so many unknowns in that area.

“What we find is more clients have figured out more of that pre-code phase, and they’re also focusing a lot on that post-code phase, because both of those are relatively low risk with a lot of gain, especially in the area of like testing because it’s a very well-known practice. There’s so much data that’s in there, you can quickly clean that up and get to some value. So, I think that’s a very low-hanging fruit. And then on the front-end side of it, you know a lot of people don’t like writing user stories, or the requirements are written poorly and so the amount of effort that can take away from that is meaningful.”

What are the issues you’ve run into with genAI code generation? “While it is the highest value [part of the equation]…, it is also about generating consistent code. But that’s the problem. Because generative AI is not prescriptive. So, when you tell it, ‘I want two ears and a tail that wags,’ it doesn’t actually give you a Labrador retriever every time. Sometimes it will give you a Husky. It’s just looking at what fits that [LLM]. So…when you change a parameter, it could generate completely new code. And then that completely new code means that you’re going to have to redo all of the integration, deployment, all those things that come off of it.

“There’s also a situation where even if you were able to contain your code set, build an LLM with highly curated, good engineering practices [and] software practices and complement it with your own data set — and generate code that you trust — you still can’t control whether the generated code will be the same code every single time when you make a change. I think the industry is still working to figure those elements out, refining and re-refining to see how you can have consistency.”
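One pragmatic response to the consistency problem described above is to fingerprint accepted generated code and flag drift whenever it is regenerated. This doesn't make generation deterministic; it just makes nondeterminism visible. The helpers below are an illustrative sketch, not a vendor feature:

```python
# Fingerprint each accepted piece of generated code, then flag any
# regeneration whose output no longer matches the accepted version.
import hashlib

def fingerprint(code: str) -> str:
    """Hash the code after stripping trailing whitespace per line,
    so purely cosmetic differences don't trigger alarms."""
    normalized = "\n".join(line.rstrip() for line in code.splitlines())
    return hashlib.sha256(normalized.encode()).hexdigest()

def has_drifted(accepted: str, regenerated: str) -> bool:
    """True if the regenerated code differs from the accepted baseline."""
    return fingerprint(accepted) != fingerprint(regenerated)
```

A drift flag then routes the new output back through review and the full integration pipeline, rather than letting silently changed code ship.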

What are your favorite AI code-augmentation platforms? “I think it’s quite varied. I think the challenge with this market is it’s very dynamic; they keep adding new feature sets and the new feature sets kind of overlap with each other. So, it’s very hard to determine what one is best. I think there are certain ones that are leading right now, but at the same time, the dynamics of the environment [are] such that you could see something introduced that’s completely new in the next eight weeks. So, it’s quite varied. I wouldn’t say that there is a favorite right now. I think everyone is learning at this point.”

How do you deal with code errors introduced by genAI? What tools do you use to discover and correct those errors, if any? “I think that then goes into your test problem. Like I said, there’s a consistency problem that fundamentally we have to take into account, because every time we generate code it could be generated differently. Refining your test set and using that as an intelligent way of testing is a really key area to make sure that you catch those problems. I personally believe that they’re there because the software development life cycle is so vast.

“It’s all about where people want to focus the post-coding phase. That testing phase is a critical element to actually getting any of this right. …It’s an area where you can quickly leverage the AI technologies and have minimal risk introduced to your production code. And, in fact, all it does is improve it. [The genAI] is helping you be smarter in running those test sets. And those test sets are then going to be highly beneficial to your generated code as well, because now you know what your audience is also testing against.

“So if the generated code is bad, you’ll catch it in these defects. It’s worth a lot of effort to look at that specific area because, like I said, it’s a low-risk element. There’s a lot of AI tools out there for that.

“And, not everything has to be generative AI, right? You know, AI and machine learning [have] been here for quite some time, and there’s a lot of work that’s already been done to refine [them]. So, there’s a lot of benefit and improvement that’s been done to those older tools. The market has this feeling that they need to adopt AI, but AI adoption hasn’t been the smoothest. So then [developers] are saying, ‘Let’s just leapfrog and let’s just get into using generative AI.’ The reality is that you can actually fix a lot of these things based off of technology that didn’t just come to market 12 months ago. I think there’s definitely benefit in that.”

What generative AI tools have you tried and what kind of success have you seen? “We’ve tried almost all of them. That’s the short answer. And they’ve all been very beneficial. I think that the reality is, like I said before, the landscape of genAI tools today is pretty comparable between the different cloud service providers. I don’t see a leading one versus a non-leading one. I feel like they all can do some pretty nice things.

“I think that the challenge is being up to date with what’s available because they keep releasing new features. That is encouraging, but at the same time you have to find a way to implement and use the technology in a meaningful way. At this point, the speed at which they’re pushing out these features versus the adoption in the industry is unmatched. I think there’s a lot more features than actual adoption.

“We have our Capgemini Research Institute, through which we do a lot of polls with executives, and what we found is about 30% of organizations are experimenting with genAI. And probably another 42% are going to be playing with it for the next 12 months. But from an adoption perspective, among those actually using it in software engineering, I think it’s less than one-third that’s really fundamentally going to be impacting their production flow with generated code. So I think the market is still very much in the experimentation phase. And that’s why all the tools [are] pretty comparable in terms of what they can do and what they can’t do.

“And again, it’s not really about whether the feature set is greater in one platform versus another. I think it’s more the application of it to solving a business problem that makes the impact.”

Do you use AI or generative AI for any software development? Forget pre-development and post-development for the moment. Do you actually use it to generate code that you use? “We do. Even for us, it is in an experimentation phase. But we have put in a lot of work ourselves in terms of refining generative AI engines so that we can generate consistent code. We’ve actually done quite a lot of experimentation and also proof of concepts with clients on all three of those phases [pre-code modeling, code development, post-code testing]. Like I said, the pre- and post- are the easier ones because there’s less risk.

“Now, whether or not the client is comfortable enough for that [AI] generated code to go to production is a different case. So, that proof of concept we’re doing is not necessarily production. And I think…taking it to production is still something that industry has to work through in terms of their acceptance.”

How accurate is the code generated by your AI tools? Or, to put it another way, how often is that code usable? I’ve heard from other experts the accuracy rate ranges anywhere from 50% to 80% and even higher. What are you finding? “I think the code is highly usable, to be honest. I think it’s actually a pretty high percentage because the generated code, it’s not wrong. I think the concern with generated code is not whether or not it’s written correctly. I think it is written correctly. The problem, as I said, is around how that code was generated, whether there were some innate or embedded defects in it that people don’t know about. Then you know the other question is where did that generated code come from, and whether or not the generated code that you’ve created now feeds a larger pool, and is that secure?

“So imagine if I’m an industrial company and I want to create this solution, and I generate some code [whose base] came from my competitor. How do I know if this is my IP or their IP? Or if I created it, did that somehow through the ether migrate to somebody else generating that exact same code? So, it gets very tricky in that sense unless you have a very privatized genAI system.”

Even when the code itself is not usable for whatever reason, can AI-generated code still be useful? “It’s true. There’s a lot of code that in the beginning may not be usable. It’s like with any learning system, you need to give it more prompts in order to tailor it to what you need. So, if you think about basic engineering, you define some integers first. If genAI can do that, you’ve now saved yourself some time from having to type in all the defined parameters and integer parameters and all that stuff, because that could all be pre-generated.

“If it doesn’t work, you can give it an additional prompt to say, ‘Well, actually I’m looking for a different set of cycles or different kind of run times,’ and then you can tweak that code as well. So instead of you starting from scratch, just like writing a paper, you can have someone else write an outline, and you can always use some of the intro, the ending, some of these things that isn’t the actual meat of the content. And the meat of the content you can continue to refine with generative AI, too. So definitely, it’s a big help. It saves you time from writing it from scratch.”

Has AI or genAI allowed you to create a citizen developer work force? “I think it’s still early. AI allows your own team to be a little faster in doing some of the things that they don’t necessarily want to do, or it can cut down the toil from a developer’s perspective of generating test cases or writing user stories and whatnot. It’s pretty good at generating the outline of a code from a framework perspective. But for it to do code generation independently, I think we’re still relatively early on that.”

How effective has AI code generation been in creating efficiencies and increasing productivity? “Productivity, absolutely. I think that’s a really strong element of the developer experience. The concept is that if you hire some really good software developers, they want to be building features and new code and things of that sort, and they don’t like doing the pre-code responsibilities. So if you can solve more of that toil for them, get rid of more of those mundane, repetitive things, then they can be focused on more of the value generation element of it.

“So, for productivity, I think it’s a big boost, but it’s not about developing more code. I think often times it’s about developing better code. So instead of saying I spent hours of my day just writing a basic structure, that’s now pre-generated for me. And now I can think about how do I optimize runtimes. How do I optimize the consumption or storage or whatnot?

“So, it frees up your mind to think about additional optimizations to make your code better, rather than just figuring out what the basis of the code is.”

One of the other uses I’ve heard from engineers and developers is that genAI is often not even used to generate new code, but it’s most often used to update old code. Are you also seeing that use case? “I think all the genAI tools are going to be generating new code, but the difference is that one very important use case which you just highlighted: the old code migration element.

“What we also find is that a lot of clients have systems that are heavily outdated — systems that are 20, maybe 30 years old. There could be proprietary code in there that nobody understands anymore, and they’ve already lost that skill set. What you can do with AI-assisted code generation and your own team is create a corpus of knowledge of understanding of the old code, so you can actually now ingest the old code and understand what that meta model is — what is it you’re trying to do? What are the right inputs and outputs? From there, you can actually generate new code in a new language that’s actually maintainable. And that’s a huge, huge benefit for clients.

“So, this is a great application of AI technology where it’s still generating your code, it’s not actually changing the old code. You don’t want it to change the old code, because it would still be unmanageable. So what you want is to have it understand the concept of what you’re trying to do so that you can actually generate new code that is much more maintainable.”

Are there any other tips you can offer people who are considering using AI for code generation? “I think it’s a great time, because the technology is super exciting. There’s a lot of different choices for people to play around with and I think there’s a lot of low-hanging fruit. I feel like generative AI is also a huge benefit, because it’s introduced or reintroduced people to the concept that AI is actually attainable.

“There’s a lot of people who think AI is like the top-of-the-pyramid technology. They think, I’m going to get there, but only if I clean up all of my data and I get all my data ingested correctly. Then, if I go through all those steps, I can use AI. But the pervasiveness and the attractiveness of generative AI is that it is attainable even before that. It’s OK to start now. You don’t have to clean up everything before you get to that point. You can actually iterate and gain improvements along the way.

“If you look at the software development life cycle, there’s a lot of areas right now that could be low-risk uses of AI. I wouldn’t even say just productivity. It’s just about it being more valuable to the outcomes that you want to generate, and so it’s a good opportunity to start with. It’s not the be-all and end-all. It’s not going to be your citizen developer, you know. But it augments your team. It increases the productivity. It reduces the toil. So, it’s just a good time to get started.”

Developer, Engineer, Generative AI
Category: Hacking & Security

Managed Detection and Response in 2023

Kaspersky Securelist - 30 April, 2024 - 11:00

Managed Detection and Response in 2023 (PDF)

Alongside other security solutions, we provide Kaspersky Managed Detection and Response (MDR) to organizations worldwide, delivering expert monitoring and incident response 24/7. The task involves collecting telemetry for analysis by both machine-learning (ML) technologies and our dedicated Security Operations Center (SOC). On detection of a security incident, SOC puts forward a response plan, which, if approved by the customer, is actioned at the endpoint protection level. In addition, our experts give recommendations on organizing incident investigation and response.

In the annual MDR report, we present the results of analysis of SOC-detected incidents, supplying answers to the following questions:

  • Who are your potential attackers?
  • How do they currently operate?
  • How can their actions be detected?

The report covers the tactics, techniques and tools most commonly used by threat actors, the nature of high-severity incidents and their distribution among MDR customers by geography and industry.

Security incident statistics for 2023 Security events

In 2023, Kaspersky Managed Detection and Response handled more than 431,000 alerts about possible suspicious activity. Of these, more than 117,000 were analyzed by ML technologies, and over 314,000 by SOC analysts. Of the manually processed security events, slightly under 90% turned out to be false positives. What is more, around 32,000 security alerts were linked to approximately 14,000 incidents reported to MDR customers.
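A quick arithmetic sanity check on these round numbers (all figures taken from the paragraph above) shows they are internally consistent:

```python
# Figures reported in the paragraph above (all rounded).
total_alerts = 431_000
ml_processed = 117_000
soc_processed = 314_000
false_positive_rate = 0.90           # "slightly under 90%" of SOC-handled events
incident_alerts = 32_000             # alerts linked to reported incidents
incidents = 14_000

# The two processing paths account for the total volume.
assert ml_processed + soc_processed == total_alerts

# If roughly 90% of manually processed events were false positives,
# about 10% were real, which lines up with the ~32,000 incident-linked alerts.
true_positives = soc_processed * (1 - false_positive_rate)

# On average, each reported incident bundled a little over two alerts.
alerts_per_incident = incident_alerts / incidents

print(f"~{true_positives:,.0f} true positives, "
      f"{alerts_per_incident:.1f} alerts per incident")
```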

Geographic distribution of users

In 2023, the largest concentration of Kaspersky MDR customers was in the European region (38%). In second place came Russia and the CIS (28%), in third the Asia-Pacific region (16%).

Distribution of Kaspersky MDR customers by region, 2023

Distribution of incidents by industry

Since the number of incidents largely depends on the scale of monitoring, the most objective picture comes from normalizing incident counts by the number of monitored endpoints. The diagram below shows the expected number of incidents of a given criticality per 10,000 endpoints, broken down by industry.

Expected number of incidents of varying degrees of criticality per 10,000 endpoints in different industries, 2023

In 2023, the most incidents per 10,000 devices were detected in mass media organizations, development companies and government agencies.
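The per-10,000-endpoints normalization behind these figures is straightforward to compute; the numbers in the example below are invented purely to illustrate the calculation:

```python
def incidents_per_10k(incident_count: int, monitored_endpoints: int) -> float:
    """Expected number of incidents per 10,000 monitored endpoints."""
    return incident_count / monitored_endpoints * 10_000

# Hypothetical customer: 45 incidents observed across 150,000 endpoints,
# i.e. roughly 3 incidents per 10,000 endpoints.
rate = incidents_per_10k(45, 150_000)
print(rate)
```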

In terms of absolute number of incidents detected, the largest number of incidents worldwide in 2023 were recorded in the financial sector (18.3%), industrial enterprises (16.9%) and government agencies (12.5%).

Distribution of the number of Kaspersky MDR customers, all identified incidents and critical incidents by industry, 2023

General observations and recommendations

Based on the analysis of incidents detected in 2023, and on our many years of experience, we can identify the following trends in security incidents and protection measures:

  • Every year we identify targeted attacks carried out with direct human involvement. To effectively detect such attacks, besides conventional security monitoring, threat hunting is required.
  • The effectiveness of the defense mechanisms deployed by enterprises is best measured by a range of offensive exercises. Year after year, we see rising interest in projects of this kind.
  • In 2023, we identified fewer high-severity malware incidents than in previous years, but the number of incidents of medium and low criticality increased. The most effective approach to guarding against such incidents is through multi-layered protection.
  • Leveraging the MITRE ATT&CK® knowledge base supplies additional contextual information for attack detection and investigation teams. Even the most sophisticated attacks consist of simple steps and techniques, with detection of just a single step often uncovering the entire attack.

Detailed information about attacker tactics, techniques and tools, incident detection and response statistics, and defense recommendations can be found in the full report (PDF).

New U.K. Law Bans Default Passwords on Smart Devices Starting April 2024

The Hacker News - 30 April, 2024 - 07:57
The U.K. National Cyber Security Centre (NCSC) is calling on manufacturers of smart devices to comply with new legislation that prohibits them from using default passwords, effective April 29, 2024. "The law, known as the Product Security and Telecommunications Infrastructure act (or PSTI act), will help consumers to choose smart devices that have been designed to
Category: Hacking & Security
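The law’s core requirement is that each device ship with a unique credential rather than a shared factory default. A compliant factory-provisioning step could look roughly like the sketch below; this is an illustration, not guidance drawn from the PSTI act itself, and the function name is invented:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def per_device_password(length: int = 12) -> str:
    """Generate a unique, unpredictable factory credential for one device."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each unit coming off the line gets its own credential,
# instead of a shared default like "admin"/"admin".
pw_a, pw_b = per_device_password(), per_device_password()
```

Using the `secrets` module (rather than `random`) matters here, because factory credentials must not be guessable from one another.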

Google Prevented 2.28 Million Malicious Apps from Reaching Play Store in 2023

The Hacker News - 29 April, 2024 - 19:07
Google on Monday revealed that almost 200,000 app submissions to its Play Store for Android were either rejected or remediated to address issues with access to sensitive data such as location or SMS messages over the past year. The tech giant also said it blocked 333,000 bad accounts from the app storefront in 2023 for attempting to distribute malware or for repeated policy violations. "In 2023,
Category: Hacking & Security

How we fought bad apps and bad actors in 2023

Google Security Blog - 29 April, 2024 - 17:59
Posted by Steve Kafka and Khawaja Shams (Android Security and Privacy Team), and Mohet Saxena (Play Trust and Safety)

A safe and trusted Google Play experience is our top priority. We leverage our SAFE (see below) principles to provide the framework to create that experience for both users and developers. Here's what these principles mean in practice:

  • (S)afeguard our Users. Help them discover quality apps that they can trust.
  • (A)dvocate for Developer Protection. Build platform safeguards to enable developers to focus on growth.
  • (F)oster Responsible Innovation. Thoughtfully unlock value for all without compromising on user safety.
  • (E)volve Platform Defenses. Stay ahead of emerging threats by evolving our policies, tools and technology.

With those principles in mind, we’ve made recent improvements and introduced new measures to continue to keep Google Play’s users safe, even as the threat landscape continues to evolve. In 2023, we prevented 2.28 million policy-violating apps from being published on Google Play [1], in part thanks to our investment in new and improved security features, policy updates, and advanced machine learning and app review processes. We have also strengthened our developer onboarding and review processes, requiring more identity information when developers first establish their Play accounts. Together with investments in our review tooling and processes, we identified bad actors and fraud rings more effectively and banned 333K bad accounts from Play for violations like confirmed malware and repeated severe policy violations.

Additionally, almost 200K app submissions were rejected or remediated to ensure proper use of sensitive permissions such as background location or SMS access. To help safeguard user privacy at scale, we partnered with SDK providers to limit sensitive data access and sharing, enhancing the privacy posture for over 31 SDKs impacting 790K+ apps. We also significantly expanded the Google Play SDK Index, which now covers the SDKs used in almost 6 million apps across the Android ecosystem. This valuable resource helps developers make better SDK choices, boosts app quality and minimizes integration risks.

Protecting the Android Ecosystem

Building on our success with the App Defense Alliance (ADA), we partnered with Microsoft and Meta as steering committee members in the newly restructured ADA under the Joint Development Foundation, part of the Linux Foundation family. The Alliance will support industry-wide adoption of app security best practices and guidelines, as well as countermeasures against emerging security risks.

Additionally, we announced new Play Store transparency labeling to highlight VPN apps that have completed an independent security review through App Defense Alliance’s Mobile App Security Assessment (MASA). When a user searches for VPN apps, they will now see a banner at the top of Google Play that educates them about the “Independent security review” badge in the Data safety section. This helps users see at a glance that a developer has prioritized security and privacy best practices and is committed to user safety.

To better protect our customers who install apps outside of the Play Store, we made Google Play Protect’s security capabilities even more powerful with real-time scanning at the code-level to combat novel malicious apps. Our security protections and machine learning algorithms learn from each app submitted to Google for review and we look at thousands of signals and compare app behavior. This new capability has already detected over 5 million new, malicious off-Play apps, which helps protect Android users worldwide.

More Stringent Developer Requirements and Guidelines

Last year we updated Play policies around Generative AI apps, disruptive notifications, and expanded privacy protections. We are also raising the bar for new personal developer accounts by introducing testing requirements that developers must meet before they can make their app available on Google Play. By testing their apps, getting feedback, and ensuring everything is ready before they launch, developers are able to bring more high-quality content to Play users. To increase trust and transparency, we’ve introduced expanded developer verification requirements, including D-U-N-S numbers for organizations and a new “About the developer” section.

To give users more control over their personal data, apps that enable account creation now need to provide an option to initiate account and data deletion from within the app and online. This web requirement is especially important so that a user can request account and data deletion without having to reinstall an app. To simplify the user experience, we have also incorporated this as a feature within the Data safety section of the Play Store.

With each iteration of the Android operating system (including its robust set of APIs), a myriad of enhancements are introduced, aiming to elevate the user experience, bolster security protocols, and optimize the overall performance of the Android platform. To further safeguard our customers, approximately 1.5 million applications that do not target the most recent APIs are no longer available in the Play Store to new users who have updated their devices to the latest Android version.

Looking Ahead

Protecting users and developers on Google Play is paramount and ever-evolving. We're launching new security initiatives in 2024, including removing apps from Play that are not transparent about their privacy practices.

We also recently filed a lawsuit in federal court against two fraudsters who made multiple misrepresentations to upload fraudulent investment and crypto exchange apps on Play to scam users. This lawsuit is a critical step in holding these bad actors accountable and sending a clear message that we will aggressively pursue those who seek to take advantage of our users.

We're constantly working on new ways to protect your experience on Google Play and across the entire Android ecosystem, and we look forward to sharing more.

Notes
  1. In accordance with the EU's Digital Services Act (DSA) reporting requirements, Google Play now calculates policy violations based on developer communications sent. 

Category: Hacking & Security

The EU has decided to open up iPadOS

Computerworld.com [Hacking News] - 29 April, 2024 - 17:25

The EU has given Apple just six months to open up iPads in the same way it’s been forced to open up iPhones in Europe. The decision follows an EU determination that the iPad — which leads but does not dominate the tablet market — should be seen as a “gatekeeper.”

Apple will not have much time to comply.

What’s really interesting, as noted by AppleInsider, is the extent to which the decision to force Apple to open up iPadOS seems to have been made even though the EU’s competition chief, Margrethe Vestager, says the company doesn’t actually meet the criteria for enforcement.

It doesn’t meet the threshold, so we’ll do it anyway

“Today, we have brought Apple’s iPadOS within the scope of the DMA obligations,” said Vestager.  “Our market investigation showed that despite not meeting the thresholds, iPadOS constitutes an important gateway on which many companies rely to reach their customers.”

This triumph of ideology is just the latest poor decision from the trading bloc and comes as Apple gets ready to introduce new software, features, and artificial intelligence to its devices at its Worldwide Developers Conference in June.

With that in mind, I expect Apple’s software development teams need Europe’s latest decision about as much as the rest of us need an unexpected utility bill. That said, I imagine the challenge has not been entirely unexpected.

Sour grapes?

To some extent you have to see that Europe is playing defense.

Not only has it lost its advantages in space research to Big Tech firms such as SpaceX, but the continent has arguably failed to spawn a significant homegrown Big Tech competitor. This leaves Europe reliant on US technology firms, so it’s clear the EU is attempting to loosen the hold those firms have on digital business in Europe by applying the Digital Markets Act.

The EU isn’t alone; US regulators are equally determined to dent the power Apple and other major tech firms hold. Fundamental to many of the arguments made is the claim that consumers will see lower prices as a result of more open competition, but I’m highly doubtful that will happen.

So, what happens next?

Apple will likely attempt to resist the EU call to open up the iPad, but will eventually be forced to comply. Meanwhile, as sideloading intensifies on iPhones, we will see whether user privacy and safety do indeed turn out to be compatible with sideloading.

In an ideal world, the EU would hold off on any action involving iPads pending the results of that experiment. It makes sense for regulators and Apple to work constructively together to protect against any unexpected consequences as a result of the DMA before widening the threat surface. 

Perhaps user security isn’t something regulators take seriously, even though government agencies across the EU and elsewhere are extremely concerned at potential risks. Even in the US, regulators seem to want us to believe Apple’s “cloak” of privacy and security is actually being used to justify anti-competitive behavior. 

Do the benefits exceed the risks?

Experientially, at least, there’s little doubt that platforms (including the Mac) that support sideloading face more malicious activity than those that don’t. Ask any security expert and they will tell you that in today’s threat environment, it’s only a matter of time until even the most secure systems are overwhelmed. So it is inevitable some hacker somewhere will find a way to successfully exploit Apple’s newly opened platforms.

It stands to reason that ransomware, adware, and fraud attempts will increase and it is doubtful the EU will shoulder its share of the burden to protect people against any such threats that emerge as a result of its legislation.

For most consumers, the biggest benefit will be the eventual need to purchase software from across multiple storefronts, and to leave valuable personal and financial details with a wider range of payment-processing firms.

The joy I personally feel at these “improvements” is far from tangible.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Apple App Store, iPad, Mobile
Category: Hacking & Security

China-Linked 'Muddling Meerkat' Hijacks DNS to Map Internet on Global Scale

The Hacker News - 29 April, 2024 - 15:46
A previously undocumented cyber threat dubbed Muddling Meerkat has been observed undertaking sophisticated domain name system (DNS) activities in a likely effort to evade security measures and conduct reconnaissance of networks across the world since October 2019. Cloud security firm Infoblox described the threat actor as likely affiliated with the
Category: Hacking & Security

Ubuntu 24.04 Security Enhancements Analyzed [Updated]

LinuxSecurity.com - 29 April, 2024 - 13:00
The release of Ubuntu 24.04 LTS, also known as Noble Numbat, brings various security enhancements and exciting new features. These improvements include unprivileged user namespace restrictions, binary hardening, AppArmor 4, disabling old TLS versions, and upstream kernel security features.
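One of these features, the unprivileged user namespace restriction, is exposed through a kernel sysctl. A minimal probe is sketched below; the sysctl path is assumed from Ubuntu's release notes, and on kernels without this AppArmor feature the file simply will not exist:

```python
from pathlib import Path

# Ubuntu 23.10+ gates unprivileged user namespaces behind AppArmor;
# the toggle is exposed under /proc/sys. Path assumed from Ubuntu docs.
SYSCTL = Path("/proc/sys/kernel/apparmor_restrict_unprivileged_userns")

if SYSCTL.exists():
    status = "enabled" if SYSCTL.read_text().strip() == "1" else "disabled"
else:
    status = "unavailable"  # kernel without this AppArmor feature

print(f"unprivileged userns restriction: {status}")
```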
Category: Hacking & Security

Critical Security Update for Google Chrome: Implications & Recommendations

LinuxSecurity.com - 29 April, 2024 - 13:00
The release of Google Chrome 124 addresses four vulnerabilities, including a critical security flaw that can enable attackers to execute arbitrary code. Over the next few days or weeks, the Google Stable channel will be updated to 124.0.6367.78 for Linux. Security practitioners, Linux admins, infosec professionals, and sysadmins must be aware of the implications of such vulnerabilities and take appropriate action.
Category: Hacking & Security