Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

Considerations for Operational Technology Cybersecurity

The Hacker News - 30 April 2024 - 12:24
Operational Technology (OT) refers to the hardware and software used to change, monitor, or control the enterprise's physical devices, processes, and events. Unlike traditional Information Technology (IT) systems, OT systems directly impact the physical world. This unique characteristic of OT brings additional cybersecurity considerations not typically present in conventional IT security.
Category: Hacking & Security

What Capgemini software chief learned about AI-generated code: highly usable, ‘too many unknowns’ for production

Computerworld.com [Hacking News] - 30 April 2024 - 12:00

Capgemini Engineering is made up of more than 62,000 engineers and scientists across the globe whose job it is to create products for a myriad of clients, from industrial companies building cars, trains, and planes to independent software vendors.

So, when AI-assisted code generation tools began flooding the marketplace in 2022, the global innovation and engineering consultancy took notice. After all, one-fifth of Capgemini’s business involves producing software products for a global clientele facing the demands of digital transformation initiatives.

According to Capgemini’s own survey data, seven in 10 organizations will be using generative AI (genAI) for software engineering in the next 12 months. Today, 30% of organizations are experimenting with it for software engineering, and an additional 42% plan to use it within a year. Only 28% of organizations are steering completely clear of the technology.

In fact, genAI already assists in writing nearly one in every eight lines of code, and that ratio is expected to hit one in every five lines of code over the next 12 months, according to Capgemini.

Jiani Zhang took over as the company’s chief software officer three years ago. In that time, she’s seen the explosion of genAI’s use to increase efficiencies and productivity among software development teams. But as good as it is at producing usable software, Zhang cautioned that genAI’s output isn’t yet ready for production — or even for creating a citizen developer workforce. There remain a number of issues developers and engineers will face when piloting its use, including security concerns, intellectual property rights issues, and the threat of malware.

Jiani Zhang, Chief Software Officer at Capgemini Engineering 

Capgemini

That said, Zhang has embraced AI-generated software tools for a number of lower-risk tasks, and it has created significant efficiencies for her team. Computerworld spoke with Zhang about Capgemini Engineering’s use of AI; the following are excerpts from that interview.

What’s your responsibility at Capgemini? “I look after software that’s in products. The software is so pervasive that you actually need different categories of software and different ways it’s developed. And, you can imagine that there’s a huge push right now in terms of moving software [out the door].”

How did your journey with AI-generated software begin? “Originally, we thought about generative AI with a big focus on sort of creative elements. So, a lot of people were talking about building software, writing stories, building websites, generating pictures and the creation of new things in general. If you can generate pictures, why can’t you generate code? If you can write stories, why not write user stories or requirements that go into building software? That’s the mindset shift going on, and I think the reality is it’s a combination of a market-driven dynamic. Everyone’s kind of moving toward wanting to build a digital business. You’re effectively now competing with a lot of tech companies to hire developers to build these new digital platforms.

“So, many companies are thinking, ‘I can’t hire against these large tech companies out here in the Bay Area, for example. So, what do I do?’ They turn to AI…to deal with the fact that [they] don’t have the talent pool or the resources to actually build these digital things. That’s why I think it’s just a perfect storm, right now. There’s a lack of resources, and people really want to build digital businesses, and suddenly the idea of using generative AI to produce code can actually compensate for [a] lack of talent. Therefore, [they] can push ahead on those projects. I think that’s why there’s so much emphasis on [genAI software augmentation] and wanting to build towards that.”

How have you been using AI to create efficiencies in software development and engineering? “I would break out the software development life cycle almost into stages. There is a pre-coding phase. This is the phase where you’re writing the requirements. You’re generating the user stories, and you create epics. Your team does a lot of the planning on what they’re going to build in this area. We can see generative AI having an additive benefit there just generating a story for you. You can generate requirements using it. So, it’s helping you write things, which is what generative AI is good at doing, right? You can give it some prompts of where you want to go and it can generate these stories for you.
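In practice, much of this pre-coding assistance comes down to prompt construction. A minimal sketch in Python; the template, field names, and function are invented for this example and do not reflect any specific tool's API:

```python
# Hypothetical sketch of pre-coding assistance: assembling a prompt that
# asks a genAI model to draft an agile user story from a short feature
# description. The template and function names are illustrative only.

STORY_TEMPLATE = (
    "Write an agile user story for the feature below.\n"
    "Use the form: As a <role>, I want <goal>, so that <benefit>.\n"
    "Include 3 acceptance criteria.\n"
    "\n"
    "Feature: {feature}\n"
    "Target user: {role}\n"
)

def build_story_prompt(feature: str, role: str) -> str:
    """Return a prompt ready to send to a text-generation model."""
    return STORY_TEMPLATE.format(feature=feature, role=role)

prompt = build_story_prompt("export monthly reports as PDF", "finance analyst")
print(prompt)
```

The model call itself is deliberately left out; the point is that the planning artifacts (stories, requirements, epics) are text, which is exactly what generative models are good at producing from a structured prompt.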

“The second element is that [software] building phase, which is coding. This is the phase people are very nervous about, and for very good reason, because the code generation aspect of generative AI is still almost like a little bit of wizardry. We’re not quite sure how it gets generated. And then there are a lot of concerns regarding security, like where did this get generated from? Because, as we know, AI is still learning from something else. And you have to ask [whether] my generated code is going to be used by somebody else? So there’s a lot of interest in using it, but then there’s a lot of hesitancy in actually doing the generation side of it.

“And then you have the post-coding phase, which is everything from deployment, and testing, and all that. For that phase, I think there’s a lot of opportunity for not just generative AI, but AI in general, which is all focused around intelligent testing. So, for instance, how do you generate the right test cases? How do you know that you’re testing against the right things? We often see from a lot of clients that over the years they’ve just added more and more tests to that phase, and so it got bigger and bigger and bigger. But nobody’s actually gone in and cleaned up that phase. So, you’re running a gazillion tests. Then you still have a bunch of defects, because no one’s actually cleaned up the tests against the defects they are trying to detect. So, a lot of this can be curated better with generative AI. Specifically, it can perform a lot of test prioritization. You can look at patterns of which tests are being used and not used. And there’s less of a concern about something going wrong with that. I think AI tools make a very big impact in that area.
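The test-prioritization idea described here can be sketched simply: rank tests by historical failure rate so the most defect-prone ones run first, and flag long-running, never-failing tests as cleanup candidates. The data, names, and thresholds below are invented for illustration:

```python
# Illustrative test prioritization: order tests by historical failure
# rate and surface never-failing tests as candidates for pruning.
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    runs: int
    failures: int

def prioritize(tests, prune_below=0.01):
    """Return (tests ordered by failure rate, pruning candidates)."""
    scored = [(t.failures / t.runs if t.runs else 0.0, t) for t in tests]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    ordered = [t for _, t in scored]
    # Tests with a long history that essentially never fail are the
    # "nobody cleaned this up" candidates Zhang describes.
    prune = [t for rate, t in scored if rate < prune_below and t.runs > 100]
    return ordered, prune

history = [
    TestStats("test_login", runs=500, failures=40),
    TestStats("test_legacy_export", runs=500, failures=0),
    TestStats("test_checkout", runs=500, failures=5),
]
ordered, prune = prioritize(history)
print([t.name for t in ordered])  # most failure-prone first
print([t.name for t in prune])    # long-running, never-failing candidates
```

A real system would weigh code-coverage overlap and recency as well; the point is that this phase is data-rich and low risk, which is why it is attractive for AI.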

“You can see AI playing different roles in different areas. And I think that the front part has less risk and is easier to do. Maybe it doesn’t do as much as the whole code generation element, but again there’s so much hesitancy around being comfortable with the generated code.”

How important is it to make sure that your existing code base is clean or error free before using AI code generation tools? “I think it depends on what you’re starting from. With any type of AI technology, you’re starting with some sort of structure, some sort of data. You have some labeled data, you have some unlabeled data, and an AI engine is just trying to determine patterns and probabilities. So, when you say you want to generate code, well, what are you basing that new code off of?

“If you were to create a large language model or any type of model, what’s your starting point? If your starting point is your code base only, then yes, all of the residual problems that you have will most likely be inherited, because it’s training on bad data. That’s something you have to think about before you code this way. A lot of people say, ‘I’m not arrogant enough to think that my code is the best.’

“The more generic method would be to leverage larger models with more code sets. But the more code you have, the deeper you get into a security problem. Like, where does all that code come from? And am I contributing to someone else’s larger code set? And what’s really scary, if you don’t know the code set well, is: is there a Trojan horse in there? So, there’s a lot of dynamics to it.

“A lot of the clients that we face love these technologies. It’s so good, because it presents an opportunity to solve a problem, which is the shortage of talent, and to actually build a digital business despite that. But then they’re really challenged. Do I trust the results of the AI? And do I have a large enough code base that I’m comfortable using, and not just imagining that some model will come from the ether to do this?”

How have you addressed the previous issue — going with a massive LLM codebase or sticking to smaller, more proprietary in-house code and data? “I think it depends on the sensitivity of the client. I think a lot of people are playing with the code generation element. I don’t think a lot of them are taking [AI] code generation to production because, like I said, there’s just so many unknowns in that area.

“What we find is more clients have figured out more of that pre-code phase, and they’re also focusing a lot on that post-code phase, because both of those are relatively low risk with a lot of gain, especially in an area like testing, because it’s a very well-known practice. There’s so much data in there, you can quickly clean that up and get to some value. So, I think that’s very low-hanging fruit. And then on the front-end side of it, a lot of people don’t like writing user stories, or the requirements are written poorly, so the amount of effort it can take away is meaningful.”

What are the issues you’ve run into with genAI code generation? “While it is the highest value [part of the equation]…, it is also about generating consistent code. But that’s the problem. Because generative AI is not prescriptive. So, when you tell it, ‘I want two ears and a tail that wags,’ it doesn’t actually give you a Labrador retriever every time. Sometimes it will give you a Husky. It’s just looking at what fits that [LLM]. So…when you change a parameter, it could generate completely new code. And that completely new code means that you’re going to have to redo all of the integration, deployment, all those things that come off of it.

“There’s also a situation where even if you were able to contain your code set, build an LLM with highly curated, good engineering practices [and] software practices and complement it with your own data set — and generate code that you trust — you still can’t control whether the generated code will be the same code every single time when you make a change. I think the industry is still working to figure those elements out, refining and re-refining to see how you can have consistency.”
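One way to measure the consistency problem described here is to regenerate code for the same prompt several times and compare normalized outputs. A hypothetical sketch; `generate_code` is a stand-in for whatever model call you use, and the deterministic stub exists only so the example runs:

```python
# Hypothetical consistency check: generate code for the same prompt
# several times and count how many distinct (whitespace-normalized)
# outputs the model produces. One distinct digest means stable output.
import hashlib

def normalize(source: str) -> str:
    """Strip whitespace differences so only real changes remain."""
    return "\n".join(line.strip() for line in source.strip().splitlines())

def consistency_report(generate_code, prompt: str, trials: int = 5) -> int:
    digests = set()
    for _ in range(trials):
        out = generate_code(prompt)
        digests.add(hashlib.sha256(normalize(out).encode()).hexdigest())
    return len(digests)

# A deterministic stub stands in for a real model here.
def stub(prompt: str) -> str:
    return "def add(a, b):\n    return a + b\n"

print(consistency_report(stub, "write an add function"))  # 1
```

Against a real model, anything greater than 1 signals the regenerate-and-reintegrate churn Zhang warns about; hashing is a crude proxy, since even semantically equivalent code hashes differently.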

What are your favorite AI code-augmentation platforms? “I think it’s quite varied. I think the challenge with this market is it’s very dynamic; they keep adding new feature sets and the new feature sets kind of overlap with each other. So, it’s very hard to determine what one is best. I think there are certain ones that are leading right now, but at the same time, the dynamics of the environment [are] such that you could see something introduced that’s completely new in the next eight weeks. So, it’s quite varied. I wouldn’t say that there is a favorite right now. I think everyone is learning at this point.”

How do you deal with code errors introduced by genAI? What tools do you use to discover and correct those errors, if any? “I think that then goes into your test problem. Like I said, there’s a consistency problem that fundamentally we have to take into account, because every time we generate code it could be generated differently. Refining your test set and using that as an intelligent way of testing is a really key area to make sure that you catch those problems. I personally believe those errors will be there, because the software development life cycle is so vast.

“It’s all about where people want to focus in the post-coding phase. That testing phase is a critical element to actually getting any of this right. …It’s an area where you can quickly leverage the AI technologies and have minimal risk introduced to your production code. And, in fact, all it does is improve it. [The genAI] is helping you be smarter in running those test sets. And those test sets are then going to be highly beneficial to your generated code as well, because now you know what your audience is also testing against.

“So if the generated code is bad, you’ll catch it in these defects. It’s worth a lot of effort to look at that specific area because, like I said, it’s a low-risk element. There’s a lot of AI tools out there for that.

“And, not everything has to be generative AI, right? You know, AI and machine learning [have] been here for quite some time, and there’s a lot of work that’s already been done to refine [them]. So, there’s a lot of benefit and improvement that’s been done to those older tools. The market has this feeling that they need to adopt AI, but AI adoption hasn’t been the smoothest. So then [developers] are saying, ‘Let’s just leapfrog and get into using generative AI.’ The reality is that you can actually fix a lot of these things with technology that didn’t just come to market 12 months ago. I think there’s definitely benefit in that.”

What generative AI tools have you tried and what kind of success have you seen? “We’ve tried almost all of them. That’s the short answer. And they’ve all been very beneficial. I think that the reality is, like I said before, the landscape of genAI tools today is pretty comparable between the different cloud service providers. I don’t see a leading one versus a non-leading one. I feel like they all can do pretty nice things.

“I think that the challenge is being up to date with what’s available, because they keep releasing new features. That is encouraging, but at the same time you have to find a way to implement and use the technology in a meaningful way. At this point, the speed at which they’re pushing out these features is mismatched with the adoption in the industry. I think there are a lot more features than actual adoption.

“We have our Capgemini Research Institute, through which we do a lot of polls with executives, and what we found is about 30% of organizations are experimenting with genAI. And probably another 42% are going to be playing with it in the next 12 months. But that also means, from an adoption perspective, that of those actually using it in software engineering, I think it’s only less than one-third that’s really, fundamentally going to be impacting their production flow with generated code. So I think the market is still very much in the experimentation phase. And that’s why all the tools [are] pretty comparable in terms of what they can and can’t do.

“And again, it’s not really about whether the feature set is greater in one platform versus another. I think it’s more the application of it to solving a business problem that makes the impact.”

Do you use AI or generative AI for any software development? Forget pre-development and post-development for the moment. Do you actually use it to generate code that you use? “We do. Even for us, it is in an experimentation phase. But we have put in a lot of work ourselves in terms of refining generative AI engines so that we can generate consistent code. We’ve actually done quite a lot of experimentation and also proof of concepts with clients on all three of those phases [pre-code modeling, code development, post-code testing]. Like I said, the pre- and post- are the easier ones because there’s less risk.

“Now, whether or not the client is comfortable enough for that [AI] generated code to go to production is a different case. So, that proof of concept we’re doing is not necessarily production. And I think…taking it to production is still something that industry has to work through in terms of their acceptance.”

How accurate is the code generated by your AI tools? Or, to put it another way, how often is that code usable? I’ve heard from other experts the accuracy rate ranges anywhere from 50% to 80% and even higher. What are you finding? “I think the code is highly usable, to be honest. I think it’s actually a pretty high percentage because the generated code, it’s not wrong. I think the concern with generated code is not whether or not it’s written correctly. I think it is written correctly. The problem, as I said, is around how that code was generated, whether there were some innate or embedded defects in it that people don’t know about. Then you know the other question is where did that generated code come from, and whether or not the generated code that you’ve created now feeds a larger pool, and is that secure?

“So imagine if I’m an industrial company and I want to create this solution, and I generate some code [whose base] came from my competitor. How do I know if this is my IP or their IP? Or if I created it, did that somehow through the ether migrate to somebody else generating that exact same code? So, it gets very tricky in that sense unless you have a very privatized genAI system.”

Even when the code itself is not usable for whatever reason, can AI-generated code still be useful? “It’s true. There’s a lot of code that in the beginning may not be usable. It’s like with any learning system: you need to give it more prompts in order to tailor it to what you need. So, if you think about basic engineering, you define some integers first. If genAI can do that, you’ve now saved yourself some time from having to type in all the defined parameters and integer parameters and all that stuff, because that could all be pre-generated.

“If it doesn’t work, you can give it an additional prompt to say, ‘Well, actually I’m looking for a different set of cycles or different kind of run times,’ and then you can tweak that code as well. So instead of you starting from scratch, just like writing a paper, you can have someone else write an outline, and you can always use some of the intro, the ending, some of these things that aren’t the actual meat of the content. And the meat of the content you can continue to refine with generative AI, too. So definitely, it’s a big help. It saves you time over writing it from scratch.”

Has AI or genAI allowed you to create a citizen developer workforce? “I think it’s still early. AI allows your own team to be a little faster in doing some of the things that they don’t necessarily want to do, or it can cut down the toil, from a developer’s perspective, of generating test cases or writing user stories and whatnot. It’s pretty good at generating the outline of code from a framework perspective. But for it to do code generation independently, I think we’re still relatively early on that.”

How effective has AI code generation been in creating efficiencies and increasing productivity? “Productivity, absolutely. I think that’s a really strong element of the developer experience. The concept is that if you hire some really good software developers, they want to be building features and new code and things of that sort, and they don’t like doing the pre-code responsibilities. So if you can solve more of that toil for them, get rid of those mundane, repetitive things, then they can be focused more on the value-generation element of it.

“So, for productivity, I think it’s a big boost, but it’s not about developing more code. I think oftentimes it’s about developing better code. So instead of saying I spent hours of my day just writing a basic structure, that’s now pre-generated for me. And now I can think about how I optimize runtimes. How do I optimize the consumption or storage or whatnot?

“So, it frees up your mind to think about additional optimizations to make your code better, rather than just figuring out what the basis of the code is.”

One of the other uses I’ve heard from engineers and developers is that genAI is often not even used to generate new code, but it’s most often used to update old code. Are you also seeing that use case? “I think all the genAI tools are going to be generating new code, but the difference is that one very important use case which you just highlighted: the old code migration element.

“What we also find is that a lot of clients have systems that are heavily outdated — systems that are 20, maybe 30 years old. There could be proprietary code in there that nobody understands anymore, and they’ve already lost that skill set. What you can do with AI-assisted code generation and your own team is create a corpus of knowledge about the old code, so you can actually ingest the old code and understand what that meta model is — what is it you’re trying to do? What are the right inputs and outputs? From there, you can actually generate new code in a new language that’s actually maintainable. And that’s a huge, huge benefit for clients.

“So, this is a great application of AI technology where it’s still generating your code, it’s not actually changing the old code. You don’t want it to change the old code, because it would still be unmanageable. So what you want is to have it understand the concept of what you’re trying to do so that you can actually generate new code that is much more maintainable.”
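The "understand the old code first" step can be illustrated with a toy meta-model extractor: parse legacy source and pull out function names, inputs, and whether they return values, as material to seed a regeneration prompt. This Python/`ast` example is purely illustrative; real legacy-migration pipelines go far deeper than signatures:

```python
# Toy meta-model extraction: ingest legacy source and recover a minimal
# description of what it does (names, inputs, outputs), which can then
# drive regeneration in a modern, maintainable language.
import ast

LEGACY_SOURCE = '''
def calc_interest(principal, rate, years):
    return principal * (1 + rate) ** years
'''

def extract_meta_model(source: str):
    tree = ast.parse(source)
    model = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            model.append({
                "name": node.name,
                "inputs": [a.arg for a in node.args.args],
                "returns": any(isinstance(n, ast.Return)
                               for n in ast.walk(node)),
            })
    return model

print(extract_meta_model(LEGACY_SOURCE))
```

Note that, as the interview stresses, the old code is read but never modified; the meta model is what feeds the generation of new code.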

Are there any other tips you can offer people who are considering using AI for code generation? “I think it’s a great time, because the technology is super exciting. There’s a lot of different choices for people to play around with and I think there’s a lot of low-hanging fruit. I feel like generative AI is also a huge benefit, because it’s introduced or reintroduced people to the concept that AI is actually attainable.

“There’s a lot of people who think AI is like the top-of-the-pyramid technology. They think, ‘I’m going to get there, but only if I clean up all of my data and I get all my data ingested correctly. Then, if I go through all those steps, I can use AI.’ But the pervasiveness and the attractiveness of generative AI is that it is attainable even before that. It’s OK to start now. You don’t have to clean up everything before you get to that point. You can actually iterate and gain improvements along the way.

“If you look at the software development life cycle, there are a lot of areas right now that could be low-risk uses of AI. I wouldn’t even say just productivity. It’s just about it being more valuable to the outcomes that you want to generate, and so it’s a good opportunity to start with. It’s not the be-all and end-all. It’s not going to be your citizen developer, you know. But it augments your team. It increases productivity. It reduces toil. So, it’s just a good time to get started.”

Developer, Engineer, Generative AI
Category: Hacking & Security

Managed Detection and Response in 2023

Kaspersky Securelist - 30 April 2024 - 11:00

Managed Detection and Response in 2023 (PDF)

Alongside other security solutions, we provide Kaspersky Managed Detection and Response (MDR) to organizations worldwide, delivering expert monitoring and incident response 24/7. The task involves collecting telemetry for analysis by both machine-learning (ML) technologies and our dedicated Security Operations Center (SOC). On detection of a security incident, SOC puts forward a response plan, which, if approved by the customer, is actioned at the endpoint protection level. In addition, our experts give recommendations on organizing incident investigation and response.

In the annual MDR report, we present the results of analysis of SOC-detected incidents, supplying answers to the following questions:

  • Who are your potential attackers?
  • How do they currently operate?
  • How to detect their actions?

The report covers the tactics, techniques and tools most commonly used by threat actors, the nature of high-severity incidents and their distribution among MDR customers by geography and industry.

Security incident statistics for 2023 Security events

In 2023, Kaspersky Managed Detection and Response handled more than 431,000 alerts about possible suspicious activity. Of these, more than 117,000 were analyzed by ML technologies, and over 314,000 by SOC analysts. Of the manually processed security events, slightly under 90% turned out to be false positives. What is more, around 32,000 security alerts were linked to approximately 14,000 incidents reported to MDR customers.
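A quick arithmetic check of the figures above, using the report's rounded numbers:

```python
# Rough arithmetic check on the report's rounded figures: the ML/SOC
# split of handled alerts, the false-positive share of manually
# processed events, and the ratio of incident-linked alerts to incidents.

total_alerts = 431_000    # alerts handled in 2023 (approximate)
ml_handled = 117_000      # analyzed by ML technologies
soc_handled = 314_000     # analyzed by SOC analysts
incident_alerts = 32_000  # alerts linked to reported incidents
incidents = 14_000        # incidents reported to MDR customers

print(ml_handled + soc_handled)               # 431000, matching the total
print(round(soc_handled * 0.90))              # 282600 false positives (~90%)
print(round(incident_alerts / incidents, 1))  # 2.3 alerts per incident
```

So on average a reported incident was backed by roughly two to three correlated alerts, and the bulk of analyst workload went to ruling out false positives.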

Geographic distribution of users

In 2023, the largest concentration of Kaspersky MDR customers was in the European region (38%). In second place came Russia and the CIS (28%), in third the Asia-Pacific region (16%).

Distribution of Kaspersky MDR customers by region, 2023

Distribution of incidents by industry

Since the number of incidents largely depends on the scale of monitoring, the most objective picture is given by the distribution of the ratio of the number of incidents to the number of monitored endpoints. The diagram below shows the expected number of incidents of a given criticality per 10,000 endpoints, broken down by industry.
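The normalization described above is a simple rescaling: incidents per 10,000 monitored endpoints, so industries of very different sizes can be compared on equal footing. The sample figures below are invented for illustration:

```python
# Incidents normalized to a common base of 10,000 monitored endpoints,
# the metric used in the report's per-industry comparison.

def incidents_per_10k(incidents: int, endpoints: int) -> float:
    return incidents * 10_000 / endpoints

# e.g. 45 incidents observed across 150,000 monitored endpoints
print(incidents_per_10k(45, 150_000))  # 3.0
```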

Expected number of incidents of varying degrees of criticality per 10,000 endpoints in different industries, 2023

In 2023, the most incidents per 10,000 devices were detected in mass media organizations, development companies and government agencies.

In terms of absolute number of incidents detected, the largest number of incidents worldwide in 2023 were recorded in the financial sector (18.3%), industrial enterprises (16.9%) and government agencies (12.5%).

Distribution of the number of Kaspersky MDR customers, all identified incidents and critical incidents by industry, 2023

General observations and recommendations

Based on the analysis of incidents detected in 2023, and on our many years of experience, we can identify the following trends in security incidents and protection measures:

  • Every year we identify targeted attacks carried out with direct human involvement. To effectively detect such attacks, besides conventional security monitoring, threat hunting is required.
  • The effectiveness of the defense mechanisms deployed by enterprises is best measured by a range of offensive exercises. Year after year, we see rising interest in projects of this kind.
  • In 2023, we identified fewer high-severity malware incidents than in previous years, but the number of incidents of medium and low criticality increased. The most effective approach to guarding against such incidents is through multi-layered protection.
  • Leveraging the MITRE ATT&CK® knowledge base supplies additional contextual information for attack detection and investigation teams. Even the most sophisticated attacks consist of simple steps and techniques, with detection of just a single step often uncovering the entire attack.

Detailed information about attacker tactics, techniques and tools, incident detection and response statistics, and defense recommendations can be found in the full report (PDF).

New U.K. Law Bans Default Passwords on Smart Devices Starting April 2024

The Hacker News - 30 April 2024 - 07:57
The U.K. National Cyber Security Centre (NCSC) is calling on manufacturers of smart devices to comply with new legislation that prohibits them from using default passwords, effective April 29, 2024. "The law, known as the Product Security and Telecommunications Infrastructure act (or PSTI act), will help consumers to choose smart devices that have been designed to
Category: Hacking & Security

Google Prevented 2.28 Million Malicious Apps from Reaching Play Store in 2023

The Hacker News - 29 April 2024 - 19:07
Google on Monday revealed that almost 200,000 app submissions to its Play Store for Android were either rejected or remediated to address issues with access to sensitive data such as location or SMS messages over the past year. The tech giant also said it blocked 333,000 bad accounts from the app storefront in 2023 for attempting to distribute malware or for repeated policy violations. "In 2023,
Category: Hacking & Security

How we fought bad apps and bad actors in 2023

Google Security Blog - 29 April 2024 - 17:59
Posted by Steve Kafka and Khawaja Shams (Android Security and Privacy Team), and Mohet Saxena (Play Trust and Safety)

A safe and trusted Google Play experience is our top priority. We leverage our SAFE (see below) principles to provide the framework to create that experience for both users and developers. Here's what these principles mean in practice:

  • (S)afeguard our Users. Help them discover quality apps that they can trust.
  • (A)dvocate for Developer Protection. Build platform safeguards to enable developers to focus on growth.
  • (F)oster Responsible Innovation. Thoughtfully unlock value for all without compromising on user safety.
  • (E)volve Platform Defenses. Stay ahead of emerging threats by evolving our policies, tools and technology.

With those principles in mind, we’ve made recent improvements and introduced new measures to continue to keep Google Play’s users safe, even as the threat landscape continues to evolve. In 2023, we prevented 2.28 million policy-violating apps from being published on Google Play, in part thanks to our investment in new and improved security features, policy updates, and advanced machine learning and app review processes. We have also strengthened our developer onboarding and review processes, requiring more identity information when developers first establish their Play accounts. Together with investments in our review tooling and processes, we identified bad actors and fraud rings more effectively and banned 333K bad accounts from Play for violations like confirmed malware and repeated severe policy violations.

Additionally, almost 200K app submissions were rejected or remediated to ensure proper use of sensitive permissions such as background location or SMS access. To help safeguard user privacy at scale, we partnered with SDK providers to limit sensitive data access and sharing, enhancing the privacy posture for over 31 SDKs impacting 790K+ apps. We also significantly expanded the Google Play SDK Index, which now covers the SDKs used in almost 6 million apps across the Android ecosystem. This valuable resource helps developers make better SDK choices, boosts app quality and minimizes integration risks.

Protecting the Android Ecosystem

Building on our success with the App Defense Alliance (ADA), we partnered with Microsoft and Meta as steering committee members in the newly restructured ADA under the Joint Development Foundation, part of the Linux Foundation family. The Alliance will support industry-wide adoption of app security best practices and guidelines, as well as countermeasures against emerging security risks.

Additionally, we announced new Play Store transparency labeling to highlight VPN apps that have completed an independent security review through App Defense Alliance’s Mobile App Security Assessment (MASA). When a user searches for VPN apps, they will now see a banner at the top of Google Play that educates them about the “Independent security review” badge in the Data safety section. This helps users see at-a-glance that a developer has prioritized security and privacy best practices and is committed to user safety.

To better protect our customers who install apps outside of the Play Store, we made Google Play Protect’s security capabilities even more powerful with real-time scanning at the code-level to combat novel malicious apps. Our security protections and machine learning algorithms learn from each app submitted to Google for review and we look at thousands of signals and compare app behavior. This new capability has already detected over 5 million new, malicious off-Play apps, which helps protect Android users worldwide.

More Stringent Developer Requirements and Guidelines

Last year we updated Play policies around Generative AI apps, disruptive notifications, and expanded privacy protections. We are also raising the bar for new personal developer accounts by introducing new testing requirements before developers can make their app available on Google Play. By testing their apps, getting feedback, and ensuring everything is ready before they launch, developers are able to bring more high-quality content to Play users. In order to increase trust and transparency, we’ve introduced expanded developer verification requirements, including D-U-N-S numbers for organizations and a new “About the developer” section.

To give users more control over their personal data, apps that enable account creation now need to provide an option to initiate account and data deletion from within the app and online. This web requirement is especially important so that a user can request account and data deletion without having to reinstall an app. To simplify the user experience, we have also incorporated this as a feature within the Data safety section of the Play Store.

With each iteration of the Android operating system (including its robust set of APIs), a myriad of enhancements are introduced, aiming to elevate the user experience, bolster security protocols, and optimize the overall performance of the Android platform. To further safeguard our customers, approximately 1.5 million applications that do not target the most recent APIs are no longer available in the Play Store to new users who have updated their devices to the latest Android version.
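
The target-API policy above can be checked mechanically from an app's manifest. As an illustrative sketch (the manifest snippet and the threshold below are hypothetical; Play's actual minimum target level changes over time), a few lines of Python can read `android:targetSdkVersion` from an `AndroidManifest.xml`:

```python
import xml.etree.ElementTree as ET

# The XML namespace Android manifests use for their attributes.
ANDROID_NS = "http://schemas.android.com/apk/res/android"

def target_sdk(manifest_xml):
    """Return android:targetSdkVersion from a manifest string, or None."""
    root = ET.fromstring(manifest_xml)
    uses_sdk = root.find("uses-sdk")
    if uses_sdk is None:
        return None
    value = uses_sdk.get("{%s}targetSdkVersion" % ANDROID_NS)
    return int(value) if value is not None else None

# Hypothetical manifest for an app that targets an old API level.
sample = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-sdk android:minSdkVersion="21" android:targetSdkVersion="28"/>
</manifest>"""

MINIMUM_TARGET = 31  # illustrative cutoff, not Play's actual current policy
print(target_sdk(sample))                     # 28
print(target_sdk(sample) >= MINIMUM_TARGET)   # False: would be hidden from new users
```

An audit script along these lines could flag apps in a catalog that would no longer be surfaced to users on up-to-date devices.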

Looking Ahead

Protecting users and developers on Google Play is paramount and ever-evolving. We're launching new security initiatives in 2024, including removing apps from Play that are not transparent about their privacy practices.

We also recently filed a lawsuit in federal court against two fraudsters who made multiple misrepresentations to upload fraudulent investment and crypto exchange apps on Play to scam users. This lawsuit is a critical step in holding these bad actors accountable and sending a clear message that we will aggressively pursue those who seek to take advantage of our users.

We're constantly working on new ways to protect your experience on Google Play and across the entire Android ecosystem, and we look forward to sharing more.

Notes
  1. In accordance with the EU's Digital Services Act (DSA) reporting requirements, Google Play now calculates policy violations based on developer communications sent. 

Kategorie: Hacking & Security

The EU has decided to open up iPadOS

Computerworld.com [Hacking News] - 29 Duben, 2024 - 17:25

The EU has given Apple just six months to open up iPads in the same way it’s been forced to open up iPhones in Europe. The decision follows an EU determination that the iPad — which leads but does not dominate the tablet market — should be seen as a “gatekeeper.”

Apple will not have much time to comply.

What’s really interesting, as noted by AppleInsider, is the extent to which the decision to force Apple to open up iPadOS seems to have been made even though the EU’s lead anti-competition regulator, Margrethe Vestager, says the company doesn’t actually meet the criteria for enforcement. 

It doesn’t meet the threshold, so we’ll do it anyway

“Today, we have brought Apple’s iPadOS within the scope of the DMA obligations,” said Vestager.  “Our market investigation showed that despite not meeting the thresholds, iPadOS constitutes an important gateway on which many companies rely to reach their customers.”

This triumph of ideology is just the latest poor decision from the trading bloc and comes as Apple gets ready to introduce new software, features, and artificial intelligence to its devices at its Worldwide Developers Conference in June.

With that in mind, I expect Apple’s software development teams need Europe’s latest decision about as much as the rest of us need an unexpected utility bill. That said, I imagine the challenge has not been entirely unexpected.

Sour grapes?

To some extent you have to see that Europe is playing defense.

Not only has it lost its advantages in space to firms such as SpaceX, but the continent has arguably failed to spawn a significant homegrown Big Tech competitor. This leaves Europe reliant on US technology firms, so it’s clear the EU is attempting to loosen the hold those firms have on digital business in Europe by applying the Digital Markets Act.

The EU isn’t alone; US regulators are equally determined to dent the power Apple and other major tech firms hold. Fundamental to many of the arguments made is the claim that consumers will see lower prices as a result of more open competition, but I’m highly doubtful that will happen.

So, what happens next?

Apple will likely attempt to resist the EU call to open up the iPad, but will eventually be forced to comply. Meanwhile, as sideloading intensifies on iPhones, we will see whether user privacy and safety do indeed turn out to be compatible with sideloading.

In an ideal world, the EU would hold off on any action involving iPads pending the results of that experiment. It makes sense for regulators and Apple to work constructively together to protect against any unexpected consequences as a result of the DMA before widening the threat surface. 

Perhaps user security isn’t something regulators take seriously, even though government agencies across the EU and elsewhere are extremely concerned at potential risks. Even in the US, regulators seem to want us to believe Apple’s “cloak” of privacy and security is actually being used to justify anti-competitive behavior. 

Do the benefits exceed the risks?

Experientially, at least, there’s little doubt that platforms (including the Mac) that support sideloading face more malicious activity than those that don’t. Ask any security expert and they will tell you that in today’s threat environment, it’s only a matter of time until even the most secure systems are overwhelmed. So it is inevitable some hacker somewhere will find a way to successfully exploit Apple’s newly opened platforms.

It stands to reason that ransomware, adware, and fraud attempts will increase and it is doubtful the EU will shoulder its share of the burden to protect people against any such threats that emerge as a result of its legislation.

For most consumers, the biggest benefit will be the eventual need to purchase software from across multiple store fronts, and to leave valuable personal and financial details with a wider range of payment processing firms.

The joy I personally feel at these “improvements” is far from tangible.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Apple App Store, iPad, Mobile
Kategorie: Hacking & Security

China-Linked 'Muddling Meerkat' Hijacks DNS to Map Internet on Global Scale

The Hacker News - 29 Duben, 2024 - 15:46
A previously undocumented cyber threat dubbed Muddling Meerkat has been observed undertaking sophisticated domain name system (DNS) activities in a likely effort to evade security measures and conduct reconnaissance of networks across the world since October 2019. Cloud security firm Infoblox described the threat actor as likely affiliated with the
Kategorie: Hacking & Security

Ubuntu 24.04 Security Enhancements Analyzed [Updated]

LinuxSecurity.com - 29 Duben, 2024 - 13:00
The release of Ubuntu 24.04 LTS, also known as Noble Numbat, brings various security enhancements and exciting new features. These improvements include unprivileged user namespace restrictions, binary hardening, AppArmor 4, disabling old TLS versions, and upstream kernel security features.
Kategorie: Hacking & Security

Critical Security Update for Google Chrome: Implications & Recommendations

LinuxSecurity.com - 29 Duben, 2024 - 13:00
The release of Google Chrome 124 addresses four vulnerabilities, including a critical security flaw that can enable attackers to execute arbitrary code. Over the next few days or weeks, the Google Stable channel will be updated to 124.0.6367.78 for Linux. As security practitioners, Linux admins, infosec professionals, and sysadmins must be aware of the implications of such vulnerabilities and take appropriate action.
Kategorie: Hacking & Security

Navigating the Threat Landscape: Understanding Exposure Management, Pentesting, Red Teaming and RBVM

The Hacker News - 29 Duben, 2024 - 12:54
It comes as no surprise that today's cyber threats are orders of magnitude more complex than those of the past. And the ever-evolving tactics that attackers use demand the adoption of better, more holistic and consolidated ways to meet this non-stop challenge. Security teams constantly look for ways to reduce risk while improving security posture, but many
Kategorie: Hacking & Security

New R Programming Vulnerability Exposes Projects to Supply Chain Attacks

The Hacker News - 29 Duben, 2024 - 12:50
A security vulnerability has been discovered in the R programming language that could be exploited by a threat actor to create a malicious RDS (R Data Serialization) file such that it results in code execution when loaded and referenced. The flaw, assigned the CVE identifier CVE-2024-27322 (CVSS score: 8.8), "involves the use of promise objects and lazy evaluation in R," AI application
Kategorie: Hacking & Security
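
CVE-2024-27322 is, at root, an unsafe-deserialization flaw: loading a crafted data file causes code to run. The promise-object and lazy-evaluation details are specific to R, but the general hazard can be illustrated with Python's pickle module, which likewise runs attacker-controlled callables on load (an analogy, not the R mechanism itself):

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild the object on load; an
    # attacker can abuse it to have an arbitrary callable invoked.
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())

# Merely deserializing the untrusted bytes executes eval("6 * 7").
result = pickle.loads(blob)
print(result)  # 42 -- code ran during deserialization
```

The same lesson applies to RDS files: never load serialized data from an untrusted source, regardless of language.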

A new Windows 11 backup and recovery paradigm?

Computerworld.com [Hacking News] - 29 Duben, 2024 - 12:00

A lot has changed regarding built-in backup and recovery tools in Windows 11. Enough so, in fact, that it’s not an exaggeration to talk about a new approach to handling system backup and restore, as well as system repair and recovery.

That’s why the title for this article uses the “P-word” (paradigm). This is a term much-beloved in the USA in the 1970s and ’80s, plucked from Thomas Kuhn’s The Structure of Scientific Revolutions (1962) to explain how and why radical changes happen in science.

Indeed, a list of what’s new in Windows 11 by way of backup and recovery helps set the stage for considering a veritable paradigm shift inside this latest desktop OS version:

  • The Windows Backup app, which replaces the obsolete “Backup and Restore (Windows 7)” utility, still present in Windows 10 but absent in Windows 11
  • A revamped approach inside Settings > System > Recovery, which now includes both “Fix problems using Windows Update” and “Reset this PC” options to attempt repairs to an existing OS or reinstall Windows 11 from scratch, respectively

If these elements are combined with proper use of OneDrive, they can cover the gamut of Windows backup, restore, repair, and recovery tasks. Remarkable!

Defining key R-words: Repair, Restore, Recovery, and Reset

Before we dig into the details, it’s important to define these “R-words” so that what Microsoft is doing with Windows 11 backup and recovery options makes sense.

  • Repair: Various methods for fixing Windows problems or issues that arise from a working but misbehaving OS or PC. For what it’s worth, this term encompasses the “Fix problems without resetting your PC” button in Settings > System > Recovery shown in Figure 1; it calls the native, built-in Windows 11 Get Help facility.

Figure 1: Although it’s labeled Recovery, this Windows 11 Settings pane shows Reset explicitly and Repair implicitly.

Ed Tittel / IDG

  • Restore: This is usually defined as putting things back the way they were when a particular backup was made. It is NOT shown in Figure 1, though you can get to a set of Windows Backup data that provides restore information through Advanced startup and through other means.
  • Recovery: Though it has a general meaning, Microsoft tends to view Recovery as a set of operations that enables access to a non-booting Windows PC, either to replace its boot/system image (“Reset this PC” in Figure 1 — see next item) or to boot to alternate media or the Windows Recovery environment, a.k.a. WinRE (“Advanced startup” in Figure 1) to attempt less drastic repairs: reboot from external media, attempt boot or partition repairs, replace corrupted system files, and a great deal more.
  • Reset: Replace the current disk structure on the system/boot drive with a new structure and a fresh, new Windows 11 install, keeping or discarding personal files (but not applications) as you choose.

All of the preceding R-words are intertwined. And Restore is closely related to Backup — that is, one must first perform a backup so that one has something to restore later on.

Introducing Windows Backup

If you type “Windows Backup” into the Windows 11 Start menu’s search box for versions 23H2 or later (publicly released October 31, 2023), you should see something like Figure 2 pop up:

Figure 2: Introducing Windows Backup in Windows 11 23H2.

Ed Tittel / IDG

This simply shows the Start menu entry for the Windows Backup app, which I’ll abbreviate as WB (with apologies to Warner Brothers). Interestingly enough, WB is not packaged as an app with an MSIX file, nor is it available through the Microsoft Store. Its setup options when launched tell you most of what you need to know, as shown in Figure 3. The rest becomes clear as you drill down into its various subheadings, as I’ll explain soon.

Figure 3: The various Windows Backup options/selections let you protect/copy folders, apps, settings, and credentials. That’s about everything!

Ed Tittel / IDG

By default, here’s how things shake out in WB:

  • Folders covers the Desktop, Documents, Pictures, Videos, and Music items (a.k.a. “Library folders”) from the logged-in user’s file hierarchy. On first run, you may use a toggle to turn backup on or off. (Note: a valid Microsoft Account, or MSA, with sufficient available OneDrive storage is required to make use of WB.)
  • Apps covers both old-style .exe apps and newer MSIX apps (like those from the Microsoft Store). It will also capture and record app preferences, settings, and set-up information. This is extremely important, because it provides a way to get back apps and applications, and related configuration data, if you perform a “Reset this PC” operation on the Recovery pane shown in Figure 1 above.
  • Settings covers a bunch of stuff. That’s no surprise, given the depth and breadth of what falls under Settings’ purview in Windows, including: accessibility, personalization, language preferences and dictionary, and other Windows settings.
  • Credentials covers user account info, Wi-Fi info (SSIDs, passwords, etc.), and passwords. This handles all the keys needed to get into apps, services, websites, and so forth should you ever perform a restore operation.

Once you’ve made your folder selections and turned everything on, Windows Backup is ready to go. All you need to do is hit the Back up button at the bottom right in Figure 3, and your first backup will be underway. The first backup may take some time to complete, but when it’s finished you’ll see status info at the top of the Windows Backup info in Settings > Accounts > Windows backup, as shown in Figure 4.

Figure 4: Status information for WB appears under Settings > Accounts > Windows backup (credentials do get backed up but are not called out).

Ed Tittel / IDG

Please note again that all backed up files and information go to OneDrive. Thus, internet and OneDrive access are absolutely necessary for Windows Backup to make backup snapshots and for you to be able to access them for a restore (or new install) when they’re needed. This has some interesting wrinkles, as I’ll explain next.

The Microsoft support page “Getting the most out of your PC backup” explains Windows Backup as follows:

Your Microsoft account ties everything together, no matter where you are or what PC you’re using. This means your personalized settings will be remembered with your account, and your files are accessible from any device. You can unlock premium features like more cloud storage, ongoing technical support, and more, by purchasing a Microsoft 365 subscription for your account.

That same document also cites numerous benefits, including:

  • easy, secure access to files and data anywhere via OneDrive
  • simple transfer to a new PC as and when desired
  • protection “if anything happens to your PC” without losing precious files

This is why Windows Backup and the other tools offer a new backup paradigm in Windows 11. Used together through a specific MSA, you can move to a new PC when you want to, or get your old one back when you need to.

The restore process, WB-style

Microsoft has a support note that explains and describes WB, including initial setup, regular use, and how to restore. This last topic, entitled “How do I restore the backup?”, covers the raison d’être for backup and is well worth reading closely (perhaps more than once).

Let me paraphrase and comment on that document’s contents. Windows Backup comes into play when you set up a new PC, or when you need to reinstall Windows. Once you log in with the same MSA to which the backup belongs, the installer recognizes that backups for the account are available and interjects itself into the install process to ask whether there’s a backup you would like to restore. This dialog is depicted in Figure 5.

Figure 5: Once you’re logged into an MSA, the Windows installer will offer to restore a backup it keeps for that account to the current target PC.

Ed Tittel / IDG

For users with multiple PCs (and backups) the More options link at center bottom takes you to a list of options, from which you can choose the one you want. Once you’ve selected a backup, the Windows installer works with WB to copy its contents into the install presently underway. As Microsoft puts it, “When you get to your desktop everything will be right there waiting for you.”

I chose a modestly complex backup from which to restore my test virtual machine; it took less than 2 minutes to complete. That’s actually faster than my go-to third-party backup software, Macrium Reflect — but it occurs in the context of overall Windows 11 installation, so the overall time period required is on par (around 7 minutes, or 9 minutes including initial post login setup).

WB comes with a catch, however…

You’d think that capturing all the app info would mean that apps and applications would show up after a restore, ready to run. Not so. Look at Figure 6, which shows the Start menu entries for CrystalDiskInfo (a utility I install as a matter of course on my test and production PCs to measure local disk performance).

Figure 6: Instead of a pointer to the actual CrystalDiskInfo apps (32- & 64-bits), there’s an “Install” pointer!

Ed Tittel / IDG

Notice the Install link underneath the 32- and 64-bit versions. And indeed, I checked all added apps and applications I had installed on the backup source inside the restored version and found the same thing.

Here’s the thing: Windows Backup makes it easy to bring apps and applications back, but it does take some time and effort. You must work through the Start menu, downloading and installing each app, to return them to working order. That’s not exactly what I think a restore operation should be. IMO, a true restore brings everything back the way it was, ready to run and use as it was when the backup was made.

WB and the OneDrive limitation

There’s another potential catch when using WB for backup and restore. It won’t affect most users. But those who, like me, use a single MSA on multiple test and production machines must consider what adding WB into the mix means.

OneDrive shares MSA-related files across multiple PCs by design and default, but WB saves backups on a per-PC basis. Thus, you must take care to use the More options link in Figure 5 when performing a WB restore, so you can select the latest snapshot from a specific Windows PC. If you’re restoring the same PC to itself, so to speak, click Restore from this PC (Figure 5, lower right) instead.

Overall, Windows Backup is a great concept and does make it easy to maintain system snapshots. The restore operation is incomplete, however, as I just explained. Now, let’s move on to Windows repair, via the “Reinstall now” option shown in Figure 1 (repeated below in Figure 7).

More about “Reset this PC” and Windows repair

Looking back at Figure 1 (or below to Figure 7), you can see that “Reset this PC” is labeled as a Recovery option, alongside the “Fix problems…” options above it. The idea is that Reset this PC is an option of last resort, because it wipes out the existing disk image and replaces it with a fresh, clean, new one. WB then permits admins or power users to draw from a WB backup for a specific PC in the cloud to restore an existing Windows setup — or not, perhaps to clean up the PC for handoff to another user or when preparing it for surplus sell-off or donation.

Figure 7: Recovery options include two “Fix problems…” options and “Reset PC.”

Ed Tittel / IDG

As described earlier in this article, “Fix problems without resetting your PC” provides access to Windows 11’s built-in “Get Help” troubleshooters, while the “Reinstall now” option provides the focus for the next section. All this said, “Reset this PC” provides a fallback option when the current Windows install is not amenable to those other repair techniques.

Using Windows Update to perform a repair install

Earlier this year, Microsoft introduced a new button into its Settings > System > Recovery environment in Windows 11 23H2. As shown in Figure 7 above, that button is labeled “Reinstall now” and accompanies a header that reads “Fix problems using Windows Update.” It, too, comes with interesting implications. Indeed, it’s a giant step forward for Windows repair and recovery.

What makes the “Reinstall now” button so interesting is that it shows Microsoft building into Windows itself a standard OS repair technique that’s been practiced since Windows 10 came along in late July 2015: a “repair install” or “in-place upgrade install,” which overwrites the OS files while leaving user files, apps, and many settings and preferences in place. (See my 2018 article “How to fix Windows 10 with an in-place upgrade install” for details on how the process works and the steps involved to run such an operation manually.)

But there’s more: Windows 11’s “Reinstall now” button matches the reinstall image to whatever Windows edition, version and build it finds running on the target PC when invoked. That means behind the scenes, Microsoft is doing the same work UUP dump does to create Windows ISOs for specific Windows builds. This is quite convenient, because Windows Recovery identifies what build to reinstall, and then creates and installs a matching Windows image.

Indeed, this process takes time, because it starts with the current base for some Windows feature release (e.g., 22H2 or 23H2), then performs all necessary image manipulations to fold in subsequent updates, patches, fixes and so on. For that reason, it can take up to an hour for such a reinstall to complete on a Windows 11 PC, whereas running “setup.exe” from a mounted ISO from the Download Windows 11 page often completes in 15 minutes or less. But then, of course, you’d have to run all outstanding updates to catch Windows up to where you want it to be. That’s why there’s a time differential.

Bottom line: the new “Reinstall now” button in Windows 23H2 makes performing an in-place upgrade repair install dead simple, saving users lots of foreknowledge, thought, and effort.

If everything works, the new paradigm is golden

WB used in conjunction with MSA and OneDrive is about as simple and potentially foolproof as backup and restore get.

Do I think this new paradigm of using WB along with OneDrive, installer changes, and so forth works to back up and restore Windows 11? Yes, I do — and probably most of the time. Am I ready to forgo other forms of backup and restore to rely on WB and its supporting cast alone? By no means! I find that third-party image backup software is accurate, reliable, and speedy when it comes to backing up and restoring Windows PCs, including running versions of all apps and applications.

In a recent test of the “Reinstall now” button from Settings > Recovery in Windows 11, it took 55 minutes for that process to complete for the then-current Windows image. I also used WB to restore folders, apps, settings, and credentials. That took at least another 2-3 minutes, but left pointers to app and application installers, with additional effort needed to download and reinstall those items. (This takes about 1 hour for my usual grab-bag of software programs.)

Using my favorite image backup and recovery tool, Macrium Reflect, and booting from its Rescue Media boot USB flash drive, I found and restored the entire C: drive on a test PC in under 7 minutes. This let me pick a backup from any drive on the target PC (or my network), replaced all partitions on the system/boot disk (e.g., EFI, MSR, C:\Windows, and WinRE), and left me with a complete working set of applications. I didn’t need internet access, an MSA, or OneDrive storage to run that restore, either.

Worth having, but not exclusively

Microsoft has made big and positive changes to its approach to backup and recovery. Likewise for repair, with the introduction of the “Reinstall now” button that gets all files from Windows Update. These capabilities are very much worth having, and worth using.

But these facilities rely on the Microsoft Windows installer to handle PC access and repair. They also proceed from an optimistic assumption that admins or power users can get machines working well enough that a successful MSA login can drive the restore process from OneDrive in the cloud. When it works, that’s great.

But, given the very real possibility that access issues, networking problems, or other circumstances outside the installer’s control might arise, I believe other backup and restore options remain necessary. As the saying goes, “You can never have too many backups.”

Thus, I’m happily using WB and ready to restore as the need presents. But I’m not abandoning Macrium Reflect with its bootable repair disk, backup file finder, boot repair capabilities, and so forth. That’s because I don’t see the WB approach as complete or always available.

You are free, of course, to decide otherwise (but I’d recommend against that). And most definitely the new WB approach, the new in-place repair facility, and “Reset this PC” all have a place in the recovery and repair toolbox. Put them to work for you!

Backup and Recovery, Windows, Windows 11
Kategorie: Hacking & Security

Q&A: Georgia Tech dean details why the school needed a new AI supercomputer

Computerworld.com [Hacking News] - 29 Duben, 2024 - 12:00

Like many universities, Georgia Tech has been grappling with how to offer students the training they need to prepare them for a recent sea change in IT job markets — the arrival of generative AI (genAI).

Through a partnership with chipmaker Nvidia, Georgia Tech’s College of Engineering built a supercomputer dubbed AI Makerspace; it uses 20 Nvidia HGX H100 servers powered by 160 Nvidia H100 Tensor Core GPUs (graphics processing units).

Those GPUs are powerful — a single Nvidia H100 GPU would need just one second to handle a multiplication workload that would take the school’s 50,000 students 22 years to complete. So, 160 of those GPUs give students and professors access to advanced genAI, AI, and machine learning creation and training. (The move also spurred Georgia Tech to offer new AI-focused courses and minors.)
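
That one-second comparison can be sanity-checked with back-of-envelope arithmetic. Assuming one multiplication per student per second (our assumption, not the article's) and NVIDIA's published figure of roughly 34 TFLOPS of FP64 throughput for the H100:

```python
# How many multiplications would 50,000 people produce in 22 years
# at one multiplication per person per second?
students = 50_000
years = 22
seconds_per_year = 365.25 * 24 * 3600

human_ops = students * years * seconds_per_year
print(f"{human_ops:.2e} multiplications")  # ~3.5e13

# NVIDIA's datasheet lists ~34 TFLOPS FP64 for the H100,
# i.e. ~3.4e13 double-precision operations per second.
h100_fp64_flops = 34e12
print(f"H100 time: ~{human_ops / h100_fp64_flops:.1f} s")
```

At double precision the totals line up almost exactly, so the article's claim is plausible under these assumptions.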

Announced two weeks ago, the AI Makerspace supercomputer will initially be used by Georgia Tech’s engineering undergraduates. But it’s expected to eventually democratize access to computing resources typically prioritized for research across all colleges.

Computerworld spoke with Matthieu Bloch, the associate dean for academics at Georgia Tech’s College of Engineering, about how the new AI supercomputer will be used to train a new generation of AI experts.

The following are excerpts from that interview:

Tell me about the Makerspace project and how it came to be? “The Makerspace is really the vision of our dean, Raheem Beyah, and the school chair of Electrical and Computer Engineering (ECE), Arijit Raychowdhury, who really wanted to put AI in the hands of our students.

“In 2024 — in the post ChatGPT world — things are very different from the pre-ChatGPT world. We need a lot of computing power to do anything that’s meaningful and relevant to industry. And in a way, the devil is out of the box. People see what AI can do. But I think to get to that level of training, you need infrastructure.

[Image: Makerspace’s Nvidia H100 Tensor Core GPUs. Credit: Georgia Tech College of Engineering]

“The name Makerspace also comes from this culture we have at Georgia Tech of these maker spaces, which are places where our students get to tinker, both within the classroom and outside the classroom. The Makerspace was the idea to bring the tools that you need to do AI in a way that’s relevant to do meaningful things today. So, right now, where we’re at is we’ve partnered with Nvidia to essentially offer students a supercomputer. I mean, that’s what it is.

“What makes it unique is that it’s meant for supporting students. And right now it’s in the classroom. We’re still rolling it out. We’re in phase one. So, the idea is that the students in the classroom can work on AI projects that are meaningful to industry, not just problems that are interesting, you know, from a pedagogical perspective but don’t mean a whole lot in an industry setting.”

Tell me a bit about the projects they’ve been working on with this. “I can give you a very concrete example. ChatGPT is a very specific form of AI called generative AI. You know, it’s able to generate; in the case of ChatGPT, [that means] text in response to prompts. You might have seen a generative model that generates pictures. I think these were very popular and whatnot. And so these are the kind of things our students can do right now, …generate anything that would be, say, photorealistic.

“You need pretty hefty computing power to train your model and then test that it’s working properly. And so that’s what our students can do. Just to give you an idea of how far we’ve come along, before we had the AI Makerspace, our students were relying largely on something called Google Colab. Colab is Google making some compute resources freely accessible for use. They’re really giving us the resources they don’t use or don’t sell to their big clients. So it’s like the crumbs that remain.

“It’s very nice of them [Google] to do that, but you could only work with very [limited resources], say for training on something like 12,000 images. Now you can, for instance, train a generative model on a data set with like one million images. So you can really scale up by orders of magnitude. And then you can start generating these photo-realistic pictures that you could not generate before. That’s the most visual example I can give you.”

Can you tell me a little bit about the genAI projects the students are working on? How good is the technology at producing the results they want? “It’s a complicated question to answer. I mean, it has many layers. We’ve just launched it; like, literally, the AI Makerspace officially opened two weeks ago. So right now it’s really used at scale in the classroom. The students in that class are learning how to do machine learning. [The students] have to get the data. [They] have to learn how to train a model. The students have homework projects, which consist of a fairly sophisticated model that they have to train and that they have to test.

“Now we have a vision beyond that, what we call phase two of the Makerspace. We’re doubling the compute capacity. The idea now is that we’re going to open that to senior design projects. We’re gonna open that to something we call vertically integrated projects, in which students essentially do long-term research with faculty advisors over multiple years. Our students are going to do many things, certainly across all of [the] engineering [school].

“We’ve given incentives to a lot of faculty to create a lot of new courses throughout the College of Engineering for AI and ML for what matters to their field. For instance, if you’re an electrical engineer, there’s a lot of hardware to it; you have a model for that. How do you make the model smaller so that you can put it in hardware? That’s one very tangible question that the students would ask. But if they’re, say, mechanical engineers, they might use it differently. Maybe for them what generative AI could do is help them generate 3D models, think about structures that they would not think about naturally. And you can adapt that model to each discipline. The Makerspace is a massive tool. But how the tool is used is really a function of the specific domain. The goal, of course, is for Makerspace to be available beyond engineering.
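The “make the model smaller so you can put it in hardware” question has a classic concrete answer in weight quantization. The sketch below is a hypothetical illustration, not from the interview (the function names and sample weights are ours): it maps floating-point weights to 8-bit integers with a single scale factor, cutting storage by roughly 4x at the cost of a bounded rounding error.

```python
# Illustrative sketch: symmetric int8 quantization of model weights,
# one way to "make the model smaller so you can put it in hardware".
# All names and values here are hypothetical, not from the interview.

def quantize_int8(weights):
    """Map float weights to int8 values plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.51, -1.27, 0.02, 0.89]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight is within half a quantization step of the original.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

The trade-off an engineering course would explore is exactly this one: how much accuracy a given model loses at 8 bits (or fewer), versus how much memory and power the smaller representation saves on the target hardware.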

“It’s already being used by our College of Computing, and we’re hoping that our colleagues in, say, the College of Business will see the value, because they haven’t used AI yet; perhaps [they could use it] for financial models, predicting whether to sell or buy a stock. I think the sky is the limit. There’s no one use of AI through Makerspace. It’s an infrastructure that provides the tools. And then these tools find applications in all different areas of expertise.”

Why is it important to have this technology at the school for students to learn about AI? “The way we’ve come to articulate this is as follows: We’re not believers in doomsday scenarios, where AI is going to generate terminators that are going to eradicate humanity. Okay, that’s not how we’re thinking about it.

“AI is definitely going to change things. And we think that AI is certainly going to displace a few people. I think the humans enhanced by AI will start displacing humans who don’t use AI.

“I think the way a lot of the discussion has been shaped since ChatGPT was released to the world, in universities there’s sometimes a lot of fear. Are students cheating on their essays? Are students cheating on this, cheating on that? I had these discussions with my colleagues in computing. We have an intro to computing class, where [the worry is] they’re cheating to write their code, which I think is not the right approach to it. But the devil is out of the box. It’s a tool that’s here, and we have to learn how to use it.

“If I can give you my best analogy: I drive my car. I don’t know how my car really works. I mean, I was never a mechanical or electrical engineer. I sort of know what it takes [for a car to run], but I’m unable to fix it. But that doesn’t mean I can’t drive it. And I think we’re at that stage with AI tools, where one needs to know how to use them because you don’t want to be the person riding a bicycle when everybody else has a car.

“Not everyone needs to be a mechanic, but everyone needs a car. And so I think we want every student at Georgia Tech to know how to use AI, and what that means for them would be different depending on their specialty, their major. But these are tools, and you need to have played with them to really start mastering them.”

In what way has AI expanded Georgia Tech’s curriculum? “We were lucky in the sense that [we’re] building that infrastructure anew. But thinking about AI, Georgia Tech has been doing it for decades. Our faculty is very research focused. They do state-of-the-art research, and AI…was always there in the background, the roots of AI. We had a lot of colleagues who actually were doing machine learning without saying it in these terms.

“Then when deep learning started appearing, people were ready to grasp that. So, we were already thinking about doing it in the labs, and the integration in the curriculum was already slowly happening. And so what we decided to do was to accelerate that, so the Makerspace…accelerates the other mechanisms we’ve had to give incentives to faculty to rethink the curriculum with AI and ML in mind.”

So what AI courses have you launched? “I can give you two examples that we’ve launched, which are, you know, very new. But I think they’ve been quite successful already. One is we’ve officially launched an AI minor.

“The great thing about this AI minor [is that it] is a way for students to take a series of courses with a coherent and unified theme, and they get credit for that on their diploma and their transcript. This minor was designed as a collaboration right now between the College of Engineering and the College of Liberal Arts.

“Then we have the ethics and policy piece. Students need to take a specially designed course on AI Ethics and AI policy. We’re thinking very holistically. AI is a technology play, but if you just train engineers to do the technology piece alone, maybe then the doomsday-Terminator scenario is a likely outcome.

“We want our students to think about the use of AI because it’s technology that can have many uses [and problems associated with it]. We talk about deep fakes. We’re worried about it for all sorts of political reasons.

“The other thing we’ve done in the College of Engineering is essentially incentivized faculty to create new undergraduate courses related to AI and ML but relevant to their own disciplines. I literally [just made the announcement], and the college has approved 10 new courses or significantly revamped courses. So, what that means is that we have a course on machine learning for smart cities in civil and environmental engineering, and a course on chemical processes in chemical and bioengineering, where they’re using AI and ML for completely different things. That’s how we’re thinking of AI. It’s a tool. So the courses need to embrace that tool.”

Are students already using genAI to assist in creating applications — so software engineering and development? “Officially or unofficially? I don’t have a good answer, because the truth is, I don’t know. But what I know is that our students are using it with or without us. You know they are using generative AI because I’m willing to bet they all have a subscription to ChatGPT.

“Now in the context of the Makerspace, this is a resource with which you can start doing all sorts of things. Our students are absolutely using it to write lines of code.”

So what would you say is the most popular use right now of the AI Makerspace? “We haven’t officially launched it at scale for very long, so I can’t attest to that. It’s been used largely in the classroom setting for the kind of homework students could not even dream of doing before.

“We’re going to launch it and use it over the summer for an entrepreneurship program called Create X, which students can use to take ideas through prototyping and potentially think about building startups out of them. So that’s going to be the primary use over the summer, and we’re testing it over these few weeks in the context of a hackathon in partnership with Nvidia, where teams come with big problems that they want to solve. And we want to accelerate their science, to use Nvidia’s words, by teaching them how to use the Makerspace.”

CPUs and Processors, Education Industry, Generative AI, Natural Language Processing
Kategorie: Hacking & Security

Sandbox Escape Vulnerabilities in Judge0 Expose Systems to Complete Takeover

The Hacker News - 29 Duben, 2024 - 11:58
Multiple critical security flaws have been disclosed in the Judge0 open-source online code execution system that could be exploited to obtain code execution on the target system. The three flaws, all critical in nature, allow an "adversary with sufficient access to perform a sandbox escape and obtain root permissions on the host machine," Australian
Kategorie: Hacking & Security