Computerworld.com [Hacking News]

Making technology work for business

13 tricks for more efficient Android messaging

1 May 2024 - 12:00

No matter what type of Android phone you carry or how you usually use it, one thing is a near-universal constant:

You’re gonna spend a ton of time messing with messages.

The messages may be from clients, colleagues, or your cousin Crissy from Cleveland (damn it, Crissy!). But regardless of who sends ’em or what they’re about, they’re all popping up on your phone and cluttering your weary brainspace.

My fellow Android adorer, I’m here to tell you there’s a better way.

Google’s Android Messages app has gotten surprisingly good over the years. That’s no big secret. If you only rely on what you see on the surface, though, you’re missing out on some of Messages’ most powerful and underappreciated efficiency-enhancing options.

[Hey: Want even more advanced Android knowledge? Check out my free Android Shortcut Supercourse to learn tons of time-saving tricks for your phone!]

Today, we’ll explore the Android Messages app’s most effective out-of-sight superpowers. They may not be able to cut down on the number of messages you send and receive on your phone (DAMN IT, CRISSY!), but they will help you spend less time fussing with ’em. And they might just help you have a more pleasant experience, too.

Let’s dive in, shall we?

(Before you splash forward, take note: The tips on this page are all specific to the Google Messages app, which isn’t the same as the superfluous and wildly unnecessary Messages apps made by the likes of Samsung, OnePlus, and Verizon and baked into devices associated with those companies. If you’re using a phone where the Android Messages app wasn’t preinstalled or set as the default, you can download it from the Play Store and give it a whirl. You might be pleasantly surprised by what you find.)

Android Messages trick #1: Custom notifications for important people

We’ll start with what might be my favorite little-known trick within Google’s Android Messages app: With a couple quick adjustments, you can turn any of your contacts’ faces into a custom notification icon. That icon will then show up at the top of your phone whenever that person messages you for extra-easy visibility and access.

See?

A quick bit of simple setup, and bam: Anyone’s face can become their notification icon (for better or for worse!) on your phone.

JR Raphael, IDG

The only catch is that your phone needs to be running 2020’s Android 11 operating system or higher for the feature to be available. (And honestly, if your phone isn’t running Android 11 at this point, you’ve got bigger fish to fry, Francesco.) Also, Samsung has screwed around with this system for no apparent reason — a frustratingly common theme with Samsung’s heavily modified approach to Android, especially as of late — so you may or may not be able to take advantage of this on a Galaxy gadget, depending on how recently its software has been, er, updated. (Exaggerated sigh. What more can I say?!)

On any reasonably recent Android device that sticks close to Google’s core Android interface, though, here’s how to make the magic happen:

  • The next time you get a message from someone, press and hold your finger to the notification.
  • That’ll pull up a screen that looks a little somethin’ like this:
Android’s Priority conversation setting is the key to creating custom notifications that really stand out.

JR Raphael, IDG

  • Tap the “Priority” line, then tap “Apply” to save the changes.

And that’s it: The next time that person messages you, you’ll see their profile picture in place of the standard Messages icon in your status bar, and the notification will show up in a special section above any other alerts.

Hip, hip, hoorah!

Android Messages trick #2: Important contact prioritizing

Ever wish you could keep your most important messaging threads at the top of the list for easy ongoing access?

Poof: Wish granted. No matter what kind of Android phone you’re holding or how needlessly meddled with its software may be, just hold your finger onto the conversation in question on the main Messages app screen, then tap the pushpin-shaped icon in the app’s upper bar.

You can pin up to three conversations that way, and they’ll always appear above all other threads in that main inbox view.

Android Messages trick #3: Swift appointment scheduling

The next time you’re working to plan a meeting or event with a fellow Homo sapien in Messages, make yourself a mental note of this:

Anytime someone sends you a message that includes a specific date and time, the Messages app will underline that text. See it?

That underlined time is a covert link from an incoming message to your Android calendar agenda.

JR Raphael, IDG

You’d be forgiven for failing to realize, but you can actually tap that underlined text to reveal a shortcut for opening that very same day and time in your Android calendar app of choice. It’s a great way to get a quick ‘n’ easy glimpse at your availability for the time you’re discussing.

And if you then want to create a calendar event, just look for the “Create event” command that should appear right below that very same message. That’ll fire up a new calendar event for you on the spot, with the appropriate day and time already filled in.

That button to the left of the text suggestions is a spectacular time-saver for on-the-fly event creation.

JR Raphael, IDG
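Under the hood, that kind of date detection is ordinary pattern matching over message text. Here’s a minimal, illustrative Python sketch of the idea; the regex and function names are mine, not Google’s actual implementation:

```python
import re
from datetime import datetime

# Illustrative pattern: matches strings like "May 7 at 3:00 PM".
PATTERN = re.compile(
    r"(January|February|March|April|May|June|July|August|September|October|November|December)"
    r"\s+(\d{1,2})\s+at\s+(\d{1,2}):(\d{2})\s*(AM|PM)",
    re.IGNORECASE,
)

def find_datetimes(text, year=2024):
    """Return (matched_text, datetime) pairs found in a message."""
    results = []
    for m in PATTERN.finditer(text):
        month = datetime.strptime(m.group(1)[:3], "%b").month
        hour = int(m.group(3)) % 12
        if m.group(5).upper() == "PM":
            hour += 12
        dt = datetime(year, month, int(m.group(2)), hour, int(m.group(4)))
        results.append((m.group(0), dt))
    return results

matches = find_datetimes("Can we meet May 7 at 3:00 PM to review the deck?")
```

Each match is exactly what an app would need to underline the text and pre-fill a calendar event.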

Don’tcha just love simple step-savers?

Android Messages trick #4: Seamless message scheduling

If you’re ready to hammer out a response to a message right now but don’t want your reply to be sent for a while, follow the advice shared by a reader in my Android Intelligence newsletter recently and simply schedule your message for some specific future time.

The Android Messages app’s scheduling system is spectacularly useful. You can rely on it for setting reminders to be sent to clients, business-related messages to be pushed out the next morning, or context-free middle-finger emojis to be delivered to your cousin in Cleveland at ungodly hours in the middle of the night.

To tap into this productivity-boosting power, just type out your message normally — but then, instead of tapping the triangle-shaped send icon at the right of the composing window, press and hold your finger onto that same button when you’re done.

No reasonably sane person would possibly realize it, but that’ll pull up a hidden menu for selecting precisely when your message should be sent.

Send any message, anytime — no matter when you actually write it.

JR Raphael, IDG

And the person on the other end will have no way of even knowing you wrote the thing in advance.
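Conceptually, scheduled sending is just a queue of (send time, message) pairs that a background job drains when each entry comes due. A toy Python model of that idea, not Messages’ real internals:

```python
import heapq
from datetime import datetime, timedelta

class MessageScheduler:
    """Toy model of scheduled sending: hold messages until their time is due."""
    def __init__(self):
        self._queue = []  # min-heap ordered by send time

    def schedule(self, send_at, recipient, text):
        heapq.heappush(self._queue, (send_at, recipient, text))

    def due_messages(self, now):
        """Pop and return every message whose send time has arrived."""
        due = []
        while self._queue and self._queue[0][0] <= now:
            due.append(heapq.heappop(self._queue))
        return due

sched = MessageScheduler()
now = datetime(2024, 5, 1, 22, 0)
sched.schedule(now + timedelta(hours=10), "client", "Morning! Invoice attached.")
sched.schedule(now + timedelta(minutes=5), "crissy", "Got it, thanks.")
ready = sched.due_messages(now + timedelta(hours=1))
```

Only the message whose time has arrived is dispatched; the morning one stays queued.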

Android Messages trick #5: Important message saving

When you run into a message you know you’ll want to reference again, save yourself the trouble of trying to dig it back up later and instead star it on the spot to make it fast as can be to find in the future.

It couldn’t be much easier to do: Whilst viewing an individual message thread, just press and hold your finger onto the specific message you want to save, then tap the star-shaped icon that appears in the bar at the top of the screen.

Then, when you want to find the message again, tap the search icon at the top of the main Messages screen and select “Starred” from the menu that comes up. That’ll show you every message you’ve starred for exceptionally effortless resurfacing.

Android Messages trick #6: Advanced message searching

Speaking of that Messages search system: Starring is sublime, but sometimes, you need to dig up an old message that you didn’t go out of your way to save.

The Android Messages app makes that even easier than you might realize. Tap that same search icon at the top of the app’s main screen — and in addition to searching your entire message history for any specific string of text, take note:

  • You can start typing out the name of anyone in your contacts, then select them from the suggestion that appears — and then type in some text to look for something specific only within messages from that one person.
  • You can use the options within the main Messages search screen to look specifically at images, videos, locations, or links people have sent you.
  • And you can combine any of those variables for even more granular finding — looking for links you sent to a particular client, for instance, or locations an out-of-town colleague sent to you.
The Android Messages app’s search system is chock-full of helpful info.

JR Raphael, IDG

How ’bout them apples?!

Android Messages trick #7: Easier-to-read text

File this next Android Messages feature under “accidental discoveries”: The next time you find yourself squinting at something in a messaging thread on your phone, try a good old-fashioned zoom gesture on the screen — placing your finger and thumb together and then spreading ’em slowly apart.

You’d never know it, but the Messages app supports that standard gesture for zooming into a conversation. The inverse applies, too: When you’re ready to zoom back out and make everything smaller, just bring your two fingers closer together.

And if those actions aren’t working for you, tap your profile picture in the upper-right corner of the main Messages screen and select “Messages settings,” then make sure the toggle next to “Pinch to zoom conversation text” is in the on position.

Android Messages trick #8: Custom conversation colors

While we’re thinking about easier reading, a brand spankin’ new Android Messages trick that’s trickling out as we speak can let you create a custom color palette for any conversations you’ve got goin’.

That way, you can always remember that texts with your significant other are in, say, purple, whereas messages with your most important client are in red. (Best not to get those two threads confused.)

This one works only with messages sent using the modern RCS messaging platform, which basically means messages involving other people on Android at this point (though that will allegedly expand to include iFolk soon — if Apple actually follows through on its years-late promise to stop deliberately dumbing down messages between iPhone users and people on other platforms).

With any currently supported conversation, though, open up the thread within Messages — then:

  • Tap the three-dot menu icon in the screen’s upper-right corner.
  • Select “Change colors” from the menu that appears. (And if you aren’t seeing it yet, even in an RCS-enabled conversation, give it a few days and check back again. This one’s actively rolling out right now, so it should reach you soon — if it hasn’t already!)
  • Pick the color scheme you prefer, then tap the Confirm button at the bottom.
Every Android Messages conversation can have its own distinctive color, if you take the time to set it up.

JR Raphael, IDG

Repeat for any other compatible conversations, and you’ll always know exactly what you’re looking at even with a fast glance — and without having to give it an ounce of active thought.

Android Messages trick #9: Enriched inline media

You know a fantastic way to waste time? I’ll tell ya: moving from one app to another just to glance at something someone sent you (like those blasted Bangles videos Crissy is always blasting your way).

Well, get this: Google’s Android Messages app can let you preview and even watch entire YouTube videos without ever leaving your current conversation — and it can give you helpful previews of web links right within the app, too.

The key is to make sure you’ve got the associated options enabled:

  • Tap your profile picture in the upper-right corner of the main Messages screen.
  • Select “Messages settings,” then tap “Automatic previews.”
  • Make sure the toggle next to “Show all previews” is on and active.

Now, the next time someone sends you a video link, you’ll see the video’s thumbnail and description right then and there, within the Messages conversation:

Videos expanded in-line within Messages — easy peasy.

JR Raphael, IDG

With web pages, Messages will show you just enough of a preview to let you make an educated decision about whether you want to tap the link or not.

Web links gain useful extra context once you enable the right option within the Android Messages settings.

JR Raphael, IDG

Almost painfully sensible, wouldn’t ya say?

Android Messages trick #10: Smarter shortcuts

If I had to pick the simplest Android Messages trick for enhancing your efficiency, it’d be embracing the built-in shortcuts Google gives us for faster message actions.

From the main Messages screen, you can swipe left or right on any message to perform an instant action — archiving the conversation, permanently deleting it, or toggling it between read and unread status.

All you’ve gotta do is mosey your way back into the Messages app’s settings areas and tap on the “Swipe actions” item to set things up the way you want…

Step-saving swipes within Messages — now available for your customization.

JR Raphael, IDG

…and then, just remember to actually use those gestures moving forward. (That part’s on you.)

Android Messages trick #11: Automated cleanup

Certain services love to send confirmation codes via text messaging when you sign in or try to perform some action. It may not be the most advisable or effective form of extra security, but — well, it’s better than nothing. And for better or for worse, it’s a pretty common tactic.

Core security considerations aside, the most irksome part of these confirmation codes is having ’em clutter up your messages list at every Goog-forsaken moment. But the Google-made Android Messages app can actually take care of that for you, without any ongoing effort — if you take about 20 seconds to make the right tweak now.

Here’s the secret:

  • Tappity-tap that comely character in the upper-right corner of the main Messages screen (y’know, the one that bears a striking resemblance to your oversized head).
  • Tap “Messages settings” in the menu that comes up, then select “Messages organization.”
  • Within that curiously created section, you’ll see only one option: “Auto-delete OTPs after 24 hrs.” OTP may not exactly be an everyday, universally known abbreviation, but fear not — for it isn’t an erroneous reference to an early 90s rap hit with equally ambiguous meaning. Nope: It stands for one-time password, which is the same thing we’re thinking about here.
  • Flip that toggle into the on and active position, then flip a finger of your choice to all the confirmation codes in your messages list and rest easy knowing they’ll be auto-purged a day after their arrival from that point forward.

Who’s down with OTP? Every last homie. (I apologize.)
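If you’re curious how that sort of auto-cleanup can work, it’s essentially a heuristic match plus an age check. A hypothetical Python sketch; the OTP-detection regex here is illustrative only, not Google’s actual detector:

```python
import re
from datetime import datetime, timedelta

# Illustrative heuristic: "code"/"OTP"-style wording followed by a short numeric code.
OTP_PATTERN = re.compile(r"\b(code|otp|verification)\b.*\b\d{4,8}\b", re.IGNORECASE | re.DOTALL)

def looks_like_otp(text):
    return bool(OTP_PATTERN.search(text))

def purge_expired_otps(messages, now, max_age=timedelta(hours=24)):
    """Keep everything except OTP-looking messages older than max_age."""
    return [
        (received, text) for received, text in messages
        if not (looks_like_otp(text) and now - received > max_age)
    ]

now = datetime(2024, 5, 1, 12, 0)
inbox = [
    (now - timedelta(hours=30), "Your verification code is 482913"),
    (now - timedelta(hours=2), "Your login code is 115599"),
    (now - timedelta(hours=30), "Lunch tomorrow?"),
]
cleaned = purge_expired_otps(inbox, now)
```

The 30-hour-old code is purged; the fresh code and the ordinary message survive.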

Android Messages trick #12: Instant reactions

Slack-style reactions may seem silly on the surface, but they serve an important communication purpose in allowing you to quickly acknowledge a message without having to carry the conversation on further. Whether it’s a thumbs-up, a clapping hands symbol, or even perhaps an occasional burrito emoji, it really can be a handy way to say “Yup, got it” (or “Yup, want beefy goodness”) without having to use a single word.

You probably know you can summon a reaction within the Android Messages app by pressing and holding a specific message within a conversation and then selecting from the list of available emoji options — right? But beyond that, Messages packs an even faster way to issue a reaction in the blink of an eye.

And here it is: Simply double-tap your finger onto any individual message within a conversation. That’ll apply the heart reaction to it without the need for any long-press or symbol selection.

It’d be nice if there were a way to customize which reaction is used for that action by default — so that, obviously, we could all change it to the burrito emoji, since that’s what any sane person uses most often — but if and when a heart will do the job, now you’ve got a super-easy way to bring it into any conversation with a fast finger tap.

Android Messages trick #13: Less annoying iPhone interactions

Last but not least in our list of magnificent Messages enhancements is something specific for your conversations with the Apple-adoring animals in your life. And it relates to those very same sorts of reactions we were just going over.

One obnoxious side effect of Apple’s “no one exists outside of iOS” mentality, y’see, is the way the iPhone’s equivalents of those reactions show up on Android. Plain and simple, they show up as — well, plain and simple text messages, instead of coming through as reactions.

Surely you’ve encountered this, right? Those pointless messages you get from iGoobers that say stuff like “Loved ‘Please stop texting me, Crissy'”?

Well, get this: Google’s Android Messages app is actually able to intercept those absurd platform-specific reactions and turn ’em into standard reactions instead of plain-text interruptions. And it’ll take you all of 12 seconds to enable the option:

  • Head back into the Messages app’s settings.
  • Tap “Advanced.”
  • Look for the line labeled “Show iPhone reactions as emoji” and make sure the toggle next to it is in the on position.

All that’s left is to breathe a heavy sigh of relief — and to send Crissy a well-deserved burrito reaction.

Hey: Don’t let the learning stop here. Get six full days of advanced shortcut knowledge with my free Android Shortcut Supercourse. You’ll discover tons of time-saving tricks!

Android, Google, Messaging Apps, Mobile Apps, Smartphones
Category: Hacking & Security

LLM deployment flaws that catch IT by surprise

1 May 2024 - 12:00

For all of the promise of LLMs (large language models) to handle a seemingly infinite number of enterprise tasks, IT executives are discovering that they can be extremely delicate, opting to ignore guardrails and other limitations with the slightest provocation. 

For example, if an end user innocuously — or an attacker maliciously — inputs too much data into an LLM query window, no error message is returned, and the system doesn’t appear to crash. But the LLM will often instantly override its programming and disable all guardrails. 

“The friction is that I can’t add a bazillion lines of code. One of the biggest threats around [LLMs] is an efficient jailbreak of overflow,” said Dane Sherrets, a senior solutions architect at HackerOne. “Give it so much information and it will overflow. It will forget its systems prompts, its training, its fine-tuning.” (AI research startup Anthropic, which makes the Claude family of LLMs, wrote a detailed look at this security hole.) 
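One common mitigation is to enforce a hard budget on user input before a prompt ever reaches the model, so a flood of text can’t displace the system prompt. A minimal Python sketch of that idea, using a crude character count where a real system would count tokens:

```python
class PromptOverflowError(ValueError):
    pass

SYSTEM_PROMPT = "You are a support assistant. Never reveal internal data."
MAX_USER_CHARS = 4_000  # crude stand-in for a real token budget

def build_prompt(user_input: str) -> str:
    """Reject oversized input instead of letting it crowd out the system prompt."""
    if len(user_input) > MAX_USER_CHARS:
        raise PromptOverflowError(
            f"input of {len(user_input)} chars exceeds budget of {MAX_USER_CHARS}"
        )
    return f"{SYSTEM_PROMPT}\n\nUser: {user_input}"

ok = build_prompt("How do I reset my password?")
```

The check lives outside the model, so no amount of clever input can talk it out of enforcing the limit.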

Consider the case of a publicly held company that has to severely restrict access to not-yet-reported financials. Or a military contractor that needs to limit access to weapons blueprints to those with a specific clearance level. If an LLM becomes overloaded and ignores those restrictions, the consequences will be severe.

And that’s just one of the ways that LLM guardrails can fail. These systems are generally cloud-based, controlled by the vendor who owns the license to those particular LLM algorithms. A few enterprises (weapons manufacturers working for the government, for example) take the LLM code and solely run it on-premises in an air-gapped environment, but they are the rare exceptions.

IT leaders deploying LLMs have uncovered other subtle but serious flaws that put their systems and data at risk and/or fail to deliver useful results. Here are five major LLM issues to be aware of — and avoid — before it’s too late.

LLMs that see too much

One massive flaw in today’s LLM systems — which Microsoft acknowledged on March 6 when it introduced a new SharePoint feature for use with its Copilot LLM — is the ability to access a wide range of SharePoint files that are not intended to be shared. 

With Copilot, “when you enable access for a user, it replicates the access that they have. It can then access anything that they have access to, whether they know it or not,” said Nick Mullen, the IT governance manager for a Fortune 500 insurance company.

“The SharePoint repository runs in the background, but it also has access to anything that is public in your entire ecosystem. A lot of these sites are public by default,” said Mullen, who also runs his own security company called Sanguine Security.

Available in public preview, the new feature is called Restricted SharePoint Search. Microsoft says the feature “allows you to restrict both organization-wide search and Copilot experiences to a curated set of SharePoint sites of your choice.”

The current default option is for public access. According to Microsoft’s support documentation, “Before the organization uses Restricted SharePoint Search, Alex [a hypothetical user] can see not only his own personal contents, like his OneDrive files, chats, emails, contents that he owns or visited, but also content from some sites that haven’t undergone access permission review or Access Control Lists (ACL) hygiene, and doesn’t have data governance applied.” Because Alex has access to sensitive information (even if he’s not aware of it), so does Copilot.

The same problem applies to any corporate data storage environment. IT must thoroughly audit users’ data access privileges and lock down sensitive data before allowing them to run queries with an LLM.
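That audit can then be enforced in code: filter every document against the user’s entitlements before it enters the LLM’s context. A simplified Python sketch with hypothetical site names and ACLs:

```python
# Hypothetical ACLs: site name -> set of users allowed to read it.
ACLS = {
    "public-wiki": {"*"},
    "hr-reviews": {"hr-lead"},
    "q3-financials": {"cfo", "controller"},
}

def readable_sites(user):
    """Sites this user may read; '*' marks genuinely public sites."""
    return {site for site, allowed in ACLS.items() if "*" in allowed or user in allowed}

def retrieve_for_llm(user, documents):
    """Only pass documents from sites the user is entitled to into the LLM context."""
    allowed = readable_sites(user)
    return [doc for doc in documents if doc["site"] in allowed]

docs = [
    {"site": "public-wiki", "text": "Expense policy"},
    {"site": "q3-financials", "text": "Unreported Q3 revenue"},
]
context = retrieve_for_llm("alex", docs)
```

The unreported financials never reach the model for this user, regardless of how the query is phrased.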

LLMs with the keys to the kingdom

Part of the problem with LLMs today is that they are often unintentionally given broad or even unlimited access to all enterprise systems. Far worse, Mullen said, is that most of the current enterprise defensive systems will not detect and therefore not block the LLM, even if it goes rogue. 

This means that enterprises have “the most powerful and intuitive search engine that can search across everything,” he said. “Historically, that type of internal scanning would fire off an alert. But LLMs are different. This is an entirely new threat vector that is extremely difficult to detect. EDR [endpoint detection and response] is not going to pick it up because it’s behaving as expected. Right now, there is not a good way to secure that. Depending on who is compromised, an attacker could gain access to a treasure trove.”

Added Mullen: “LLMs are very temperamental, and people are getting a little bit ahead of themselves. The technology is so new that a lot of the risks are still unknown. It’s a scenario where it’s not going to be known until you see it. It’s the law of unintended consequences. [IT is] turning [LLMs] on and giving them access to an insane amount of resources, which should give every organization pause.”

Artur Kiulian, the founder of PolyAgent, a nonprofit research lab focused on AI issues, sees many enterprises embracing LLMs too quickly, before the proper controls can be put into place.

“Most enterprises that are implementing LLMs are at the stage of experimentation,” Kiulian said. “Most companies use the guardrails of prompt engineering. It’s not enough. You need permission-based controls. Most enterprises are simply not there yet.”

HackerOne’s Sherrets agreed with how risky LLMs are today: “It can interact with other applications. It’s terrifying because you are giving black box control over doing things in your internal infrastructure. What utilities is the LLM touching?”

David Guarrera, a principal with EY Americas Technology Consulting who leads generative AI initiatives, is also concerned about the risks posed by early enterprise LLM deployments. “There are a lot of new emerging attacks where you can trick the LLMs into getting around the guardrails. Random strings that make the LLM go crazy. Organizations need to be aware of these risks,” Guarrera said.

He advises enterprises to create isolated independent protections for sensitive systems, such as payroll or supply chain. IT needs “permissions that are handled outside of the LLM’s [access]. We need to think deeply how we engineer access to these systems. You have to do it at the data layer, something that is invisible to the LLM. You also need to engineer a robust authentication layer,” he said.
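Guarrera’s advice to handle permissions outside the LLM can be sketched as a gate that checks the human caller’s entitlements before any model-requested action runs; the model never sees or evaluates the permission data. An illustrative Python sketch (the entitlement names are hypothetical):

```python
# Hypothetical entitlements, enforced entirely outside the model.
ENTITLEMENTS = {
    "payroll.read": {"hr-lead", "cfo"},
    "payroll.write": {"cfo"},
}

class PermissionDenied(Exception):
    pass

def execute_tool_call(user, action, handler, *args):
    """Gate every LLM-requested action on the human caller's entitlements.

    The check happens at this layer regardless of what the model asked for,
    so prompt tricks cannot widen access.
    """
    if user not in ENTITLEMENTS.get(action, set()):
        raise PermissionDenied(f"{user} may not perform {action}")
    return handler(*args)

result = execute_tool_call("cfo", "payroll.read", lambda: "payroll summary")
```

An unauthorized caller gets a hard denial at the data layer, exactly the kind of control that is invisible to the LLM.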

LLMs with a civil service mentality

Another concern is trying to program LLMs to manage need-to-know rules, the idea that the system will restrict some data, sharing it only with people with certain roles in the company or who work in specific departments.

This runs into what some describe as the civil service mentality problem. That is where someone is trained on the rules and might even memorize the rules, but they are not trained on why the rules were initially created. Without that background, they can’t make an informed decision about when an exception is warranted, and they therefore tend to interpret the rules strictly and literally.

That is also true for LLMs. But much sensitive enterprise data is not nearly that binary.

Take the earlier example of the finances of a publicly held company. It is true that data about unannounced finances for this quarter have to be restricted to a handful of authorized people. But has the LLM been programmed to know that the data is instantly world-readable as soon as it is announced and filed with the SEC? And that only the data reported is now public, while unreported data is still proprietary?

A related issue: Let’s say that it is crunch time for the finances to be prepared for filing, and the CFO asks for — and is granted — permission for an additional 30 people from different company business units to temporarily help with the filings. Does someone think to reprogram the LLM to grant temporary data access to those 30 temporary resources? And does someone remember to go back and remove their access once they return to their regular roles?
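One way to keep such temporary access from lingering is to make every grant time-boxed so it expires on its own. A simplified Python sketch of that pattern:

```python
from datetime import datetime, timedelta

class TemporaryGrants:
    """Time-boxed access grants that expire automatically, so nobody has to
    remember to revoke them (the exact failure mode in the CFO example)."""
    def __init__(self):
        self._grants = {}  # user -> expiry time

    def grant(self, user, now, duration):
        self._grants[user] = now + duration

    def has_access(self, user, now):
        expiry = self._grants.get(user)
        return expiry is not None and now < expiry

g = TemporaryGrants()
start = datetime(2024, 5, 1)
g.grant("temp-analyst", start, timedelta(days=14))
during = g.has_access("temp-analyst", start + timedelta(days=7))
after = g.has_access("temp-analyst", start + timedelta(days=30))
```

Access simply evaporates once the filing crunch is over, with no manual cleanup step to forget.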

Unrecognized glitches

Another LLM concern is more practical. Veteran IT managers have many years of experience working with all manner of software. Their experience teaches them how systems look when they crash, such as slowing down, halting, generating error messages, and throwing out screens of garbage characters. But when an LLM glitches — its version of crashing — it doesn’t act that way.

“When traditional software is broken, it’s obvious: screens don’t load, error messages are everywhere. When [LLM] software is broken, it’s much more opaque: you don’t get glaring errors, you just get a model with bad predictions,” said Kevin Walsh, head of artificial intelligence at HubSpot. “It may take weeks or months of having the LLM out in the real world before hearing from users that it’s not solving the problem it is supposed to.”

That could be significant, because if IT doesn’t recognize that there is a problem quickly, its attempts to fix and limit the system will be delayed, possibly making the response too late to stop the damage.

Because LLMs fail differently and in far more hidden ways than traditional software, IT needs to set up far more tracking, testing, and monitoring. It might be a routine assignment for someone to test the LLM each morning.
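That routine check can be as simple as a small golden-set harness: run the model against prompts with known-good answers and flag drift. An illustrative Python sketch with a toy stand-in model:

```python
def run_golden_checks(model, golden_set):
    """Run the model against known prompts and flag drifting answers.

    `model` is any callable prompt -> answer; each `check` is a predicate
    on the answer. Failures surface silent degradation that no crash or
    error message would ever reveal.
    """
    failures = []
    for prompt, check in golden_set:
        answer = model(prompt)
        if not check(answer):
            failures.append(prompt)
    return failures

# Toy stand-in model, for illustration only.
def toy_model(prompt):
    return "Paris" if "capital of France" in prompt else "I don't know"

golden = [
    ("What is the capital of France?", lambda a: "Paris" in a),
    ("Summarize our refund policy.", lambda a: "refund" in a.lower()),
]
failures = run_golden_checks(toy_model, golden)
```

A nonempty failure list is the early-warning signal that traditional crash monitoring will never produce for an LLM.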

Unrealistic expectations

Allie Mellen, principal analyst for SecOps and AI security tools at Forrester, says there is an inaccurate perception of LLMs, often because LLMs do such a persuasive job of impersonating human thought.

“We have this flawed perception of generative AI because it appears more human. It can’t have original thoughts. It just anticipates the next word. The expectation that it can write code is way overblown,” she said.

LLMs need to be handled very carefully, she added. “There are many ways around the guardrails. An individual might come up with a slightly different prompt” to get around programmed restrictions, she said.

IT “must focus on what can realistically be implemented in realistic use cases,” Mellen said. “Don’t treat it as though LLMs are hammers and all of your problems are nails. The [LLM] capabilities are being oversold by most of the business world — investors and executives.”

Generative AI, IT Operations
Category: Hacking & Security

10 ways to turn off Windows’ worst ads

1 May 2024 - 12:00

Both Windows 11 and Windows 10 are full of advertisements and other Microsoft-provided messages that pop up seemingly everywhere and can get in the way of your day-to-day routines. And then there are things that aren’t exactly ads — noisy notifications about viral online articles on MSN, for instance, where Microsoft gets a cut of the advertising. 

Want to get rid of all the annoying ads and pop-ups you can? After a few tweaks, Windows will quiet down and stop bothering you so much when you’re trying to get work done. (Alas, Microsoft doesn’t make it possible to turn off everything, so don’t be surprised if you still see a few surprises even after following this guide.) 

I’ve got so many more useful PC tips and tricks to share with you! Sign up for my free Windows Intelligence newsletter — three things to try every Friday. Plus, get free copies of Paul Thurrott’s Windows 11 and Windows 10 Field Guides (a $10 value) for signing up. 

Disable Start menu ads

Windows 11 is getting advertisements for apps in its Start menu — something Windows 10 PCs already have. To avoid seeing these: 

  • In Windows 11, open the Settings app and head to Personalization > Start. Turn off “Show recommendations for tips, shortcuts, new apps, and more.” 
  • In Windows 10, open the Settings app and head to Personalization > Start. Turn off “Show suggestions occasionally in Start.”  
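For admins who would rather script the same toggle than click through Settings, the equivalent per-user registry values are widely documented in the Windows community; the value names below are my best understanding, not taken from this article, so verify them against your Windows build before rolling anything out:

```reg
Windows Registry Editor Version 5.00

; Windows 10: "Show suggestions occasionally in Start"
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager]
"SystemPaneSuggestionsEnabled"=dword:00000000

; Windows 11: "Show recommendations for tips, shortcuts, new apps, and more"
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"Start_IrisRecommendations"=dword:00000000
```

Sign out and back in (or restart Explorer) for the change to take effect.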
Get rid of notification ads and full-screen prompts 

Windows might sometimes send you notification pop-ups with “tips and suggestions.” These tips can include recommendations to use Microsoft Edge and messages pushing the Microsoft Rewards points program. Additionally, Windows sometimes shows you “finish setting up your PC” prompts with messages about using OneDrive and Microsoft 365. To get rid of these: 

  • In Windows 11, open the Settings app and head to System > Notifications. Scroll down to the bottom of the screen, expand the “Additional settings” section, and uncheck the three options here: “Get tips and suggestions when using Windows,” “Suggest ways to get the most out of Windows and finish setting up this device,” and “Show the Windows welcome experience after updates and when signed in to show what’s new and suggested.” 
  • In Windows 10, open the Settings app and head to System > Notifications & actions. Turn off these three options: “Show me the Windows welcome experience after updates and occasionally when I sign in to highlight what’s new and suggested,” “Suggest ways I can finish setting up my device to get the most out of Windows,” and “Get tips, tricks, and suggestions as you use Windows.” 
Stop seeing ads in Settings 

Windows shows you more “suggestions” for subscriptions like Microsoft 365, Copilot Pro, and Xbox Game Pass in the Settings app. To get rid of these: 

  • In Windows 11, open the Settings app and head to Privacy & security > General. Turn off “Show me suggested content in the Settings app.” 
  • In Windows 10, open the Settings app and head to Privacy > General. Turn off “Show me suggested content in the Settings app.” 

The Settings app now pushes Microsoft’s subscription services hard. 

Chris Hoffman, IDG

Hide ads in File Explorer 

Microsoft has used banners in File Explorer to show advertisements for OneDrive storage. To avoid seeing these: 

  • In Windows 11, open File Explorer, click the “…” menu on the toolbar, and select “Options.” Click over to the “View” tab, scroll down to near the bottom of the list, and uncheck “Show sync provider notifications.” Click “OK.” 
  • In Windows 10, open File Explorer, click the “View” tab on the ribbon, and click “Options.” Click over to the “View” tab, scroll down to near the bottom of the list, and uncheck “Show sync provider notifications.” Click “OK.” 
Avoid lock screen ads 

Windows PCs can use Microsoft’s Windows Spotlight feature to show regularly updated background images on the lock screen. It’s a nice feature, but Microsoft has also used it to push full-screen advertisements for PC games and other advertising-type messages. To stop this from happening: 

  • In Windows 11, open the Settings app and head to Personalization > Background. Set “Personalize your background” to something like “Picture” and choose whatever picture you like — anything but “Windows Spotlight.” 
  • In Windows 10, open the Settings app, head to Personalization > Lock screen. Click the “Background” box and select “Picture” or “Slideshow” — anything but “Windows Spotlight.” Turn off the “Get fun facts, tips, and more from Windows and Cortana on your lock screen” switch here, too. (It won’t appear while Windows Spotlight is selected.) 

Personally, I put up with this — I’d rather have the fresh lock-screen images, even if I see an advertisement every now and then. It’s up to you. 
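
If you do decide to turn Spotlight off entirely, the lock-screen feature is commonly associated with these per-user values. This is an unofficial mapping, so verify it on your build and back up the registry before merging:

```reg
Windows Registry Editor Version 5.00

; Believed Windows Spotlight switches for the lock screen
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager]
"RotatingLockScreenEnabled"=dword:00000000
"RotatingLockScreenOverlayEnabled"=dword:00000000
```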

Hide clutter in the search pane 

The search box on the taskbar and the pop-up search experience both have “highlights” that recommend all kinds of shopping content, games, and other viral things. To turn those off: 

  • In Windows 11, open the Settings app and head to Privacy & security > Search permissions. Scroll down and turn off “Show search highlights” here. 
  • Windows 10 does not have this feature, so there’s nothing to turn off. 
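
On Windows 11, the search-highlights toggle is generally reported to map to this per-user value — again unofficial, so back up the registry before merging:

```reg
Windows Registry Editor Version 5.00

; Believed equivalent of turning off "Show search highlights"
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\SearchSettings]
"IsDynamicSearchBoxEnabled"=dword:00000000
```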

The search pane normally recommends shopping and games when you start a search. 

Chris Hoffman, IDG

Never see feedback popups 

Windows might sometimes ask for feedback about your PC experience: Would you recommend Windows to other people? To avoid these interruptions and stop Windows from asking for feedback: 

  • In Windows 11, open the Settings app and head to Privacy & security > Diagnostics & feedback. Click the “Feedback frequency” box and set it to “Never.” 
  • In Windows 10, open the Settings app and head to Privacy > Diagnostics & feedback. Scroll down and set the “Feedback frequency” box to “Never.” 
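
Setting “Feedback frequency” to “Never” is commonly reported to write this per-user value (unofficial mapping; many guides also suggest deleting any “PeriodInNanoSeconds” value under the same key):

```reg
Windows Registry Editor Version 5.00

; Believed equivalent of setting "Feedback frequency" to "Never"
[HKEY_CURRENT_USER\Software\Microsoft\Siuf\Rules]
"NumberOfSIUFInPeriod"=dword:00000000
```
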

Turn off the viral firehose in Widgets 

Windows 11’s Widgets experience pushes viral news articles and shows stock price movements on your taskbar by default. Windows 10 has a similar feature that also recommends viral stories. To turn off Widgets completely: 

  • In Windows 11, right-click an empty spot on the taskbar, select “Taskbar settings,” and turn off “Widgets.” 
  • In Windows 10, right-click an empty spot on the taskbar, point to “News and interests,” and select “Turn off.” 
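
On Windows 11, the Widgets taskbar toggle is widely understood to map to this per-user value. It’s an unofficial mapping, and on some managed or newer builds the setting may be locked by policy, so verify before relying on it:

```reg
Windows Registry Editor Version 5.00

; Believed equivalent of turning off "Widgets" in Taskbar settings (Windows 11)
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"TaskbarDa"=dword:00000000
```

Explorer usually picks this up immediately; if not, sign out and back in.
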

Or, you can just turn off those viral stories: 

  • In Windows 11, click the Widgets icon at the left side of the taskbar, click the gear icon at the top-right corner of the Widgets pane, click “Show or hide feeds,” and turn off “My feed.” 
  • Windows 10 doesn’t let you turn off the viral story feed while keeping the weather on the taskbar. 

Windows 11’s Widgets feed is still the most annoying part of the operating system. 

Chris Hoffman, IDG

Toss apps that come stuck to your Start menu 

Windows PCs come with a bunch of app shortcuts “pinned” to their Start menus. Most of these apps aren’t technically installed yet — they’ll just be installed if you click their shortcuts. For example, you might see apps like “Luminar Neo – AI Photo Editor” and “Grammarly” pinned to your Start menu. To get rid of them: 

  • In Windows 11, open the Start menu. Look at the list of pinned apps. Right-click apps you don’t use and select either “Uninstall” or “Unpin from Start.” 
  • In Windows 10, open the Start menu. Look at the list of pinned app tiles on the right side of the menu. Right-click apps you want to get out of there and select either “Uninstall” or “Unpin from Start.” 

If your Windows 10 PC is old enough, you might even see a tile for Candy Crush! (Amusingly enough, Microsoft now owns Candy Crush after its controversial acquisition of Activision Blizzard.) 

You might also want to uninstall bundled apps you don’t want. For example, many new PCs come with a trial of McAfee antivirus, which you can uninstall if you’re not going to use it. 

Clean up Microsoft Edge

The Microsoft Edge browser is stuffed full of viral news stories, AI features, links to MSN games, recommendations for coupons, and all kinds of other additional things. You can avoid them by switching to another web browser, but if you want to use Edge, here are a few steps you can take: 

  • Clean up Edge’s Start page: Open a new tab in Microsoft Edge, click the gear icon at the top-right corner of the page, turn off “Content,” and turn off “Show sponsored background.” 
  • Turn off the sidebar: Click the gear icon at the bottom of the sidebar on the right side of the Edge browser. Uncheck “Always show sidebar.” 
  • Get rid of shopping notifications: Click the menu icon near the top-right corner of the Edge browser window and choose “Settings.” Select “Privacy, search, and services” at the left side of the Settings page, scroll down to the “Services” section, and turn off “Save time and money with Shopping in Microsoft Edge.” 
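
If you manage Edge across several machines, Microsoft’s documented Edge policies can enforce similar settings machine-wide. A minimal sketch — policy names are taken from Microsoft’s Edge policy reference as of this writing, so double-check them there before deploying:

```reg
Windows Registry Editor Version 5.00

; Microsoft Edge policies (see Microsoft's Edge policy documentation)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
; Hide the sidebar
"HubsSidebarEnabled"=dword:00000000
; Turn off shopping/coupon features
"EdgeShoppingAssistantEnabled"=dword:00000000
; Remove feed content from the new tab page
"NewTabPageContentEnabled"=dword:00000000
```

Note that applying policies makes Edge report it is “managed by your organization.”
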

If you like some of these features, that’s fine! But there’s a lot going on in Edge, and changing just these few settings should quiet things down. 

Using Edge becomes a much more peaceful experience after you clean up its new tab page. 

Chris Hoffman, IDG

More PC annoyances you can end 

If you’d like to take control over your PC, be sure to check out my guide on how to sign in with a local account. There’s a secret handshake you can use while setting up your computer. 

Still find Windows annoying? Some of the biggest annoyances on Windows 11 and Windows 10 PCs aren’t ads at all! Here’s a list of 10 Windows annoyances — and how to fix them. For example, you can turn off Bing search in the Start menu completely — but Microsoft buries this option and makes it hard to find. 

Want something that’s not annoying? Get even more Windows insights, tips, and tricks with my free Windows Intelligence newsletter, which brings you three new things to try every Friday. Plus, get free Windows 10 and 11 Field Guides as soon as you sign up. 

Microsoft, Operating Systems, Windows, Windows 10, Windows 11, Windows PCs
Category: Hacking & Security

Amazon Q Business now available with new app-builder capabilities

30 April, 2024 - 22:37

Amazon Web Services (AWS) on Tuesday said its generative AI-based assistant for business applications — Amazon Q Business — is now generally available.

Introduced at re:Invent last year, Amazon Q Business can be used to have conversations, solve problems, generate content, gain insights, and take action by connecting to a company’s information repositories, data, and enterprise systems, AWS  said. 

To use Q as an assistant for business apps, enterprises first need to configure the generative AI (genAI) assistant by connecting it to existing data sources, which can include AWS’ S3 storage service as well as applications from vendors including Salesforce, Microsoft, Google, and Slack. 

Q currently supports connectors for more than 40 tools and applications.

Additionally, AWS has added a new app-building capability to Amazon Q Business, which is a web-based application.

Named Amazon Q Apps and currently in preview, the feature will allow enterprise users, including business users, to develop applications based on their enterprise data using natural language.

“With Q Apps, employees simply describe the app they want, in natural language, or they can take an existing conversation where Amazon Q Business helped them solve a problem, and with one click, Q will instantly generate an app that accomplishes their desired task that can be easily shared across their enterprise,” said Mai-Lan Tomsen Bukovec, vice president of technology at AWS. 

Q Apps could include HR or marketing apps designed either to onboard employees or to automate tasks. They can be accessed via the Amazon Q Business application environment, the company said; Q Apps is enabled by default and can be switched off from the Amazon Q Business console.

Bukovec said a Q App is made up of a collection of cards, with each card serving as a  user interface element that can be combined with other cards to generate an application.

“Cards take in user input, support file uploads, connect to other cards, generate text output, and allow actions through Amazon Q Business plugins,” the company said in a blog post.

Enterprise users can add to the Q app, edit it, or delete a card, AWS said. 

At a more basic level, text output and plugin cards contain prompt instructions that determine how Amazon Q Business is queried to generate a response. 

“When enterprise users use the Amazon Q Apps Creator, relevant cards are automatically generated with prefilled prompts. Users can further refine these prompts using simple, natural language,” the company said.

“When writing or editing a prompt for a card, your users can reference other cards using ‘@’ mention to select from the list of cards in the app. Users can also instruct in the prompt to reference your enterprise data already in Amazon Q Business,” AWS said.

Amazon Q Apps developed by one enterprise user can be shared with other users across the company via the Amazon Q Apps library.

Q Apps can also be copied and customized by other users to create a new version. 

Amazon Q Business is available in two subscription models — Lite and Pro — which are priced at $3 and $20 per user, per month, respectively. The Pro subscription offers Amazon Q Apps, extended capabilities via custom plugins, and the ability to gain insights via Q in QuickSight. These capabilities are not available in the Lite subscription pack.

Additionally, the Pro subscription allows enterprise users to receive responses in a conversational interface up to approximately seven pages compared to the Lite pack’s limit of one page.

Amazon Web Services, Enterprise Applications, Generative AI, Productivity Software
Category: Hacking & Security

CHIPS Act is working as billions of dollars in payouts are divvied out to semiconductor makers

30 April, 2024 - 19:44

More than a year and a half after the CHIPS and Science Act was signed into law, the Biden Administration has begun divvying up $52.7 billion in funding and tax incentives meant to spur semiconductor production on US soil, though the actual funding has yet to be disbursed.

Over the past several months, the administration, which championed the legislation, has allocated about $29 billion in funding among chipmakers, including Samsung, TSMC and Intel. In return, various chip designers and makers have pledged about $300 billion in current and future projects in the US, according to the White House.

The most recent announcement last week was for $6.14 billion toward Micron’s plans to build new fabrication plants in upstate New York and perform upgrades elsewhere. To date, Intel has reaped the most in promised funds: $8.5 billion.

The Department of Commerce, which is administering the CHIPS Act, has spent months negotiating with semiconductor designers and fabricators to gain commitments from the companies and to achieve specific milestones in their projects before getting government payouts.

For example, negotiations between the federal government and TSMC resulted in the Taiwanese semiconductor manufacturer being promised $6.6 billion in CHIPS Act funding; in return, the company is bringing its most advanced 2nm process technology to US shores and adding plans for a third fabrication plant to its Arizona site.

An artist’s rendition of Micron’s proposed fabrication plant, to be located in Onondaga County, NY. The plant will be the size of 40 football fields and is expected to provide close to 50,000 jobs for the region.

Micron Technologies

At the beginning of 2023, TSMC, the world’s largest chip maker, began construction on its second chip fabrication plant near Phoenix, AZ. For Biden, TSMC’s three plants represented the flagship of the CHIPS Act incentive program. The TSMC project, however, stalled, and the company announced it had pushed back its completion date to 2025 due to problems finding skilled labor.

TSMC had promised to make a $40 billion investment in its US chip production plant. The investment represents the largest ever foreign investment in Arizona and one of the largest in US history.

Micron said it might spend up to $100 billion over the next 20 years to expand its US facilities, including a $15 billion memory chip plant in its home base of Boise, ID.

Industry analysts say the CHIPS Act is having its desired effect — the largest semiconductor designers and makers are investing in the US. By 2030, research firm IDC expects that 30% of leading-edge chip technology will be produced in the US, Western Europe, and Japan.

“Today, the semiconductor supply chain is concentrated in Asia,” said Mario Morales, a group vice president at IDC. “In fact, 100% of the global leading-edge chip capacity — 5nm and below — is only available in Taiwan and Korea. This will change dramatically by the end of the decade as leading-edge manufacturing is reestablished in the western hemisphere and in Japan.”

The latest round of CHIPS Act funding will support Micron’s construction of the first two fabs of a planned four-building “megafab” focused on leading-edge DRAM chip production. Each fab will have 600,000 square feet of cleanrooms, totaling 2.4 million square feet across the four facilities — the largest amount of cleanroom space ever announced in the US and the size of nearly 40 football fields.

The CHIPS Act’s purpose was to strengthen American supply chain resilience after the pandemic and counter China’s rising share of the market. The US share of global semiconductor fabrication capacity has fallen from about 36% in 1990 to about 10% in 2020, according to a Congressional Research Service report. Meanwhile, China’s share of chip manufacturing has grown nearly 50% over the past two years and now comprises about 18% of the world’s supply.

More CHIPS Acts to come?

The White House has argued that CHIPS Act spending will grow America’s share of the world’s leading-edge chip market to 20% by 2030. But experts say more government incentives will be needed to sustain and continue that growth domestically.

“The first CHIPS Act is just the start, there will be more funds needed to sustain and include other parts of the supply chain like materials, OSAT, design, and tools,” Morales said. “I expect that a second CHIPS Act will likely be higher in value than the first and will be approved sometime in the second half of this decade — in 2026 or 2027.”

Gaurav Gupta, a vice president analyst at Gartner Research, agreed with Morales that more funding is needed, and that while the current funding closes the capital cost gap, it does not do much for future operational costs. “Various factors will continue to make it expensive for fabs to be competitive here, like the regulatory and compliance framework that causes delays, labor compensation, higher utility rates, etc,” he said.

“So, if the US government really wants the needle to move and make this current CHIPS Act have a real impact, I expect version 2.0, 3.0 and onwards to come. When and what amounts they would be is hard to predict for now,” Morales said.

The current CHIPS Act includes $39 billion in subsidies for chip manufacturing on US soil along with 25% investment tax credits for costs of manufacturing equipment, and $13 billion for semiconductor research and workforce training.

In December, Computerworld contacted the Department of Commerce to discover why funds from the CHIPS Act had yet to be distributed. The Department said at the time it was still in “complex negotiations” with chip manufacturers to ensure the money is wisely spent.

In February, the Administration announced $1.5 billion for GlobalFoundries to support the development and expansion of facilities in Malta, NY, and Burlington, VT.

Last month, more multi-billion-dollar distributions were announced, including $8.5 billion for Intel to support investments across four states (Chandler, AZ; Rio Rancho, NM; New Albany, OH; and Hillsboro, OR) to construct logic fabs, modernize advanced packaging facilities, and invest in R&D.

Along with Micron this month, $6.4 billion was allocated for Samsung to build an R&D facility and advanced packaging fabs in Taylor, TX, and to expand a current-generation and mature-node facility in Austin, TX. And $6.6 billion was earmarked for TSMC to support the development of three greenfield leading-edge fabs in Phoenix, AZ.

Chip production, the supply-chain crisis, and the new law


In 2021, the decline in domestic chip production was exposed by a worldwide supply-chain crisis that led to calls for reshoring manufacturing to the US. After more than a year of work from the Administration to respond to acute semiconductor shortages, Congress in August 2022 passed the measure. With the CHIPS Act spurring them on, semiconductor makers including Intel, Samsung, Micron, TSMC, and Texas Instruments unveiled plans for a number of new plants on US soil. (Qualcomm, in partnership with GlobalFoundries, also said it would invest $4.2 billion to double chip production in its Malta, NY facility.)

Companies became eligible in February 2023 to apply for the first round of CHIPS Act incentives totaling $39 billion for the construction of large-scale fabrication facilities. Last September, a second funding opportunity for small-scale fabrication projects opened.

The Commerce Department said the CHIPS Act has moved extremely fast for a government program. For example, as part of the funding application process, the agency has received over 530 statements of interest from companies in 42 states. The department also received 120 pre-applications and full applications for funding.

CPUs and Processors, Government
Category: Hacking & Security

Apple is intensely focused on its global AI efforts

30 April, 2024 - 17:19

Not so long ago, I can remember how Apple’s “failures” in AI made critics smile. Those smiles now seem to have faded. Instead, Apple is accelerating its efforts to deliver its own AI.

How do we know the company is moving fast? With more than 160,000 direct employees globally and hundreds of thousands more across partner firms, suppliers, and the currently beleaguered App economy, when the ship that is Apple moves in a direction, the rumor mill usually indicates the destination. Along those lines, we’ve heard a lot of talk across the last week.

Apple’s top secret AI labs

Apple has created a top secret AI research lab in Zurich, Switzerland. The Financial Times also claims the company has hired hundreds of leading AI researchers during the last couple of years, many of them from Google. 

These teams are focused on developing highly advanced AI models. What kind of models? In essence, these seem to be super-lightweight, highly focused neural networks capable of delivering really useful tools that function on the device.

To get a sense of what these might do, Apple researchers recently released a wave of eight new AI models capable of running on the device. The company calls them “Open-source Efficient Language Models”. 

Model behavior

These are small models trained on public data that work on the device to solve focused tasks. The aim is to make it possible to run generative AI (genAI) tools on the device itself, rather than using servers, which preserves privacy, improves efficiency, and safeguards information. These solutions promise truly mobile AI devices that will work offline, with the code being made available to researchers on GitHub.

These aren’t the first AI models to slip out of Apple’s research labs. Earlier this year, the company published AI models that can edit photos through written prompts, and another to help people optimize their use of an iPhone. Interestingly, six of the researchers named as authors of a paper describing the latter technology were former Google employees hired in the last two years. 

Making friends

Apple also seems to be exploring potential partnerships. In recent months, we’ve heard it has spoken with both Google and Baidu to make their AI models available to iPhones; last week, we heard it has recommenced discussions with OpenAI. 

This has led both to speculation about an AI-dedicated App Store from which users could access bespoke selections of third-party AI solutions, and to rumors that Apple seeks to license third-party models for its devices.

Apple also seems focused on augmenting its existing apps with AI. AppleInsider claims the company is testing a version of Safari with a built-in, AI-powered intelligent search agent capable of providing summaries of websites.

Think ethically

Throughout all of this, Apple has maintained a tight silence about the totality of its AI strategy. Critically, however, it’s important to understand that the company is not interested in building solutions that provide incorrect or inappropriate responses and would rather be cautious than introduce an AI that is flawed. It seeks to develop ethical, useful AI that provides real benefits to users while retaining privacy. 

This also extends to how it trains its AI models; if you look at its published research papers, you’ll find many of those it has revealed have been trained using publicly available data, rather than breaching copyright.

Apple is also investing in AI infrastructure

Apple will announce its financial results on Thursday. These aren’t expected to impress, but it seems likely much of the disappointment is already baked in. But for those of us curious about the extent to which Apple is preparing the ground for AI, it will be interesting to track how much the company is investing in capital expenditure. 

We know such spending is taking place:

  • Just over a week ago, the company announced an expansion to its Singapore campus to provide space for “new roles in AI and other key functions”, and is making similar investments in Indonesia.
  • Apple also recently acquired French AI firm Datakalab. That company specializes in on-device processing, algorithm compression, and embedded AI.
  • Hints that Apple will have some reliance on AI in the cloud are also visible in news that the company has appointed former Google executive Sumit Gupta as director of products, Apple Cloud. Gupta has years of experience in AI infrastructure, including a previous six-year stint as chief AI strategy officer and CTO of AI at IBM.

All of these suggest that sizable investments in the infrastructure required to power AI on two billion actively used Apple devices are already taking place.

Securing the servers with Apple Silicon

Investment extends to R&D for infrastructure. After all, it means something that Apple is allegedly considering building servers powered by Apple Silicon chips. Those servers could go some way toward providing the kind of computational power required to drive AI services in the cloud, while also mitigating the enormous energy consumption such services require.

These data points should provide some color as we accelerate toward introduction of new M4(?)-powered, AI-capable iPads at an online Apple keynote next week, followed by a little more insight at WWDC 2024 in June — and culminating with the big AI iPhone 16 reveal in fall.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Artificial Intelligence
Category: Hacking & Security

What Capgemini software chief learned about AI-generated code: highly usable, ‘too many unknowns’ for production

30 April, 2024 - 12:00

Capgemini Engineering is made up of more than 62,000 engineers and scientists across the globe whose job it is to create products for myriad clients, from industrial companies building cars, trains, and planes to independent software vendors.

So, when AI-assisted code generation tools began flooding the marketplace in 2022, the global innovation and engineering consultancy took notice. After all, one-fifth of Capgemini’s business involves producing software products for a global clientele facing the demands of digital transformation initiatives.

According to Capgemini’s own survey data, seven in 10 organizations will be using generative AI (genAI) for software engineering in the next 12 months. Today, 30% of organizations are experimenting with it for software engineering, and an additional 42% plan to use it within a year. Only 28% of organizations are steering completely clear of the technology.

In fact, genAI already assists in writing nearly one in every eight lines of code, and that ratio is expected to hit one in every five lines of code over the next 12 months, according to Capgemini.

Jiani Zhang took over as the company’s chief software officer three years ago. In that time, she’s seen the explosion of genAI’s use to increase efficiencies and productivity among software development teams. But as good as it is at producing usable software, Zhang cautioned that genAI’s output isn’t yet ready for production — or even for creating a citizen developer workforce. There remain a number of issues developers and engineers will face when piloting its use, including security concerns, intellectual property rights issues, and the threat of malware.

Jiani Zhang, Chief Software Officer at Capgemini Engineering 

Capgemini

That said, Zhang has embraced AI-generated software tools for a number of lower-risk tasks, and it has created significant efficiencies for her team. Computerworld spoke with Zhang about Capgemini Engineering’s use of AI; the following are excerpts from that interview.

What’s your responsibility at Capgemini? “I look after software that’s in products. The software is so pervasive that you actually need different categories of software and different ways it’s developed. And, you can imagine that there’s a huge push right now in terms of moving software [out the door].”

How did your journey with AI-generated software begin? “Originally, we thought about generative AI with a big focus on sort of creative elements. So, a lot of people were talking about building software, writing stories, building websites, generating pictures and the creation of new things in general. If you can generate pictures, why can’t you generate code? If you can write stories, why not write user stories or requirements that go into building software? That’s the mindset of the shift going on, and I think the reality is it’s a combination of a market-driven dynamic. Everyone’s kind of moving toward wanting to build a digital business. You’re effectively now competing with a lot of tech companies to hire developers to build these new digital platforms.

“So, many companies are thinking, ‘I can’t hire against these large tech companies out here in the Bay Area, for example. So, what do I do?’ They turn to AI…to deal with the fact that [they] don’t have the talent pool or the resources to actually build these digital things. That’s why I think it’s just a perfect storm, right now. There’s a lack of resources, and people really want to build digital businesses, and suddenly the idea of using generative AI to produce code can actually compensate for [a] lack of talent. Therefore, [they] can push ahead on those projects. I think that’s why there’s so much emphasis on [genAI software augmentation] and wanting to build towards that.”

How have you been using AI to create efficiencies in software development and engineering? “I would break out the software development life cycle almost into stages. There is a pre-coding phase. This is the phase where you’re writing the requirements. You’re generating the user stories, and you create epics. Your team does a lot of the planning on what they’re going to build in this area. We can see generative AI having an additive benefit there just generating a story for you. You can generate requirements using it. So, it’s helping you write things, which is what generative AI is good at doing, right? You can give it some prompts of where you want to go and it can generate these stories for you.

“The second element is that [software] building phase, which is coding. This is the phase people are very nervous about, and for very good reason, because the code generation aspect of generative AI is still almost like a little bit of wizardry. We’re not quite sure how it gets generated. And then there’s a lot of concerns regarding security, like where did this get generated from? Because, as we know, AI is still learning from something else. And you have to ask [whether] my generated code is going to be used by somebody else? So there’s a lot of interest in using it, but then there’s a lot of hesitancy in actually doing the generation side of it.

“And then you have the post-coding phase, which is everything from deployment, and testing, and all that. For that phase, I think there’s a lot of opportunity for not just generative AI, but AI in general, which is all focused around intelligent testing. So, for instance, how do you generate the right test cases? How do you know that you’re testing against the right things? We often see from a lot of clients where effectively over the years they’ve just added more and more tests to that phase, and so it got bigger and bigger and bigger. But, nobody’s actually gone in and cleaned up that phase. So, you’re running a gazillion tests. Then you still have a bunch of defects because no one’s actually cleaned up the tests of defects they are trying to detect. So, a lot of this curates better with generative AI. Specifically, it can perform a lot of test prioritization. You can look at patterns of which tests are being used and not used. And, there’s less of a concern about something going wrong with that. I think AI tools make a very big impact in that area.

“You can see AI playing different roles in different areas. And I think that the front part has less risk and is easier to do. Maybe it doesn’t do as much as the whole code generation element, but again there’s so much hesitancy around being comfortable with the generated code.”

How important is it to make sure that your existing code base is clean or error free before using AI code generation tools? “I think it depends on what you’re starting from. With any type of AI technology, you’re starting with some sort of structure, some sort of data. You have some labeled data, you have some unlabeled data, and an AI engine is just trying to determine patterns and probabilities. So, when you say you want to generate code, well, what are you basing that new code off of?

“If you were to create a large language model or any type of model, what’s your starting point? If your starting point is your code base only, then yes, all of the residual problems that you have will most likely be inherited because it’s training on bad data. Thinking about that is how you should code. A lot of people think, ‘I’m not going to be arrogant to think that my code is the best.’

“The more generic method would be to leverage larger models with more code sets. But then the more code you have gets you deeper into a security problem. Like where does all that code come from? And am I contributing to someone else’s larger code set? And what’s really scary is if you don’t know the code set well, is there a Trojan horse in there? So, there’s a lot of dynamics to it.

“A lot of the clients that we face love these technologies. It’s so good, because it professes an opportunity to solve a problem, which is the shortage of talent, so as to actually build a digital business without that. But then they’re really challenged. Do I trust the results of the AI? And do I have a large enough code base that I’m comfortable using and not just imagining that some model will come from the ether to do this.”

How have you addressed the previous issue — going with a massive LLM codebase or sticking to smaller, more proprietary in-house code and data? “I think it depends on the sensitivity of the client. I think a lot of people are playing with the code generation element. I don’t think a lot of them are taking [AI] code generation to production because, like I said, there’s just so many unknowns in that area.

“What we find is more clients have figured out more of that pre-code phase, and they’re also focusing a lot on that post-code phase, because both of those are relatively low risk with a lot of gain, especially in the area of like testing because it’s a very well-known practice. There’s so much data that’s in there, you can quickly clean that up and get to some value. So, I think that’s a very low-hanging fruit. And then on the front-end side of it, you know a lot of people don’t like writing user stories, or the requirements are written poorly and so the amount of effort that can take away from that is meaningful.”

What are the issues you’ve run into with genAI code generation? “While it is the highest value [part of the equation]…, it is also about generating consistent code. But that’s the problem. Because generative AI is not prescriptive. So, when you tell it, ‘I want two ears and a tail that wags,’ it doesn’t actually give you a Labrador retriever every time. Sometimes it will give you a Husky. It’s just looking at what fits that [LLM]. So…when you change a parameter, it could generate completely new code. And then that completely new code means that you’re going to have to redo all of the integration, deployment, all those things that come off of it.

“There’s also a situation where even if you were able to contain your code set, build an LLM with highly curated, good engineering practices [and] software practices and complement it with your own data set — and generate code that you trust — you still can’t control whether the generated code will be the same code every single time when you make a change. I think the industry is still working to figure those elements out, refining and re-refining to see how you can have consistency.”
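The decoding settings are one lever teams pull to tame that inconsistency. The toy sketch below is plain Python, not any vendor's API: it models token sampling with a pseudo-random generator to show why pinning a seed (or, on real platforms, lowering the sampling temperature) makes the "generated code" repeatable, while free sampling does not.

```python
import random

# Toy model of LLM decoding: pick "tokens" by weighted sampling.
# VOCAB and WEIGHTS are invented for illustration; real code-generation
# services expose comparable knobs (seed, temperature), but this is not
# any particular provider's interface.
VOCAB = ["def", "class", "return", "for", "while", "if"]
WEIGHTS = [5, 1, 3, 2, 2, 4]

def generate(seed=None, n=6):
    # A fixed seed makes the sampling deterministic; seed=None leaves
    # each run free to diverge.
    rng = random.Random(seed)
    return [rng.choices(VOCAB, weights=WEIGHTS)[0] for _ in range(n)]

# With a pinned seed, the "generated code" is reproducible...
assert generate(seed=42) == generate(seed=42)
# ...while unseeded runs may differ between invocations, which is the
# consistency problem described above: same prompt, different code.
```

Pinning the sampling does not solve the deeper issue the interview raises (a changed prompt still yields structurally different code), but it does make a given prompt's output stable enough to test.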

What are your favorite AI code-augmentation platforms? “I think it’s quite varied. I think the challenge with this market is it’s very dynamic; they keep adding new feature sets, and the new feature sets kind of overlap with each other. So, it’s very hard to determine which one is best. I think there are certain ones that are leading right now, but at the same time, the dynamics of the environment [are] such that you could see something introduced that’s completely new in the next eight weeks. So, it’s quite varied. I wouldn’t say that there is a favorite right now. I think everyone is learning at this point.”

How do you deal with code errors introduced by genAI? What tools do you use to discover and correct those errors, if any? “I think that then goes into your test problem. Like I said, there’s a consistency problem that fundamentally we have to take into account, because every time we generate code, it could be generated differently. Refining your test set and using that as an intelligent way of testing is a really key area to make sure that you catch those problems. I personally believe those errors are there because the software development life cycle is so vast.

“It’s all about where people want to focus the post-coding phase. That testing phase is a critical element to actually getting any of this right. …It’s an area where you can quickly leverage the AI technologies and have minimal risk introduced to your production code. And, in fact, all it does is improve it. [The genAI] is helping you be smarter in running those test sets. And those test sets are then going to be highly beneficial to your generated code as well, because now you know what your audience is also testing against.

“So if the generated code is bad, you’ll catch it in these defects. It’s worth a lot of effort to look at that specific area because, like I said, it’s a low-risk element. There’s a lot of AI tools out there for that.

“And, not everything has to be generative AI, right? You know, AI and machine learning [have] been here for quite some time, and there’s a lot of work that’s already been done to refine [them]. So, there’s a lot of benefit and improvement that’s been done to those older tools. The market has this feeling that they need to adopt AI, but AI adoption hasn’t been the smoothest. So then [developers] are saying, ‘Let’s just leapfrog and let’s just get into using generative AI.’ The reality is that you can actually fix a lot of these things based off of technology that didn’t just come to market 12 months ago. I think there’s definitely benefit in that.”
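The low-risk testing approach described above is often implemented as characterization (or "golden master") tests: pin down the current observable behavior of a routine before any regenerated version of it is accepted. A minimal Python sketch, where the made-up slugify() function stands in for whatever code a genAI tool might regenerate:

```python
import re

def slugify(title: str) -> str:
    """The (possibly regenerated) code under test: URL-friendly slugs.

    This function and its cases are illustrative, not from any project.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Each assertion locks in today's behavior. If a regenerated version of
# slugify() drifts, these fail before the change reaches production.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("Already-Slugged") == "already-slugged"
```

Because the tests describe behavior rather than implementation, they survive the "completely new code" problem: any regeneration that passes them is, by definition, behaviorally equivalent for the captured cases.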

What generative AI tools have you tried and what kind of success have you seen? “We’ve tried almost all of them. That’s the short answer. And they’ve all been very beneficial. I think that the reality is, like I said before, the landscape of genAI tools today is pretty comparable between the different cloud service providers. I don’t see a leading one versus a non-leading one. I feel like they all can do some pretty nice things.

“I think that the challenge is staying up to date with what’s available, because they keep releasing new features. That is encouraging, but at the same time you have to find a way to implement and use the technology in a meaningful way. At this point, the speed at which they’re pushing out these features is mismatched with the adoption in the industry. I think there are a lot more features than actual adoption.

“We have our Capgemini Research Institute, through which we do a lot of polls with executives, and what we found is about 30% of organizations are experimenting with genAI. And probably another 42% are going to be playing with it for the next 12 months. But that also means, from an adoption perspective, I think less than one-third of those actually using it in software engineering are really fundamentally going to be impacting their production flow with generative AI. So I think the market is still very much in the experimentation phase. And that’s why all the tools [are] pretty comparable in terms of what they can and can’t do.

“And again, it’s not really about whether the feature set is greater in one platform versus another. I think it’s more the application of it to solving a business problem that makes the impact.”

Do you use AI or generative AI for any software development? Forget pre-development and post-development for the moment. Do you actually use it to generate code that you use? “We do. Even for us, it is in an experimentation phase. But we have put in a lot of work ourselves in terms of refining generative AI engines so that we can generate consistent code. We’ve actually done quite a lot of experimentation and also proof of concepts with clients on all three of those phases [pre-code modeling, code development, post-code testing]. Like I said, the pre- and post- are the easier ones because there’s less risk.

“Now, whether or not the client is comfortable enough for that [AI] generated code to go to production is a different case. So, that proof of concept we’re doing is not necessarily production. And I think…taking it to production is still something that industry has to work through in terms of their acceptance.”

How accurate is the code generated by your AI tools? Or, to put it another way, how often is that code usable? I’ve heard from other experts the accuracy rate ranges anywhere from 50% to 80% and even higher. What are you finding? “I think the code is highly usable, to be honest. I think it’s actually a pretty high percentage because the generated code, it’s not wrong. I think the concern with generated code is not whether or not it’s written correctly. I think it is written correctly. The problem, as I said, is around how that code was generated, whether there were some innate or embedded defects in it that people don’t know about. Then you know the other question is where did that generated code come from, and whether or not the generated code that you’ve created now feeds a larger pool, and is that secure?

“So imagine if I’m an industrial company and I want to create this solution, and I generate some code [whose base] came from my competitor. How do I know if this is my IP or their IP? Or if I created it, did that somehow through the ether migrate to somebody else generating that exact same code? So, it gets very tricky in that sense unless you have a very privatized genAI system.”

Even when the code itself is not usable for whatever reason, can AI-generated code still be useful? “It’s true. There’s a lot of code that in the beginning may not be usable. It’s like with any learning system: you need to give it more prompts in order to tailor it to what you need. So, if you think about basic engineering, you define some integers first. If genAI can do that, you’ve now saved yourself some time from having to type in all the defined parameters and integer parameters and all that stuff, because that could all be pre-generated.

“If it doesn’t work, you can give it an additional prompt to say, ‘Well, actually I’m looking for a different set of cycles or different kind of run times,’ and then you can tweak that code as well. So instead of you starting from scratch, just like writing a paper, you can have someone else write an outline, and you can always use some of the intro, the ending, some of these things that aren’t the actual meat of the content. And the meat of the content you can continue to refine with generative AI, too. So definitely, it’s a big help. It saves you time from writing it from scratch.”
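That outline-then-refine loop can be sketched in a few lines. In this sketch, refine() is a stand-in for a call to a code-generation model (no real API is invoked, and the scaffold text is invented), so the flow stays runnable:

```python
# Sketch of the iterate-and-refine workflow described above. A real
# implementation would send each follow-up prompt to a model; here
# refine() just appends a marker so the loop itself is demonstrable.
def refine(draft: str, prompt: str) -> str:
    return draft + f"\n# refined per: {prompt}"

# The pre-generated scaffold, analogous to the "outline" of a paper.
draft = "def run_batch(items):\n    pass  # pre-generated scaffold"

# Each follow-up prompt layers a tweak onto the existing draft instead
# of regenerating it from scratch.
for prompt in ["use a 5s timeout", "log failures instead of raising"]:
    draft = refine(draft, prompt)

assert draft.count("# refined per:") == 2
```

The point of the structure is that the scaffold survives every iteration; only the "meat of the content" is reworked, mirroring the interviewee's writing analogy.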

Has AI or genAI allowed you to create a citizen developer workforce? “I think it’s still early. AI allows your own team to be a little faster in doing some of the things that they don’t necessarily want to do, or it can cut down the toil from a developer’s perspective of generating test cases or writing user stories and whatnot. It’s pretty good at generating the outline of code from a framework perspective. But for it to do code generation independently, I think we’re still relatively early on that.”

How effective has AI code generation been in creating efficiencies and increasing productivity? “Productivity, absolutely. I think that’s a really strong element of the developer experience. The concept is that if you hire some really good software developers, they want to be building features and new code and things of that sort, and they don’t like doing the pre-code responsibilities. So if you can solve more of that toil for them, get rid of more of those mundane, repetitive things, then they can be focused on more of the value-generation element of it.

“So, for productivity, I think it’s a big boost, but it’s not about developing more code. I think oftentimes it’s about developing better code. So instead of saying I spent hours of my day just writing a basic structure, that’s now pre-generated for me. And now I can think about how I optimize runtimes. How do I optimize the consumption or storage or whatnot?

“So, it frees up your mind to think about additional optimizations to make your code better, rather than just figuring out what the basis of the code is.”

One of the other uses I’ve heard from engineers and developers is that genAI is often not even used to generate new code, but it’s most often used to update old code. Are you also seeing that use case? “I think all the genAI tools are going to be generating new code, but the difference is that one very important use case is the one you just highlighted: the old-code migration element.

“What we also find is that a lot of clients have systems that are heavily outdated — systems that are 20, maybe 30 years old. There could be proprietary code in there that nobody understands anymore, and they’ve already lost that skill set. What you can do with AI-assisted code generation and your own team is create a corpus of knowledge of the old code, so you can actually ingest the old code and understand what that meta model is: what is it you’re trying to do? What are the right inputs and outputs? From there, you can actually generate new code in a new language that’s actually maintainable. And that’s a huge, huge benefit for clients.

“So, this is a great application of AI technology where it’s still generating your code, it’s not actually changing the old code. You don’t want it to change the old code, because it would still be unmanageable. So what you want is to have it understand the concept of what you’re trying to do so that you can actually generate new code that is much more maintainable.”
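The first step of that migration workflow, ingesting old code to recover its meta model of inputs and outputs, can be sketched with Python's standard ast module. The legacy snippet and function names below are invented for illustration; a real pipeline would use a parser for whatever language the legacy system is written in:

```python
import ast

# Stand-in for a legacy routine nobody fully understands anymore.
LEGACY_SOURCE = '''
def price_order(qty, unit_cost, discount=0.0):
    return qty * unit_cost * (1 - discount)
'''

def extract_meta_model(source: str) -> dict:
    """Map each function name to its parameter names.

    This captures the "right inputs and outputs" view of the old code,
    which can then seed prompts asking a model to regenerate the logic
    in a new, maintainable language.
    """
    tree = ast.parse(source)
    return {
        node.name: [arg.arg for arg in node.args.args]
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }

meta = extract_meta_model(LEGACY_SOURCE)
assert meta == {"price_order": ["qty", "unit_cost", "discount"]}
```

Note that, per the interview, the old code is only read, never modified; the extracted meta model feeds the generation of replacement code.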

Are there any other tips you can offer people who are considering using AI for code generation? “I think it’s a great time, because the technology is super exciting. There’s a lot of different choices for people to play around with and I think there’s a lot of low-hanging fruit. I feel like generative AI is also a huge benefit, because it’s introduced or reintroduced people to the concept that AI is actually attainable.

“There’s a lot of people who think AI is like the top of the pyramid technology. They think, I’m going to get there, but only if I clean up all of my data and I get all my data ingested correctly. Then, if I go through all those steps, I can use AI. But the pervasiveness and the attractiveness of generative AI is that it is attainable even before that. It’s OK to start now. You don’t have to clean up everything before you get to that point. You can actually iterate and gain improvements along the way.

“If you look at the software development life cycle, there are a lot of areas right now that could be low-risk uses of AI. I wouldn’t even say just productivity. It’s just about it being more valuable to the outcomes that you want to generate, and so it’s a good opportunity to start with. It’s not the be-all and end-all. It’s not going to be your citizen developer, you know. But it augments your team. It increases the productivity. It reduces the toil. So, it’s just a good time to get started.”

Developer, Engineer, Generative AI
Category: Hacking & Security

The EU has decided to open up iPadOS

29 April, 2024 - 17:25

The EU has given Apple just six months to open up iPads in the same way it’s been forced to open up iPhones in Europe. The decision follows an EU determination that the iPad — which leads but does not dominate the tablet market — should be seen as a “gatekeeper.”

Apple will not have much time to comply.

What’s really interesting, as noted by AppleInsider, is the extent to which the decision to force Apple to open up iPadOS seems to have been made even though the EU’s lead anti-competition regulator, Margrethe Vestager, says the company doesn’t actually meet the criteria for enforcement.

It doesn’t meet the threshold, so we’ll do it anyway

“Today, we have brought Apple’s iPadOS within the scope of the DMA obligations,” said Vestager.  “Our market investigation showed that despite not meeting the thresholds, iPadOS constitutes an important gateway on which many companies rely to reach their customers.”

This triumph of ideology is just the latest poor decision from the trading bloc and comes as Apple gets ready to introduce new software, features, and artificial intelligence to its devices at its Worldwide Developers Conference in June.

With that in mind, I expect Apple’s software development teams need Europe’s latest decision about as much as the rest of us need an unexpected utility bill. That said, I imagine the challenge has not been entirely unexpected.

Sour grapes?

To some extent you have to see that Europe is playing defense.

Not only has it lost all advantages in space research to Big Tech firms such as SpaceX, but the continent has arguably failed to spawn a significant homegrown Big Tech competitor. This leaves Europe reliant on US technology firms, so it’s clear the EU is attempting to loosen the hold US firms have on digital business in Europe through the way the Digital Markets Act is being applied.

The EU isn’t alone; US regulators are equally determined to dent the power Apple and other major tech firms hold. Fundamental to many of the arguments made is the claim that consumers will see lower prices as a result of more open competition, but I’m highly doubtful that will happen.

So, what happens next?

Apple will likely attempt to resist the EU call to open up the iPad, but will eventually be forced to comply. Meanwhile, as sideloading intensifies on iPhones, we will see whether user privacy and safety do indeed turn out to be compatible with sideloading.

In an ideal world, the EU would hold off on any action involving iPads pending the results of that experiment. It makes sense for regulators and Apple to work constructively together to protect against any unexpected consequences as a result of the DMA before widening the threat surface. 

Perhaps user security isn’t something regulators take seriously, even though government agencies across the EU and elsewhere are extremely concerned at potential risks. Even in the US, regulators seem to want us to believe Apple’s “cloak” of privacy and security is actually being used to justify anti-competitive behavior. 

Do the benefits exceed the risks?

Experientially, at least, there’s little doubt that platforms (including the Mac) that support sideloading face more malicious activity than those that don’t. Ask any security expert and they will tell you that in today’s threat environment, it’s only a matter of time until even the most secure systems are overwhelmed. So it is inevitable some hacker somewhere will find a way to successfully exploit Apple’s newly opened platforms.

It stands to reason that ransomware, adware, and fraud attempts will increase and it is doubtful the EU will shoulder its share of the burden to protect people against any such threats that emerge as a result of its legislation.

For most consumers, the biggest benefit will be the eventual need to purchase software across multiple storefronts, and to leave valuable personal and financial details with a wider range of payment processing firms.

The joy I personally feel at these “improvements” is far from tangible.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Apple App Store, iPad, Mobile
Category: Hacking & Security

A new Windows 11 backup and recovery paradigm?

29 April, 2024 - 12:00

A lot has changed regarding built-in backup and recovery tools in Windows 11. Enough so, in fact, that it’s not an exaggeration to talk about a new approach to handling system backup and restore, as well as system repair and recovery.

That’s why the title for this article uses the “P-word” (paradigm). This is a term much-beloved in the USA in the 1970s and ’80s, plucked from Thomas Kuhn’s The Structure of Scientific Revolutions (1962) to explain how and why radical changes happen in science.

Indeed, a list of what’s new in Windows 11 by way of backup and recovery helps set the stage for considering a veritable paradigm shift inside this latest desktop OS version:

  • The Windows Backup app, which replaces the obsolete “Backup and Restore (Windows 7) utility,” still present in Windows 10 but absent in Windows 11
  • A revamped approach inside Settings > System > Recovery, which now includes both “Fix problems using Windows Update” and “Reset this PC” options to attempt repairs to an existing OS or reinstall Windows 11 from scratch, respectively

If these elements are combined with proper use of OneDrive, they can cover the gamut of Windows backup, restore, repair, and recovery tasks. Remarkable!

Defining key R-words: Repair, Restore, Recovery, and Reset

Before we dig into the details, it’s important to define these “R-words” so that what Microsoft is doing with Windows 11 backup and recovery options makes sense.

  • Repair: Various methods for fixing Windows problems or issues that arise from a working but misbehaving OS or PC. For what it’s worth, this term encompasses the “Fix problems without resetting your PC” button in Settings > System > Recovery shown in Figure 1; it calls the native, built-in Windows 11 Get Help facility.

Figure 1: Although it’s labeled Recovery, this Windows 11 Settings pane shows Reset explicitly and Repair implicitly.

Ed Tittel / IDG

  • Restore: This is usually defined as putting things back the way they were when a particular backup was made. It is NOT shown in Figure 1, though you can get to a set of Windows Backup data that provides restore information through Advanced startup and through other means.
  • Recovery: Though it has a general meaning, Microsoft tends to view Recovery as a set of operations that enables access to a non-booting Windows PC, either to replace its boot/system image (“Reset this PC” in Figure 1 — see next item) or to boot to alternate media or the Windows Recovery environment, a.k.a. WinRE (“Advanced startup” in Figure 1) to attempt less drastic repairs: reboot from external media, attempt boot or partition repairs, replace corrupted system files, and a great deal more.
  • Reset: Replace the current disk structure on the system/boot drive with a new structure and a fresh, new Windows 11 install, keeping or discarding personal files (but not applications) as you choose.

All of the preceding R-words are intertwined. And Restore is closely related to Backup — that is, one must first perform a backup so that one has something to restore later on.

Introducing Windows Backup

If you type “Windows Backup” into the Windows 11 Start menu’s search box for versions 23H2 or later (publicly released October 31, 2023), you should see something like Figure 2 pop up:

Figure 2: Introducing Windows Backup in Windows 11 23H2.

Ed Tittel / IDG

This simply shows the Start menu entry for the Windows Backup app, which I’ll abbreviate as WB (with apologies to Warner Brothers). Interestingly enough, WB is not packaged as an app with an MSIX file, nor is it available through the Windows Store. Its setup options when launched tell you most of what you need to know, shown in Figure 3. The rest becomes clear as you drill down into its various subheadings, as I’ll explain soon.

Figure 3: The various Windows Backup options/selections let you protect/copy folders, apps, settings, and credentials. That’s about everything!

Ed Tittel / IDG

By default, here’s how things shake out in WB:

  • Folders covers the Desktop, Documents, Pictures, Videos, and Music items (a.k.a. “Library folders”) from the logged-in user’s file hierarchy. On first run, you may use a toggle to turn backup on or off. (Note: a valid Microsoft Account, or MSA, with sufficient available OneDrive storage is required to make use of WB.)
  • Apps covers both old-style .exe apps and newer MSIX apps (like those from the Microsoft Store). It will also capture and record app preferences, settings, and set-up information. This is extremely important, because it provides a way to get back apps and applications, and related configuration data, if you perform a “Reset this PC” operation on the Recovery pane shown in Figure 1 above.
  • Settings covers a bunch of stuff. That’s no surprise, given the depth and breadth of what falls under Settings’ purview in Windows, including: accessibility, personalization, language preferences and dictionary, and other Windows settings.
  • Credentials covers user account info, Wi-Fi info (SSIDs, passwords, etc.), and passwords. This handles all the keys needed to get into apps, services, websites, and so forth should you ever perform a restore operation.

Once you’ve made your folder selections and turned everything on, Windows Backup is ready to go. All you need to do is hit the Back up button at the bottom right in Figure 3, and your first backup will be underway. The first backup may take some time to complete, but when it’s finished you’ll see status info at the top of the Windows Backup info in Settings > Accounts > Windows backup, as shown in Figure 4.

Figure 4: Status information for WB appears under Settings > Accounts > Windows backup (credentials do get backed up but are not called out).

Ed Tittel / IDG

Please note again that all backed up files and information go to OneDrive. Thus, internet and OneDrive access are absolutely necessary for Windows Backup to make backup snapshots and for you to be able to access them for a restore (or new install) when they’re needed. This has some interesting wrinkles, as I’ll explain next.

The Microsoft support page “Getting the most out of your PC backup” explains Windows Backup as follows:

Your Microsoft account ties everything together, no matter where you are or what PC you’re using. This means your personalized settings will be remembered with your account, and your files are accessible from any device. You can unlock premium features like more cloud storage, ongoing technical support, and more, by purchasing a Microsoft 365 subscription for your account.

That same document also cites numerous benefits, including:

  • easy, secure access to files and data anywhere via OneDrive
  • simple transfer to a new PC as and when desired
  • protection “if anything happens to your PC” without losing precious files

This is why Windows Backup and the other tools offer a new backup paradigm in Windows 11. Used together through a specific MSA, you can move to a new PC when you want to, or get your old one back when you need to.

The restore process, WB-style

Microsoft has a support note that explains and describes WB, including initial setup, regular use, and how to restore. This last topic, entitled “How do I restore the backup?” is not just the raison d’être for backup, it’s also well worth reading closely (perhaps more than once).

Let me paraphrase and comment on that document’s contents. Backup makes itself available whenever you work on a new PC, or when you need to reinstall Windows, as you are setting it up. Once you log in with the same MSA to which the backup belongs, it will recognize that backups for the account are available to you, and the tool will interject itself into the install process to ask if there’s a backup you would like to restore. This dialog is depicted in Figure 5.

Figure 5: Once logged into an MSA, the Windows installer will offer to restore backup it keeps for that account to the current target PC.

Ed Tittel / IDG

For users with multiple PCs (and backups) the More options link at center bottom takes you to a list of options, from which you can choose the one you want. Once you’ve selected a backup, the Windows installer works with WB to copy its contents into the install presently underway. As Microsoft puts it, “When you get to your desktop everything will be right there waiting for you.”

I chose a modestly complex backup from which to restore my test virtual machine; it took less than 2 minutes to complete. That’s actually faster than my go-to third-party backup software, Macrium Reflect — but it occurs in the context of an overall Windows 11 installation, so the total time required is on par (around 7 minutes, or 9 minutes including initial post-login setup).

WB comes with a catch, however…

You’d think that capturing all the app info would mean that apps and applications would show up after a restore, ready to run. Not so. Look at Figure 6, which shows the Start menu entries for CrystalDiskInfo (a utility I install as a matter of course on my test and production PCs to measure local disk performance).

Figure 6: Instead of a pointer to the actual CrystalDiskInfo apps (32- & 64-bits), there’s an “Install” pointer!

Ed Tittel / IDG

Notice the Install link underneath the 32- and 64-bit versions. And indeed, I checked all added apps and applications I had installed on the backup source inside the restored version and found the same thing.

Here’s the thing: Windows Backup makes it easy to bring apps and applications back, but it does take some time and effort. You must work through the Start menu, downloading and installing each app, to return them to working order. That’s not exactly what I think a restore operation should be. IMO, a true restore brings everything back the way it was, ready to run and use as it was when the backup was made.

WB and the OneDrive limitation

There’s another potential catch when using WB for backup and restore. It won’t affect most users. But those who, like me, use a single MSA on multiple test and production machines must consider what adding WB into the mix means.

OneDrive shares MSA-related files across multiple PCs by design and by default, but WB saves backups on a per-PC basis. Thus, when performing a WB restore, you must take care to use the More options link in Figure 5 to select the latest snapshot from the specific Windows PC you want. If you’re restoring the same PC to itself, so to speak, click Restore from this PC (Figure 5, lower right) instead.

Overall, Windows Backup is a great concept and does make it easy to maintain system snapshots. The restore operation is incomplete, however, as I just explained. Now, let’s move on to Windows repair, via the “Reinstall now” option shown in Figure 1 (repeated below in Figure 7).

More about “Reset this PC” and Windows repair

Looking back at Figure 1 (or below to Figure 7), you can see that “Reset this PC” is labeled as a Recovery option, along with the other recovery options labeled “Fix problems…” above it. The idea is that Reset this PC is an option of last resort, because it wipes out the existing disk image and replaces it with a fresh, clean, new one. WB then permits admins or power users to draw from a WB backup for a specific PC in the cloud to restore an existing Windows setup — or not, perhaps to clean up the PC for handoff to another user or when preparing it for surplus sell-off or donation.

Figure 7: Recovery options include two “Fix problems…” options and “Reset PC.”

Ed Tittel / IDG

As described earlier in this article, “Fix problems without resetting your PC” provides access to Windows 11’s built-in “Get Help” troubleshooters, while the “Reinstall now” option provides the focus for the next section. All this said, “Reset this PC” provides a fallback option when the current Windows install is not amenable to those other repair techniques.

Using Windows Update to perform a repair install

Earlier this year, Microsoft introduced a new button into its Settings > System > Recovery environment in Windows 11 23H2. As shown in Figure 7 above, that button is labeled “Reinstall now” and accompanies a header that reads “Fix problems using Windows Update.” It, too, comes with interesting implications. Indeed, it’s a giant step forward for Windows repair and recovery.

What makes the “Reinstall now” button so interesting is that it shows Microsoft building into Windows itself a standard OS repair technique that’s been practiced since Windows 10 came along in late July 2015: a “repair install” or “in-place upgrade install,” which overwrites the OS files while leaving user files, apps, and many settings and preferences in place. (See my 2018 article “How to fix Windows 10 with an in-place upgrade install” for details on how the process works and the steps involved to run such an operation manually.)

But there’s more: Windows 11’s “Reinstall now” button matches the reinstall image to whatever Windows edition, version and build it finds running on the target PC when invoked. That means behind the scenes, Microsoft is doing the same work UUP dump does to create Windows ISOs for specific Windows builds. This is quite convenient, because Windows Recovery identifies what build to reinstall, and then creates and installs a matching Windows image.

Indeed, this process takes time, because it starts with the current base for some Windows feature release (e.g., 22H2 or 23H2), then performs all necessary image manipulations to fold in subsequent updates, patches, fixes and so on. For that reason, it can take up to an hour for such a reinstall to complete on a Windows 11 PC, whereas running “setup.exe” from a mounted ISO from the Download Windows 11 page often completes in 15 minutes or less. But then, of course, you’d have to run all outstanding updates to catch Windows up to where you want it to be. That’s why there’s a time differential.

Bottom line: the new “Reinstall now” button in Windows 11 23H2 makes performing an in-place upgrade repair install dead simple, saving users lots of foreknowledge, thought, and effort.

If everything works, the new paradigm is golden

Windows Backup (WB) used in conjunction with a Microsoft account (MSA) and OneDrive is about as simple and potentially foolproof as backup and restore get.

Do I think this new paradigm of using WB along with OneDrive, installer changes, and so forth works to back up and restore Windows 11? Yes, I do — and probably most of the time. Am I ready to forgo other forms of backup and restore to rely on WB and its supporting cast alone? By no means! I find that third-party image backup software is accurate, reliable, and speedy when it comes to backing up and restoring Windows PCs, including running versions of all apps and applications.

In a recent test of the “Reinstall now” button from Settings > Recovery in Windows 11, it took 55 minutes for that process to complete for the then-current Windows image. I also used WB to restore folders, apps, settings, and credentials. That took at least another 2-3 minutes, but it left only pointers to app and application installers, with additional effort needed to download and reinstall those items. (That takes about an hour for my usual grab-bag of software programs.)

Using my favorite image backup and recovery tool, Macrium Reflect, and booting from its Rescue Media USB flash drive, I found and restored the entire C: drive on a test PC in under 7 minutes. This let me pick a backup from any drive on the target PC (or my network), replace all partitions on the system/boot disk (e.g., EFI, MSR, C:\Windows, and WinRE), and end up with a complete working set of applications. I didn’t need internet access, an MSA, or OneDrive storage to run that restore, either.

Worth having, but not exclusively

Microsoft has made big and positive changes to its approach to backup and recovery. Likewise for repair, with the introduction of the “Reinstall now” button that gets all files from Windows Update. These capabilities are very much worth having, and worth using.

But these facilities rely on the Microsoft Windows installer to handle PC access and repair. They also proceed from the optimistic assumption that admins or power users can get machines working well enough that a successful MSA login can drive the restore process from OneDrive in the cloud. When it works, that’s great.

But, given the very real possibility that access issues, networking problems, or other circumstances outside the installer’s control might arise, I believe other backup and restore options remain necessary. As the saying goes, “You can never have too many backups.”

Thus, I’m happily using WB and am ready to restore as the need arises. But I’m not abandoning Macrium Reflect, with its bootable repair disk, backup file finder, boot repair capabilities, and so forth. That’s because I don’t see the WB approach as complete or always available.

You are free, of course, to decide otherwise (but I’d recommend against that). And the new WB approach, the new in-place repair facility, and “Reset this PC” most definitely all have a place in the recovery and repair toolbox. Put them to work for you!

Backup and Recovery, Windows, Windows 11
Category: Hacking & Security

Q&A: Georgia Tech dean details why the school needed a new AI supercomputer

29 April 2024 - 12:00

Like many universities, Georgia Tech has been grappling with how to offer students the training they need to prepare them for a recent sea change in IT job markets — the arrival of generative AI (genAI).

Through a partnership with chipmaker Nvidia, Georgia Tech’s College of Engineering built a supercomputer dubbed AI Makerspace; it uses 20 Nvidia HGX H100 servers powered by 160 Nvidia H100 Tensor Core GPUs (graphics processing units).

Those GPUs are powerful — a single Nvidia H100 GPU would need just one second to handle multiplication operations that would take the school’s 50,000 students 22 years to complete. So, 160 of those GPUs give students and professors access to advanced genAI, AI, and machine learning creation and training. (The move also spurred Georgia Tech to offer new AI-focused courses and minors.)
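Taking that comparison at face value, a quick back-of-the-envelope check (our own arithmetic, assuming each person performs one multiplication per second, nonstop) shows the claim is at least plausible:

```python
# Sanity check of the 50,000-students-for-22-years comparison.
# Assumption (ours, not the article's): one multiplication per person per second.
students = 50_000
years = 22
seconds_per_year = 365.25 * 24 * 3600  # ~3.16e7

total_ops = students * years * seconds_per_year
print(f"{total_ops:.2e} operations")  # ~3.47e13

# If one H100 does all of that in a second, the implied rate is roughly
# 35 trillion operations per second, i.e. tens of TFLOPS.
implied_rate_tflops = total_ops / 1e12
print(f"~{implied_rate_tflops:.0f} trillion ops/s")
```

Tens of teraflops is the right order of magnitude for an H100’s published double-precision throughput, so the comparison, while fuzzy about what counts as one “multiplication operation,” is not outlandish.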

Announced two weeks ago, the AI Makerspace supercomputer will initially be used by Georgia Tech’s engineering undergraduates. But it’s expected to eventually democratize access to computing resources typically prioritized for research across all colleges.

Computerworld spoke with Matthieu Bloch, the associate dean for academics at Georgia Tech’s College of Engineering, about how the new AI supercomputer will be used to train a new generation of AI experts.

The following are excerpts from that interview:

Tell me about the Makerspace project and how it came to be? “The Makerspace is really the vision of our dean, Raheem Beyah, and the school chair of Electrical and Computer Engineering (ECE), Arijit Raychowdhury, who really wanted to put AI in the hands of our students.

“In 2024 — in the post ChatGPT world — things are very different from the pre-ChatGPT world. We need a lot of computing power to do anything that’s meaningful and relevant to industry. And in a way, the devil is out of the box. People see what AI can do. But I think to get to that level of training, you need infrastructure.

Makerspace’s Nvidia H100 Tensor Core GPUs

Georgia Tech College of Engineering

“The name Makerspace also comes from this culture we have at Georgia Tech of these maker spaces, which are places where our students get to tinker, both within the classroom and outside the classroom. The Makerspace was the idea to bring the tools that you need to do AI in a way that’s relevant to do meaningful things today. So, right now, where we’re at is we’ve partnered with Nvidia to essentially offer to students a supercomputer. I mean, that’s what it is.

“What makes it unique is that it’s meant for supporting students. And right now it’s in the classroom. We’re still rolling it out. We’re in phase one. So, the idea is that the students in the classroom can work on AI projects that are meaningful to industry — [not just] problems that are interesting, you know, from a pedagogical perspective, but don’t mean a whole lot in an industry setting.”

Tell me a bit about the projects they’ve been working on with this. “I can give you a very concrete example. ChatGPT is a very typical, a very specific form of AI called generative AI. You know, it’s able to generate. In the case of ChatGPT, [that means] text in response to prompts. You might have seen a generative model that generates pictures. I think these were very popular and whatnot. And so these are the kind of things our students can do right now, …generate anything that would be, say, photo-realistic.

“You need pretty hefty computing power to train your model and then test that it’s working properly. And so that’s what our students can do. Just to give you an idea of how far we’ve come along, before we had the AI Makerspace, our students were relying largely on something called Google Colab. Colab is Google making some compute resources freely accessible for use. They’re really giving us the resources they don’t use or don’t sell to their clients. So it’s like the crumbs that remain.

“It’s very nice of them [Google] to do that, but you could only work with very [limited resources], say for training on something like 12,000 images. Now you can, for instance, train a generative model on a data set with like one million images. So you can really scale up by orders of magnitude. And then you can start generating these photo-realistic pictures that you could not generate before. That’s the most visual example I can give you.”

Can you tell me a little bit about the genAI projects the students are working on? How good is the technology at producing the results they want? “It’s a complicated question to answer. I mean, it has many layers. We’ve just launched it, like literally, the AI Makerspace was open officially two weeks ago. So right now it’s really used at scale in the classroom. The students in that class are learning how to do machine learning. [The students] have to get the data. [They] have to learn how to train a model. The students have homework projects, which consists of this fairly sophisticated model that they have to train, and that they have to test.

“Now we have a vision beyond that, what we call phase two of the Makerspace. We’re doubling the compute capacity. The idea now is that we’re going to open that to senior design projects. We’re gonna open that to something we call vertically integrated projects, in which students are essentially doing long-term research with faculty advisors over multiple years. Our students are going to do many things — certainly all of [the] engineering [school].

“We’ve given incentives to a lot of faculty to create a lot of new courses throughout the College of Engineering for AI and ML for what matters to their field. For instance, if you’re an electrical engineer, there’s a lot of hardware to it, you know you have a model for that. How do you make the model smaller so that you can put it in hardware? That’s one very tangible question that the students would ask. But if they’re, say, mechanical engineers, they might use it differently.  Maybe for them what generative AI could do is help them generate 3D models, think about structures that they would not think about naturally. And you can decline that model. The Makerspace is a massive tool. But how the tool is used is really a function of the specific domain. The goal, of course, is for Makerspace to be available beyond engineering.

“It’s already being used by our College of Computing, and we’re hoping that our colleagues in, say, the College of Business will see the value, because they haven’t used AI yet — perhaps for financial models, predicting whether to sell or buy a stock. I think the sky is the limit. There’s no one use of AI through Makerspace. It’s an infrastructure that provides the tools. And then these tools find declinations in all different areas of expertise.”

Why is it important to have this technology at the school for students to learn about AI? “The way we’ve come to articulate this is as follows: We’re not believers in doomsday scenarios, where AI is going to generate terminators that are going to eradicate humanity. Okay, that’s not how we’re thinking about it.

“AI is definitely going to change things. And we think that AI is certainly going to displace a few people. I think the humans enhanced by AI will start displacing humans who don’t use AI.

“I think the way a lot of the discussion has been shaped since ChatGPT was released to the world, in universities there’s sometimes a lot of fear. Are students cheating on their essays? Are students cheating on this, cheating on that? I had these discussions with my colleagues in computing. We have an intro to computing class, where they’re cheating to write their code, which I think is not the right approach to it. But, the devil is out of the box. It’s a tool that’s here, and we have to learn how to use it.

“If I can give you my best analogy: I drive my car. I don’t know how my car really works. I mean, I was never a mechanical or electrical engineer. I sort of know what it takes [for a car to run], but I’m unable to fix it. But that doesn’t mean I can’t drive it. And I think we’re at that stage with AI tools, where one needs to know how to use them because you don’t want to be the person riding a bicycle when everybody else has a car.

“Not everyone needs to be a mechanic, but everyone needs a car. And so I think we want every student at Georgia Tech to know how to use AI, and what that means for them would be different depending on their specialty, their major. But these are tools, and you need to have played with them to really start mastering them.”

In what way has AI expanded Georgia Tech’s curriculum? “We were lucky in the sense that [we’re] building that infrastructure from new. But thinking about AI, Georgia Tech has been doing it for decades. Our faculty is very research focused. They do state-of-the-art research and AI…was always there in the background — the roots of AI. We had a lot of colleagues who actually were doing machine learning without saying it in these terms.

“Then when deep learning started appearing, people were ready to grasp that. So, we were already thinking about doing it in the labs, and the integration in the curriculum was already slowly happening. And so what we decided to do was to accelerate that so the Makerspace…accelerates the other mechanisms we’ve had to give incentives to faculty, to rethink the curriculum with AI and ML in mind.”

So what AI courses have you launched? “I can give you two examples that we’ve launched, which are, you know, very new. But I think they’ve been quite successful already. One is we’ve officially launched an AI minor.

“The great thing about this AI minor [is that it] is a way for students to take a series of courses with a coherent and unified theme, and they get credit for that on their diploma and their transcript. This minor was designed as a collaboration right now between the College of Engineering and the College of Liberal Arts.

“Then we have the ethics and policy piece. Students need to take a specially designed course on AI Ethics and AI policy. We’re thinking very holistically. AI is a technology play, but if you just train engineers to do the technology piece alone, maybe then the doomsday-Terminator scenario is a likely outcome.

“We want our students to think about the use of AI because it’s technology that can have many uses [and problems associated with it]. We talk about deep fakes. We’re worried about it for all sorts of political reasons.

“The other thing we’ve done in the College of Engineering is essentially incentivized faculty to create new undergraduate courses related to AI and ML but relevant to their own disciplines. I literally [just made the announcement], and the college has approved 10 new courses or significantly revamped courses. So, what that means is that we have courses on machine learning for smart cities in civil and environmental engineering, and a course in chemical processes in chemical and bioengineering, where they’re using AI and ML for completely different things. That’s how we’re thinking of AI. It’s a tool. So the courses need to embrace that tool.”

Are students already using genAI to assist in creating applications — so software engineering and development? “Officially or unofficially? I don’t have a good answer, because the truth is, I don’t know. But what I know is that our students are using it with or without us. You know they are using generative AI because I’m willing to bet they all have a subscription to ChatGPT.

“Now in the context of the Makerspace, this is a resource [with which] you can start doing all sorts of things. Our students are using it to write lines of code, absolutely.”

So what would you say is the most popular use right now of the AI Makerspace? “We haven’t officially launched it at scale for very long, so I can’t attest to that. It’s been used largely in the classroom setting for the kind of homework students could not even dream of doing before.

“We’re going to launch it and use it over the summer for an entrepreneurship program called Create X, which students can use to take ideas through prototyping and potentially think about building startups out of these. So that’s going to be the primary use over the summer, and we’re testing it over these few weeks in the context of a hackathon in partnership with Nvidia, where teams come with big problems that they want to solve. And we want to accelerate their science, to use Nvidia’s words, by teaching them how to use that Makerspace.”

CPUs and Processors, Education Industry, Generative AI, Natural Language Processing

Dropbox adds end-to-end encryption for team folders

26 April 2024 - 13:16

Dropbox now offers end-to-end encryption and key management for customers on certain paid plans, part of a range of updates to the file sharing application announced this week. 

Customers’ files are already encrypted “at rest” using the 256-bit Advanced Encryption Standard (AES), said Dropbox, but the end-to-end encryption integrated into team folders offers an added layer of security. 

The change means that only the sender and recipient can access content, with “not even Dropbox” able to view customers’ files, the company said in a blog post Wednesday. 
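The principle behind that claim is easy to sketch: content is encrypted on the sender’s device with a key the service never holds, so the server only ever stores ciphertext. Here is a minimal Python illustration of the idea, using a toy XOR one-time pad purely for demonstration (this is not Dropbox’s implementation, which relies on AES):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR each byte with the key (XOR is its own inverse).
    Demonstration only; real systems use authenticated ciphers such as AES-GCM."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

# The key exists only on the two endpoints; the service never sees it.
message = b"Q3 roadmap draft"
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(message, key)    # this is all the server ever stores
assert ciphertext != message             # the server cannot read the content

recovered = xor_cipher(ciphertext, key)  # only a key-holder can decrypt
assert recovered == message
```

The tradeoff Dropbox warns about below follows directly from this design: because the server cannot read the content, server-side conveniences such as sharing with users outside the team become harder to support.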

Dropbox said it will also provide customers with access to encryption keys, managed by FIPS 140-2 Level 3 key management services.

Information on how to activate and manage team folder encryption is available on the Dropbox website. The company warned that end-to-end encryption restricts certain features in the app, such as the ability to share files with users outside of a team, and might not be suitable for all files stored in a Dropbox account.

Other security features include the ability to manage team membership and invites from the Dropbox admin dashboard, and an updated Trust Center that contains security and compliance information related to Dropbox products. 

The security features are now available to customers on Dropbox Advanced, Business Plus, and Enterprise plans.

Dropbox announced several other new features as part of the latest release.

It will be easier to collaborate with colleagues on certain Microsoft files from within the Dropbox application, with a co-authoring feature that lets multiple users edit Word, Excel, and PowerPoint documents at the same time. Users can also see who’s working on a document and any edits made in real time. (That feature is currently in beta.)

There’s also an integration between Dropbox Replay and Microsoft OneDrive, which lets users pull files from Microsoft’s file storage platform into the video and audio collaboration tool more easily for reviews and approvals. 

Dropbox Replay will also get new features, including the ability to review and approve additional file types such as PDF and PSD files, integration with music production application Avid Pro Tools, and dynamic watermarking to help protect proprietary content.

Other updates include changes to Dropbox’s website UI, following a revamp last October. The new capabilities let users preview files more easily, pin favorite files to the navigation bar, and access suggested quick actions for files.

Cloud Storage, Collaboration Software, Productivity Software, Vendors and Providers

Android versions: A living history from 1.0 to 15

26 April 2024 - 11:45

What a long, strange trip it’s been.

From its inaugural release to today, Android has transformed visually, conceptually and functionally — time and time again. Google’s mobile operating system may have started out scrappy, but holy moly, has it ever evolved.

Here’s a fast-paced tour of Android version highlights from the platform’s birth to present. (Feel free to skip ahead if you just want to see what’s new in Android 14 or the actively-under-development Android 15 beta release.)

Android versions 1.0 to 1.1: The early days

Android made its official public debut in 2008 with Android 1.0 — a release so ancient it didn’t even have a cute codename.

Things were pretty basic back then, but the software did include a suite of early Google apps like Gmail, Maps, Calendar, and YouTube, all of which were integrated into the operating system — a stark contrast to the more easily updatable standalone-app model employed today.

The Android 1.0 home screen and its rudimentary web browser (not yet called Chrome).

T-Mobile

Android version 1.5: Cupcake

With early 2009’s Android 1.5 Cupcake release, the tradition of Android version names was born. Cupcake introduced numerous refinements to the Android interface, including the first on-screen keyboard — something that’d be necessary as phones moved away from the once-ubiquitous physical keyboard model.

Cupcake also brought about the framework for third-party app widgets, which would quickly turn into one of Android’s most distinguishing elements, and it provided the platform’s first-ever option for video recording.

Cupcake was all about the widgets.

Android Police

Android version 1.6: Donut

Android 1.6, Donut, rolled into the world in the fall of 2009. Donut filled in some important holes in Android’s center, including the ability for the OS to operate on a variety of different screen sizes and resolutions — a factor that’d be critical in the years to come. It also added support for CDMA networks like Verizon’s, which would play a key role in Android’s imminent explosion.

Android’s universal search box made its first appearance in Android 1.6.

Google

Android versions 2.0 to 2.1: Eclair

Keeping up the breakneck release pace of Android’s early years, Android 2.0, Eclair, emerged just six weeks after Donut; its “point-one” update, also called Eclair, came out a couple months later. Eclair was the first Android release to enter mainstream consciousness thanks to the original Motorola Droid phone and the massive Verizon-led marketing campaign surrounding it.

Verizon’s “iDon’t” ad for the Droid.

The release’s most transformative element was the addition of voice-guided turn-by-turn navigation and real-time traffic info — something previously unheard of (and still essentially unmatched) in the smartphone world. Navigation aside, Eclair brought live wallpapers to Android as well as the platform’s first speech-to-text function. And it made waves for injecting the once-iOS-exclusive pinch-to-zoom capability into Android — a move often seen as the spark that ignited Apple’s long-lasting “thermonuclear war” against Google.

The first versions of turn-by-turn navigation and speech-to-text, in Eclair.

Google

Android version 2.2: Froyo

Just four months after Android 2.1 arrived, Google served up Android 2.2, Froyo, which revolved largely around under-the-hood performance improvements.

Froyo did deliver some important front-facing features, though, including the addition of the now-standard dock at the bottom of the home screen as well as the first incarnation of Voice Actions, which allowed you to perform basic functions like getting directions and making notes by tapping an icon and then speaking a command.

Google’s first real attempt at voice control, in Froyo.

Google

Notably, Froyo also brought support for Flash to Android’s web browser — an option that was significant both because of the widespread use of Flash at the time and because of Apple’s adamant stance against supporting it on its own mobile devices. Apple would eventually win, of course, and Flash would become far less common. But back when it was still everywhere, being able to access the full web without any black holes was a genuine advantage only Android could offer.

Android version 2.3: Gingerbread

Android’s first true visual identity started coming into focus with 2010’s Gingerbread release. Bright green had long been the color of Android’s robot mascot, and with Gingerbread, it became an integral part of the operating system’s appearance. Black and green seeped all over the UI as Android started its slow march toward distinctive design.

It was easy being green back in the Gingerbread days.

JR Raphael / IDG

Android 3.0 to 3.2: Honeycomb

2011’s Honeycomb period was a weird time for Android. Android 3.0 came into the world as a tablet-only release to accompany the launch of the Motorola Xoom, and through the subsequent 3.1 and 3.2 updates, it remained a tablet-exclusive (and closed-source) entity.

Under the guidance of newly arrived design chief Matias Duarte, Honeycomb introduced a dramatically reimagined UI for Android. It had a space-like “holographic” design that traded the platform’s trademark green for blue and placed an emphasis on making the most of a tablet’s screen space.

Honeycomb: When Android got a case of the holographic blues.

JR Raphael / IDG

While the concept of a tablet-specific interface didn’t last long, many of Honeycomb’s ideas laid the groundwork for the Android we know today. The software was the first to use on-screen buttons for Android’s main navigational commands; it marked the beginning of the end for the permanent overflow-menu button; and it introduced the concept of a card-like UI with its take on the Recent Apps list.

Android version 4.0: Ice Cream Sandwich

With Honeycomb acting as the bridge from old to new, Ice Cream Sandwich — also released in 2011 — served as the platform’s official entry into the era of modern design. The release refined the visual concepts introduced with Honeycomb and reunited tablets and phones with a single, unified UI vision.

ICS dropped much of Honeycomb’s “holographic” appearance but kept its use of blue as a system-wide highlight. And it carried over core system elements like on-screen buttons and a card-like appearance for app-switching.

The ICS home screen and app-switching interface.

JR Raphael / IDG

Android 4.0 also made swiping a more integral method of getting around the operating system, with the then-revolutionary-feeling ability to swipe away things like notifications and recent apps. And it started the slow process of bringing a standardized design framework — known as “Holo” — all throughout the OS and into Android’s app ecosystem.

Android versions 4.1 to 4.3: Jelly Bean

Spread across three impactful Android versions, 2012 and 2013’s Jelly Bean releases took ICS’s fresh foundation and made meaningful strides in fine-tuning and building upon it. The releases added plenty of poise and polish into the operating system and went a long way in making Android more inviting for the average user.

Visuals aside, Jelly Bean brought about our first taste of Google Now — the spectacular predictive-intelligence utility that’s sadly since devolved into a glorified news feed. It gave us expandable and interactive notifications, an expanded voice search system, and a more advanced system for displaying search results in general, with a focus on card-based results that attempted to answer questions directly.

Multiuser support also came into play, albeit on tablets only at this point, and an early version of Android’s Quick Settings panel made its first appearance. Jelly Bean ushered in a heavily hyped system for placing widgets on your lock screen, too — one that, like so many Android features over the years, quietly disappeared a couple years later.

Jelly Bean’s Quick Settings panel and short-lived lock screen widget feature.

JR Raphael / IDG

Android version 4.4: KitKat

Late-2013’s KitKat release marked the end of Android’s dark era, as the blacks of Gingerbread and the blues of Honeycomb finally made their way out of the operating system. Lighter backgrounds and more neutral highlights took their places, with a transparent status bar and white icons giving the OS a more contemporary appearance.

Android 4.4 also saw the first version of “OK, Google” support — but in KitKat, the hands-free activation prompt worked only when your screen was already on and you were either at your home screen or inside the Google app.

The release was Google’s first foray into claiming a full panel of the home screen for its services, too — at least, for users of its own Nexus phones and those who chose to download its first-ever standalone launcher.

The lightened KitKat home screen and its dedicated Google Now panel.

JR Raphael / IDG

Android versions 5.0 and 5.1: Lollipop

Google essentially reinvented Android — again — with its Android 5.0 Lollipop release in the fall of 2014. Lollipop launched the still-present-today Material Design standard, which brought a whole new look that extended across all of Android, its apps and even other Google products.

The card-based concept that had been scattered throughout Android became a core UI pattern — one that would guide the appearance of everything from notifications, which now showed up on the lock screen for at-a-glance access, to the Recent Apps list, which took on an unabashedly card-based appearance.

Lollipop and the onset of Material Design.

JR Raphael / IDG

Lollipop introduced a slew of new features into Android, including truly hands-free voice control via the “OK, Google” command, support for multiple users on phones and a priority mode for better notification management. It changed so much, unfortunately, that it also introduced a bunch of troubling bugs, many of which wouldn’t be fully ironed out until the following year’s 5.1 release.

Android version 6.0: Marshmallow

In the grand scheme of things, 2015’s Marshmallow was a fairly minor Android release — one that seemed more like a 0.1-level update than anything deserving of a full number bump. But it started the trend of Google releasing one major Android version per year and that version always receiving its own whole number.

Marshmallow’s most attention-grabbing element was a screen-search feature called Now On Tap — something that, as I said at the time, had tons of potential that wasn’t fully tapped. Google never quite perfected the system and ended up quietly retiring its brand and moving it out of the forefront the following year.

Marshmallow and the almost-brilliance of Google Now on Tap.

JR Raphael / IDG

Android 6.0 did introduce some stuff with lasting impact, though, including more granular app permissions, support for fingerprint readers, and support for USB-C.

Android versions 7.0 and 7.1: Nougat

Google’s 2016 Android Nougat releases provided Android with a native split-screen mode, a new bundled-by-app system for organizing notifications, and a Data Saver feature. Nougat added some smaller but still significant features, too, like an Alt-Tab-like shortcut for snapping between apps.

Android 7.0 Nougat and its new native split-screen mode.

JR Raphael / IDG

Perhaps most pivotal among Nougat’s enhancements, however, was the launch of the Google Assistant — which came alongside the announcement of Google’s first fully self-made phone, the Pixel, about two months after Nougat’s debut. The Assistant would go on to become a critical component of Android and most other Google products and is arguably the company’s foremost effort today.

Android versions 8.0 and 8.1: Oreo

Android Oreo added a variety of niceties to the platform, including a native picture-in-picture mode, a notification snoozing option, and notification channels that offer fine control over how apps can alert you.

Oreo adds several significant features to the operating system, including a new picture-in-picture mode.

JR Raphael / IDG

The 2017 release also included some noteworthy elements that furthered Google’s goal of aligning Android and Chrome OS and improving the experience of using Android apps on Chromebooks, and it was the first Android version to feature Project Treble — an ambitious effort to create a modular base for Android’s code with the hope of making it easier for device-makers to provide timely software updates.

Android version 9: Pie

The freshly baked scent of Android Pie, a.k.a. Android 9, wafted into the Android ecosystem in August of 2018. Pie’s most transformative change was its hybrid gesture/button navigation system, which traded Android’s traditional Back, Home, and Overview keys for a large, multifunctional Home button and a small Back button that appeared alongside it as needed.

Android 9 introduces a new gesture-driven system for getting around phones, with an elongated Home button and a small Back button that appears as needed.

JR Raphael / IDG

Pie included some noteworthy productivity features, too, such as a universal suggested-reply system for messaging notifications, a new dashboard of Digital Wellbeing controls, and more intelligent systems for power and screen brightness management. And, of course, there was no shortage of smaller but still-significant advancements hidden throughout Pie’s filling, including a smarter way to handle Wi-Fi hotspots, a welcome twist to Android’s Battery Saver mode, and a variety of privacy and security enhancements.

Android version 10

Google released Android 10 — the first Android version to shed its letter and be known simply by a number, with no dessert-themed moniker attached — in September of 2019. Most noticeably, the software brought about a totally reimagined interface for Android gestures, this time doing away with the tappable Back button altogether and relying on a completely swipe-driven approach to system navigation.

Android 10 packed plenty of other quietly important improvements, including an updated permissions system with more granular control over location data along with a new system-wide dark theme, a new distraction-limiting Focus Mode, and a new on-demand live captioning system for any actively playing media.

Android 10’s new privacy permissions model adds some much-needed nuance into the realm of location data.

JR Raphael / IDG

Android version 11

Android 11, launched at the start of September 2020, was a pretty substantial Android update both under the hood and on the surface. The version’s most significant changes revolve around privacy: The update built upon the expanded permissions system introduced in Android 10 and added in the option to grant apps location, camera, and microphone permissions only on a limited, single-use basis.

Android 11 also made it more difficult for apps to request the ability to detect your location in the background, and it introduced a feature that automatically revokes permissions from any apps you haven’t opened lately. On the interface level, Android 11 included a refined approach to conversation-related notifications along with a new streamlined media player, a new Notification History section, a native screen-recording feature, and a system-level menu of connected-device controls.

Android 11’s new media player appears as part of the system Quick Settings panel, while the new connected-device control screen comes up whenever you press and hold your phone’s physical power button.

JR Raphael / IDG

Android version 12

Google officially launched the final version of Android 12 in October 2021, alongside the launch of its Pixel 6 and Pixel 6 Pro phones.

In a twist from the previous several Android versions, the most significant progressions with Android 12 were mostly on the surface. Android 12 featured the biggest reimagining of Android’s interface since 2014’s Android 5.0 (Lollipop) version, with an updated design standard known as Material You — which revolves around the idea of you customizing the appearance of your device with dynamically generated themes based on your current wallpaper colors. Those themes automatically change anytime your wallpaper changes, and they extend throughout the entire operating system interface and even into the interfaces of apps that support the standard.

Android 12 ushered in a whole new look and feel for the operating system, with an emphasis on simple color customization.

Google

Surface-level elements aside, Android 12 brought a (long overdue) renewed focus to Android’s widget system along with a host of important foundational enhancements in the areas of performance, security, and privacy. The update provided more powerful and accessible controls over how different apps are using your data and how much information you allow apps to access, for instance, and it included a new isolated section of the operating system that allows AI features to operate entirely on a device, without any potential for network access or data exposure.

Android version 13

Android 13, launched in August 2022, is one of Google’s strangest Android versions yet. The software is simultaneously one of the most ambitious updates in Android history and one of the most subtle version changes to date. It’s an unusual duality, and it ultimately all comes down to what type of device you’re using to experience the software.

On the former front, Android 13 introduces a whole new interface design for both tablets and foldable phones, with a renewed focus on creating an exceptional large-screen experience in the operating system itself and within apps (as first observed and reported by Computerworld in January). The enhancements in that area include a fresh framework and series of guidelines for app optimizations along with a more capable split-screen mode for multitasking and a ChromeOS-like desktop-style taskbar that makes it easy to access frequently used apps from anywhere — enhancements we now know were aimed initially at Google’s Pixel Fold and Pixel Tablet devices, though their impact and effects have certainly stretched beyond those two products.

Google

On the latter front, Android 13 also laid the groundwork for the Pixel Tablet to function as a stationary Smart Display and then allow you to detach its screen and use it as a tablet. The software introduced support for a whole new series of shared-surface widgets and screensavers along with an expanded multiuser profile system for that purpose.

On regular phones, Android 13 is much less significant — and in fact, most people probably won’t even notice its arrival. Along with some minor visual refinements, the software introduces an expanded clipboard system that allows you to see and edit text as it’s copied, a native QR code scanning function within the Android Quick Settings area, and a smattering of under-the-hood improvements connected to privacy, security, and performance.

Android version 14

Following a full eight months of out-in-the-open refinement, Google’s 14th Android version landed at the start of October 2023 in the midst of the company’s Pixel 8 and Pixel 8 Pro launch event.

Like the version before it, Android 14 doesn’t look like much on the surface. That’s in part because of the trend of Google moving more and more toward a development cycle that revolves around smaller ongoing updates to individual system-level elements year-round — something that’s actually a significant advantage for Android users, even if it does have an awkward effect on people’s perception of progress.

But despite the subtle nature of its first impression, Android 14 includes a fair amount of noteworthy new stuff. The software introduces a new system for dragging and dropping text between apps, for instance, as well as a new series of native customization options for the Android lock screen.

Android 14 includes options for completely changing the appearance of the lock screen as well as for customizing which shortcuts show up on it.

JR Raphael / IDG

Android 14 provides a number of new improvements to privacy and security, too, including a new settings-integrated dashboard for managing all your health and fitness data and controlling which apps and devices can access it. And it adds in a more info-rich and context-requiring system for seeing exactly why apps want access to your location when they request such a permission.

The software also sports a series of significant accessibility additions, such as an enhanced on-demand magnifier, an easier way to increase font size in any app, improved support for hearing aid connections, and a built-in option to have your phone flash its camera light anytime a new notification arrives.

Beyond that, Android 14 features a first taste of Google’s AI-generated custom wallpaper creator, though that’s available only on the Pixel 8 and Pixel 8 Pro to start.

You can generate all sorts of interesting wallpapers in seconds via Android 14’s AI generator feature — but only on the Pixel 8 or Pixel 8 Pro for now.

JR Raphael / IDG

Android 14 rolled out to Google’s actively supported Pixel devices in early October, on the day of its release, and has been making its way slowly but surely to other Android phones and tablets in the weeks since.

Android version 15 (beta)

Google got its first developer preview of 2024’s Android 15 update out into the universe in late February and moved on to a public beta build of the software in early April.

As typically happens with new Android versions, the beta software is starting out somewhat bare bones — with most of the higher-profile, headline-worthy features still out of sight and under wraps. At this point, the official elements Google’s discussing and making visible revolve primarily around under-the-hood improvements and developer-aimed adjustments.

That being said, we have a pretty good idea of what front-facing features we might see as the development moves forward. Bits of behind-the-scenes code suggest Android 15 could include a new system for stopping overly aggressive notifications, along with a simpler way to tap into Android’s split-screen system, an “Adaptive Touch” feature that’d automatically adjust your screen to be as responsive as possible in different scenarios, and the long-awaited return of the now-12-year-old lock screen widgets concept (in certain contexts, at least).

It also looks increasingly likely that Android 15 will introduce a new “Private Space” option that’ll make it possible to create a separate, locked-down profile for securing especially sensitive apps and data (something some device-makers have added into their versions of the software for some time but that’s never before been a consistent, official part of the platform).

Google is expected to release four Android 15 beta versions in all, with a final release following sometime in the late summer to early fall months — as early as July, potentially, though that timing is always a floating target. (For context, remember: Android 14 didn’t arrive until early October. Android 13 before it showed up in mid-August. Anything’s possible.)

We’ll almost certainly hear much more about Android 15’s progress and at least some of the software’s significant features at the Google I/O company conference in May, so stay tuned: This story is only just getting started, and the biggest news is absolutely still ahead of us.

This article was originally published in November 2017 and most recently updated in April 2024.

Android, Mobile, Small and Medium Business, Smartphones
Kategorie: Hacking & Security

The unspoken obnoxiousness of Google’s Gemini improvements

26 Duben, 2024 - 11:45

Depending on your perspective, two very different stories are a-brewin’ right now here in the land o’ Googley matters.

In one corner, Gemini, the AI-powered chatbot Google galumphed into the world this year, is getting better! I’ve lost count of the number of assorted little improvements that’ve shown up for the tool as of late, as it relates to Android and its increasingly apparent role as the platform’s next-generation virtual assistant.

At the same time, that progression emphasizes an unpleasant and often unspoken truth around Google’s rush to get its jolly green generative AI giant everywhere imaginable: This thing was rolled out way too soon and long before it was ready. It’s being forced down our throats for the sake of Google’s business benefit and at the expense of our user experience. And it’s being awkwardly and hurriedly shaped into a role it wasn’t designed to serve in a rushed-out, piecemeal manner instead of in the thoughtful, meticulous way that would have made it much more palatable for those of us who rely on Google’s services.

That first fork is the story Google wants to tell. The latter one, though, is the reality most of us in the real world are experiencing.

And I hate to be the bearer of bad news, but I’ve got a sneaking feeling it’s only gonna get worse from here.

[No sugar-coating, no nonsense. Get level-headed perspective on the news that matters with my free Android Intelligence newsletter. Three things to know and try every Friday!]

Google, Assistant, and the rocky road to Gemini

Let’s back up for a second to set the scene around this saga — ’cause goodness gracious, is it a strange yet simultaneously very characteristic-seeming tale, for anyone who’s been watching Google long.

Back in 2016, y’see, Google launched one of its biggest company-wide initiatives ever — a saucy little somethin’ called Google Assistant.

It’s hard now to even convey just how big of a deal Assistant’s arrival was at that point and then continued to be over the years that followed, up until extremely recently. Much like a certain plus-symbol-involving service we shall no longer name (pour one out…), Google set out to make Assistant appear everywhere.

From early on, the service connected to countless other Google services — from its integration into every imaginable corner of Android and ChromeOS to its home at the core of all Google-associated smart displays, TV systems, and speakers. It came to Android Auto, even, and showed up as the branding behind all sorts of smart Pixel calling features.

Heck, Google’s presence at tech conventions turned into a literal Assistant playground for years, with endless plastering of Assistant branding everywhere you looked and — well, stuff like this:

Assistant became the common thread across all of Google’s high-profile products, and even that was still just the start of the company’s grand Assistant ambitions. A few short years ago, Google was laying the groundwork for Assistant to evolve into its own fully featured platform.

As a certain greasy-beaked tech philosopher put it at the time:

Assistant is the Google platform of the future. Whether we’re talking about Smart Displays, the Home Hub, or Android devices, the operating system is but a pawn in Assistant’s larger-scale and higher-stakes game.

It seems safe to say Google devoted more time, energy, and likely also money toward building up Assistant as a tool and a brand than any other effort in recent memory. And, not surprisingly, it worked! Those of us who spend our days within the Google ecosystem learned to depend on Assistant for all sorts of tasks, ranging from quick actions across Android to cross-platform memory storage and on-demand smart device control.

And then — well, you know what happened, right? ChatGPT showed up. The tech industry freaked out about its future. And everything related to Assistant went to hell in a handbag.

The Assistant-to-Gemini (d)evolution

The signs of Assistant flailing first started showing up mid-last-year.

I’d gotten message after message from readers of my Android Intelligence newsletter and members of my Intelligence Insider Community asking the same basic question: What in the world is going on with Google Assistant?

People were noticing that Assistant was acting oddly and becoming more and more erratic. Commands that once worked fine were suddenly leading to strange results. Sometimes, Assistant just wouldn’t answer at all — or would return random errors anytime you tried to summon it.

I was among the lucky who didn’t run into any such issues for a while, but that’s absolutely changed in more recent months. The many Assistant-associated gizmos scattered around my home — speakers, screens, and other “Hey Google”-responding gadgets in practically every room — have become exercises in frustration. Error after error, failed command after failed command.

Plain and simple, what was once a reliable tech tool has turned into a steaming hot mess of disappointment. Even my kids, once obsessed with what they saw as the all-knowing and omnipotent Google genie, have now taken to berating and insulting the brand with inspired barbs like “Hey Google, why are you so stupid?” and — more direct yet — “Hey Google, you suck.”

While Google has yet to officially give us any guidance about its plans for the future of Assistant and how Gemini might fit into that, it’s become increasingly apparent over time that the company’s moving away from Assistant and devoting its time and resources toward building up Gemini as its replacement.

And that brings us to the other side of our current unchosen reality: When Gemini first showed up as an “experimental” Assistant alternative on Android earlier this year, it was an underwhelming glimpse at a future precisely none of us asked for — and that’s putting it mildly.

As I wrote back in February:

The real problem with Gemini as the Android assistant is that Google’s forgotten why a phone assistant actually matters — and what we, as actual users in the real world, need from such a service.

Using Gemini in place of Google Assistant feels like having a square peg awkwardly forced into a round hole. It feels more like an awkward adaptation of an AI chatbot than a phone assistant — something that’s half-baked at present and not at all intended or appropriate for this context.

And the more time you spend using Gemini, the more apparent that disconnect becomes.

To Google’s credit, in the time since then, Gemini as an Android assistant has only continued to get better. It’s clear that Google’s working to fill in the gaps and bring the service up to speed with Google Assistant as quickly as possible.

Heck, this week alone, we’ve seen signs of Google cookin’ up a Gemini automation system to take the place of Assistant routines, an improvement that’d allow Gemini to interact with streaming apps and control audio playback on your phone (something that, yes, has thus far not been possible), and a series of under-the-hood enhancements that’d make Gemini faster and more “natural” to interact with on Android.

Hey, that’s great! All of it. (Remember our diverging dialogues from a minute ago?)

But more than anything, it highlights just how badly Google has bungled this transition — and how much trust it’s rightfully gonna cost it in the eyes of its most committed users.

The Gemini on Android reality

The truth is that using Gemini as an assistant on Android, to borrow a phrase from my wise progeny, still sucks. And it’s not because its generative AI parlor tricks aren’t up to snuff. That stuff — all the new tricks Gemini brings to the table — honestly doesn’t matter all that much for most of us in this context.

As I mused a couple months back:

When it comes to an on-demand mobile device assistant, we don’t need the ability to have mediocre text or creepy images created for us from anywhere across Android. We need a fast, consistent, reliable system for interacting with our phone and other connected devices, getting things accomplished with our core productivity services, and getting short bursts of basic info spoken aloud to us in response to simple questions.

What Google’s scrambling to do now is catch Gemini up with those foundational basics — the table stakes, in other words, and the bare minimum of what makes for an effective and reliable phone assistant. And while it may be making impressive strides toward that goal, in the meantime, Gemini continues to fail at the core tasks we actually need from a service of this nature while also continuing to be pushed as a default for more and more people who never asked for it. And at the same time, Assistant itself — thanks to its apparent abandonment within Google — is also now flailing at those same tasks, which it once handled handily.

That adds up to create a lose-lose for everyone — except for maybe Google’s business department, which can now tell investors it’s pushing new boundaries and leading the way in generative AI development.

The most maddening part of all is that it didn’t have to be like this. Google could have thoughtfully worked out the best parts of Gemini, as an assistant, and then integrated those elements into the existing Assistant framework in a way that’d feel like an upgrade and expansion instead of a rug-swept-out-from-under-us degradation. It could have kept the system and the brand it spent years building up while simply sprinkling new capabilities in instead of doing its typical Google thing of pulling a complete 180, giving up on something entirely, and then leaving us — as its users — to sort out the mess.

More than anything, Google could have waited until Gemini was actually in a reasonably ready-for-prime-time state before rolling it out and pushing Android phone owners to switch over to it — while simultaneously dropping the ball on the existing Assistant platform and leaving us all in the lurch with no clear answers or direction.

All of this brings me back once more to the question I posed at the start of this year, when the effects of the tech industry’s hype-driven AI obsession were just starting to become apparent:

How much of the current rush to cram some form of “AI” into everything imaginable is actually about what’s useful and advantageous for us, as human users of these creations? And how much is more about chasing the latest buzzword du jour and finding a reason to use the term “AI,” no matter what it accomplishes or how it fits into our lives?

I’ve said it before, and I’ll say it again: Most of us don’t need a “creative” chatbot at our fingertips all day long, in every area of our Android experience. We don’t need on-demand image and text generation at our constant beck and call. And we certainly don’t need long-winded, on-screen answers of questionable accuracy for our short spoken questions.

What we need is a simple, reliable task-handler and an accurate and concise info-relayer. Assistant established the framework for that. And seeing Google throw that all away now and start over from scratch with Gemini — while forcing us to suffer along with its slog back toward a state of basic reliability — sure doesn’t feel like “progress.”

We may well reach a point where Gemini genuinely grows into a fully featured, reliable replacement for Google Assistant on Android and across the rest of Google’s ecosystem. I certainly hope so! But instead of bringing us to that point in a calm, carefully planned, and sensible-seeming manner, Google’s forcing us through a painfully rocky long-term transition — with a new tool that isn’t up to snuff, an old tool that’s being left to fall apart at the seams, and no meaningful guidance as to how all of this will ultimately play out.

That doesn’t seem like a smart way to handle things to me. And, I don’t know about you, but in this case, I don’t even need a half-baked virtual assistant to tell me why.

Get unmatched insight on all the news that matters with my free Android Intelligence newsletter. Three things to know and try in your inbox every Friday, straight from me to you.

Android, Google, Google Assistant, Voice Assistants
Kategorie: Hacking & Security

Google can’t seem to quit cookies, delays killing them again

25 Duben, 2024 - 18:52

Google this week once again said it will delay plans to eliminate third-party identity tracking software — cookies — from its Chrome browser and from Android OS. Now, it plans to remove them in 2025.

The tech giant said the latest delay is due to “ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers.” 

As far back as 2019, Google was telling users it planned to limit third-party cookies and phase them out in Chrome and other Chromium open-source browsers by 2022. In 2020, it delayed its plans to eliminate them through its Privacy Sandbox initiative. Then in 2022, Google pushed back its plans to 2023. And last year, it delayed the plans again — to the second half of 2024.

In January, it again said it would find alternatives to cookies for identifying users and discovering their habits, but was pushing back plans to eliminate trackers.

“We recognize that there are ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers, and will continue to engage closely with the entire ecosystem,” Google wrote in a blog post this week.

“For marketers, the message is clear: get off cookies now,” said Ken Weiner, chief technology officer at digital advertising platform GumGum. “Most of the industry, including mobile and other browsers like Safari, have already moved away from cookies or never used them in the first place. Don’t wait for Google’s shifting timeline to take action; the transition should be happening now. Keep in mind that regardless of cookies, the web’s future — driven by consumer preferences and regulatory changes — is identity-less. Contextual targeting is the best way forward.”

Google has been working with the UK’s Competition and Markets Authority (CMA) and the Information Commissioner’s Office (ICO) on its plans to use its Privacy Sandbox instead of cookies. The British regulatory authority and others have voiced concerns about Google’s plan, saying it could “unfairly hinder competition” by giving preference to Google’s own advertising products, which would increase the company’s market dominance.

“We remain committed to engaging closely with the CMA and ICO and we hope to conclude that process this year,” the company said. “Assuming we can reach an agreement, we envision proceeding with third-party cookie deprecation starting early next year.”

A cookie is a small file that is downloaded onto a computer when the user visits a website. Cookies can do helpful things, such as remembering preferences, recording what has been added to a shopping basket, and counting the number of people viewing a website. They can also tie browsing activity to a person’s identity, allowing third parties to bombard users with emails and targeted online ads.

Cookies often ingest and retain sensitive consumer information such as login credentials, personally identifiable information, and browsing history. As a result, the move away from cookies should help reduce some cybersecurity risks.
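To make the mechanics described above concrete, here’s a minimal sketch of how a cookie string is parsed, using Python’s standard library (the cookie names and values here are hypothetical, not from any real site):

```python
from http.cookies import SimpleCookie

# Parse a cookie string like the one a browser sends back to a server
# in the Cookie request header; the names and values are made up.
cookie = SimpleCookie()
cookie.load("session_id=abc123; theme=dark")

# Each entry is one of the small key/value records described above.
session = cookie["session_id"].value  # "abc123"
theme = cookie["theme"].value         # "dark"
print(session, theme)
```

A first-party cookie like this is scoped to the site that set it; the third-party cookies Google is deprecating use the same mechanism but are set by a domain other than the one in the address bar, which is what enables cross-site tracking.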

Over the past few years, the online advertising industry has been undergoing a sea change as regulators restricted how cookies can be used and browser providers moved away from them in response to consumer outcries over privacy. “They often feel surveilled; some even find it ‘creepy’ that a website can show them ads related to their behavior elsewhere,” according to a recent study by the HEC Paris Business School.

Google has said its Privacy Sandbox project will create new standards for websites to access user information without compromising privacy by sharing a subset of user information without relying on third-party cookies. “It will provide publishers with safer alternatives to existing technology, so they can continue building digital businesses while your data stays private,” the company said on its website.

For Android device users, Google will introduce new solutions that operate without cross-app identifiers — including Google Play services’ Advertising ID, which will limit data sharing with third parties and offer a user-resettable, user-deletable ID for advertising.

Google Chrome, which is used for about 66% of all internet traffic, impacts more consumers than any other browser, so changing the way it tracks users would also have market-changing consequences.

“In the short term, there will be some disruption with advertisers struggling to market themselves effectively,” said Roger Beharry Lall, research director for IDC’s Advertising Technologies and SMB Marketing Applications practice. “This may seem good for consumers who are ‘cookie free.’ However, there will likely just be more irrelevant ads flooding the media trying to find an audience. So, it’s a bit of a double-edged sword.”

Browser Security, Browsers, Chrome, Chrome OS, Privacy
Kategorie: Hacking & Security

Apple reportedly cuts Vision Pro production due to low demand

25 Duben, 2024 - 17:51

Apple has cut Vision Pro production due to low demand for the $3,500 mixed reality headset, according to Ming-Chi Kuo, an Apple analyst at TF International Securities. 

Apple reduced shipments to between 400,000 and 500,000 units for the year, despite “market expectations” of sales of around 700,000 to 800,000 units, Kuo said in a blog post Wednesday.

Apple cut orders ahead of a planned international launch for the device, said Kuo.

Kuo had earlier claimed that between 160,000 and 180,000 of the spatial computing devices were sold in preorders before the February launch. But sales quickly slowed after that early burst of interest, and because of the sharp fall-off in demand in the US, Apple now takes a “conservative view” of demand outside of the US, Kuo said this week. (Bloomberg’s Mark Gurman reported this week that demand for Vision Pro demos in Apple stores has also fallen off significantly as interest waned.)

Apple now expects Vision Pro shipments to decline year on year in 2025, Kuo said, prompting Apple to review and adjust its product roadmap; plans for a second Vision Pro in 2025 are now reportedly on hold.

Apple may shed more light on the situation when it releases its Q2 financial results next Thursday.

Wider predictions for Vision Pro sales have varied significantly, and it’s difficult to point to a consensus on market expectations. According to an Ars Technica report in June 2023, Wedbush Securities forecast around 150,000 units in the first year of sales; Morgan Stanley expected sales of around 850,000; and Goldman Sachs predicted sales as high as 5 million.

It was rumored that Apple initially hoped to sell 1 million of the devices in the first year on sale, according to a Financial Times report in 2023; that expectation was later revised down due to production issues. 

Morgan Stanley analysts predicted in January that Apple would ship between 300,000 and 400,000 headsets in 2024, according to a CNBC report, while a Wedbush analyst put the figure at 600,000 units for the year, according to Business Insider.

Kuo himself forecast sales of around 500,000 units for 2024, according to a January blog post.  

What seems clear is that the Vision Pro will make up a relatively small part of the total market for AR/VR devices, a category that itself remains niche.

While the first-generation device is powerful and impressive, said Ramon Llamas, research director with IDC’s devices and displays team, consumers still need to be convinced of its value and utility. 

“As a multimedia consumption device, it is pushed up hard against consumers’ large screen televisions and computers,” he said. As a workplace productivity device, he added, it “remains to be seen exactly how it increases efficiency and productivity altogether.

“On top of this, the price most likely makes a lot of people balk,” said Llamas. 

The wider market for AR/VR devices is expected to return to growth in 2024, up 44% from the previous year to 9.7 million units, according to IDC data. This follows a tough year in 2023, when headset sales declined 23.5%. 

Meanwhile, Apple touted the enterprise potential of the Vision Pro earlier this month.

“There’s tremendous opportunity for businesses to reimagine what’s possible using Apple Vision Pro at work,” Susan Prescott, Apple’s vice president of worldwide developer relations and enterprise marketing, said in a blog post, pointing to VisionOS apps from the likes of SAP, Lowe’s, Porsche, and others. 

Apple, Augmented Reality, Virtual Reality
Kategorie: Hacking & Security

The end of non-compete agreements is a tech job earthquake

25 Duben, 2024 - 12:00

Frankly, I didn’t think the Federal Trade Commission (FTC) had the guts to ban non-compete agreements that prevent many workers from joining rival companies. I was wrong. On Tuesday, by a 3-2 party-line vote, the agency’s Democratic majority decided to do just that.

Though they’ve long been called “agreements,” anyone who’s ever had to sign one knows that would-be employees seldom have any choice in the matter. You agree and get the job, or you don’t and stay on the unemployment line. And, oh, by the way, 30% to 40% of workers are required to sign non-competes after they’ve already accepted a job.

That’s why labor unions, liberal think tanks, and millions of employees hate them. 

You might think non-competes are only an issue for top tech engineers, software developers, and executives. Wrong — so, so wrong. 

Sure, historically, companies used these agreements to lock down highly skilled workers and executives with access to trade secrets or proprietary information. But that hasn’t been the case for decades. According to the Economic Policy Institute, a third of companies now require all their workers to sign non-competes. That includes “valued” employees such as hourly workers in minimum wage jobs doing janitorial duties or food service. 

Couldn’t this be fought in the courts? Technically, sure it could. But as the Trembly law firm put it, “Non-compete litigation is typically fast-paced and expensive.” The key word in that sentence is “expensive.” If you’re an employee seeking to get free of a non-compete, unless the company you’re moving to will fight for you, you won’t be able to afford the lawsuit. 

The FTC argues that requiring workers in low-end jobs to sign non-competes is overreach, and that even valuable employees don’t need to be bound by them. After all, the agency claims, “Trade secret laws and non-disclosure agreements (NDAs) both provide employers with well-established means to protect proprietary and other sensitive information. Researchers estimate that over 95% of workers with a non-compete already have an NDA.”

In addition, as FTC Chair Lina M. Khan said: “Non-compete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism, including from the more than 8,500 new startups that would be created a year once non-competes are banned.”

I don’t know if the end of non-competes would do all that. But I do know that in the decades I’ve been writing about technology, I’ve seen non-competes become iron collars around the necks of tech’s best and brightest workers, help desk staffers, and even the people who keep the offices clean. 

I understand businesses want to reduce competition and prevent their workers from easily jumping ship, but I’ve never thought non-compete agreements were the right way to do so. Want to keep your best staffers? Pay them, let them work from home, and give them a pathway to promotion. This isn’t rocket science. 

Nevertheless, my attorney friends tell me that their corporate employers or clients had fits when word of the FTC ruling came out. You would have thought a lightning bolt had fried their stock prices out of the blue sky. 

Really? While I was surprised by the FTC action, anyone who’s been paying attention knew that non-compete agreements were getting walloped left, right, and sideways. 

True, as Republican Commissioner Andrew Ferguson said, the ruling “nullifies more than 30 million existing contracts and forecloses tens of millions of future contracts.” That’s a big deal. But, again, the writing was on the wall. 

That’s why, while way too many CEOs are having conniptions at the moment, business and law-savvy groups such as the US Chamber of Commerce immediately sued the FTC, seeking to overturn the decision. They were ready. 

Their lawyers are arguing that the ban applies to a host of contracts that could not harm competition in any way. Besides, the FTC didn’t have the power to issue such a ban. And, in any case, such a categorical ban wasn’t legal. Those are the arguments, at least.

Who’s right? Who’s wrong? Stick around and find out. I have every expectation that this will grind its way through the court system all the way to the Supreme Court sometime in the late 2020s. (I expect, by the way, that the issue that will decide the case won’t have anything directly to do with the FTC’s ruling; it’ll revolve around whether the FTC has the power to make such a fundamental legal policy change.)

In the meantime, you have about four months to decide what to do about your non-compete agreements before the FTC ruling goes into effect. Once it hits, all existing non-compete agreements will be nullified, except for those applying to executives in “policy-making positions” who make at least $151,164 a year. And the ruling won’t let your company impose any new non-competes, even on executives.

Personally, I’d dump any non-compete agreements immediately and rewrite my employment contracts to use NDAs and trade secrets in their place. No matter what the courts decide, employees hate non-compete agreements — that won’t change.

And what does all this mean for top tech talent that’s been feeling trapped? It’s time to talk to your bosses about whether they really want to keep you around and explain that the carrot of a better deal will be a whole lot sweeter than the threat of a non-compete clause. If they don’t hear you? Get ready to walk. The doors are opening.

Careers, Government, IT Jobs, IT Skills, Regulation
Kategorie: Hacking & Security

Meta opens its mixed-reality Horizon OS to other headset makers

24 Duben, 2024 - 17:19

Meta will license the software underpinning its Quest headsets to third-party hardware manufacturers in a bid to spur wider adoption of mixed-reality technology. 

Access to Horizon OS — the operating system used in Quest devices — should reduce barriers to market for hardware makers seeking to create new products, Meta said. And software developers will benefit from a larger audience for mixed-reality apps that can be sold in Meta’s Horizon app store (formerly Quest Store). 

“Developers will have a much larger range of hardware that can run their apps, and more device makers will expand their market to a wider range of users, much like we’ve seen with PCs and smartphones,” Meta said in a blog post Monday.

The push for an open ecosystem offers Meta a chance to build on its early dominance in the AR/VR market, analysts say, while lowering the barrier to entry for hardware makers. 

“This is a smart move for Meta to diversify their hardware ecosystem, while also working to make Meta Horizon OS the standard mixed-reality headset OS,” said Will McKeon-White, senior analyst at Forrester. “Before, they were effectively dependent on Oculus sales — this decouples their OS from their headset and hardware efforts.”

What is Horizon OS — and who will use it?

Based on a modified version of Google’s Android operating system, Meta’s Horizon OS is the result of a decade of work to build virtual and mixed reality products, the company said. Meta has spent billions of dollars in recent years to create devices such as the Quest 3 and Quest Pro. This includes adding features such as real-time video pass-through, “inside-out” tracking that tracks a user’s movements and position, and spatial anchors that allow digital objects to be fixed in physical space. 

Several companies have already lined up to incorporate Horizon OS into their hardware: Lenovo, Asus, and Microsoft’s Xbox gaming business. 

Asus and Xbox will focus on gaming, while Lenovo — which previously partnered with Meta to produce the Oculus Rift S — will develop headsets targeted at “productivity, learning and development,” Meta said. It may take a couple of years before the devices are available, Meta CEO Mark Zuckerberg said in an Instagram video message Monday.  

By opening its OS to others in the market, Meta is “taking a page out of the Google playbook,” said Ramon Llamas, research director with IDC’s devices and displays team, referencing Android’s position in the smartphone and smartwatch markets. 

“Google put together the platform and a bunch of vendors ran with it,” he said, noting that Google’s own hardware competes with partners such as Samsung and others that rely on Android.

As with Android on smartphones, access to a pre-existing software ecosystem is a big draw for mixed-reality headset vendors, particularly at an early stage of the market when demand remains low.

Eliminating barriers for hardware makers

Creating a mixed-reality headset is a significant engineering challenge for hardware makers, Llamas said, and the need to also build the underlying software compounds the issue. “Especially if you’re a small player, that’s a terrific hurdle to cross,” he said.

Some of those issues are now being removed, however. Headset vendors can now get specialized VR chipsets from Qualcomm, with a software platform available from Meta. “That’s an attractive value proposition — this is going to remove a lot of barriers for a lot of companies,” said Llamas.

A more mature ecosystem could help drive customer adoption. By incorporating Meta’s Horizon OS, hardware vendors could find it easier to convince customers they have the requisite apps and ecosystem to support their product, said Anshel Sag, principal analyst at Moor Insights and Strategy. 

Despite the potential benefits, hardware vendors need to consider whether partnering with Meta is the right strategy. “It remains to be seen who else is going to jump on board,” he said. “There’s a lot to like, but do your due diligence and make sure this is a good fit for you.”

Competition from Apple, Google                         

Meta is the dominant player in the AR/VR market, accounting for over 60% of units sold in Q4 of 2023, according to IDC data. It’s a large chunk of a relatively small market, with IDC forecasting 9.7 million devices will be sold globally this year.

Though demand for mixed-reality devices has not yet taken off, Meta faces competition on several fronts. The launch of Apple’s Vision Pro earlier this year provided a new rival (alongside validation of the device category), though Apple’s costly device is only expected to sell a few hundred thousand units this year.

Google is also expected to provide the operating system for Samsung’s mixed-reality headset that’s due to launch later this year. While Google is the major challenger to Apple’s mobile ecosystem with Android, the extent of its ambitions in the AR/VR market isn’t clear at this stage.

Meta, which has made the biggest investment in mixed-reality technologies in recent years, has an early advantage, said Sag, with a relatively strong library of 3D apps compared to Google and Apple; the latter two are effectively starting from scratch with their own software efforts. 

The decision to provide access to its OS could provide another advantage going forward. “Meta has the headstart here…, opening their ecosystem and making development easier will be a challenge to Apple and Google and will ultimately benefit everyone, with more competition among ecosystems,” said Sag.

For mixed-reality device customers, increased competition in the market should be good news. 

“The real winner in all this is going to be the end user,” said Llamas. “It’s going to be the consumer for now, and it’s going to be the enterprise user shortly thereafter.”

Augmented Reality, Google, Virtual Reality
Kategorie: Hacking & Security

Microsoft uses its genAI leverage against China — prelude to a tech Cold War?

24 Duben, 2024 - 12:00

Back in the 19th century, if the United States or some other military power wanted to bend a smaller country to its will, it would often display its might with a show of force, sending a fearsome display of gunboats just off its target’s shores. The naval display usually made its point: not a single shell had to be fired for the smaller nation to accede to the demands of the day. 

It was known as gunboat diplomacy.

Today, gunboats no longer rule the world. Tech (and, increasingly, generative AI) does. And Microsoft is now working hand in glove with the federal government to use its considerable genAI might to win what is being called a “tech Cold War” the US is waging against China.

The cooperation has just begun, but it’s already bearing fruit, getting a powerful genAI company based in the United Arab Emirates to cut its ties to China and align with the US. At first blush, it sounds like a win-win: What can possibly be bad about boxing out China from the Middle East, increasing US cooperation with Arab states, and showering profits on a US company for its help?

As it turns out, a lot could go wrong. There are significant dangers when the most powerful (and wealthiest) nation on the planet works so closely and secretly with the world leader in AI. The biggest danger: by cooperating so closely with Microsoft, is the US giving up on ever trying to rein in genAI, which researchers have already warned could represent an existential threat to humanity if not regulated properly?

Let’s look at how the federal government and Microsoft worked together to outmaneuver China and push it out of G42, the most influential AI company in the Middle East, and what that means for emerging plans to regulate genAI tools and platforms.

Boxing China out of the Middle East’s Best AI Company

The immediate target of this round of tech diplomacy is the United Arab Emirates-based G42, which is about as well-connected as any company can be. The New York Times describes it as “a crown jewel for the UAE, which is building an artificial intelligence industry as an alternative to oil income.” It’s controlled by Sheikh Tahnoon bin Zayed, the UAE’s national security adviser, who is among the most powerful members of Abu Dhabi’s royal family, according to Forbes.

The Times says G42 is right in the middle of US efforts to blunt “China’s ambitions to gain supremacy in the world’s cutting-edge technologies, including artificial intelligence, big data, quantum computing, cloud computing, surveillance infrastructure and genomic research.”

Before the Microsoft deal, the US was especially concerned about G42’s connections to large Chinese tech firms, including telecommunications giant Huawei — which is under US sanctions — and possibly even the Chinese government.

According to the Times, US officials worried G42 was being used to siphon off advanced American technology to Chinese tech firms or to the Chinese government. “Intelligence reports have also warned that G42’s dealings with Chinese firms could be a pipeline to get the genetic data of millions of Americans and others into the hands of the Chinese government,” the Times reported.

Enter Washington’s most powerful officials and the point of its sharp spear, Microsoft. We don’t know exactly what happened behind the scenes. But we do know a deal was “largely orchestrated by the Biden administration to box out China as Washington and Beijing battle over who will exercise technological influence in the Persian Gulf region and beyond,” according to the Times.

US Commerce Secretary Gina Raimondo traveled twice to the Emirates to get the complex agreement done. It gave the US — and Microsoft — exactly what they wanted. Microsoft will invest $1.5 billion in G42, which will sell Microsoft services to train and tune genAI models. G42 will also use Microsoft’s Azure cloud services, and it agreed to a secret security arrangement, of which no details have been made public. 

Chinese technology, including from Huawei and others, will be stripped out of the company. Microsoft President Brad Smith will join G42’s board, and Microsoft will audit the company’s use of its technology. (It wouldn’t be a surprise if that auditing is designed in part to ensure the connection between G42 and Chinese companies and government has been completely severed.)

So, in essence, the US pushed China out of the most influential genAI company in the Middle East and Microsoft now has a significant foothold in a region that will be spending countless billions on AI as it pivots away from an oil economy. In the words of the Times, the deal could become “a model for how US firms leverage their technological leadership in AI to lure countries away from Chinese tech, while reaping huge financial awards.”

What happens next?

The G42 deal has largely flown under the radar, while much more public skirmishes have been fought in the tech Cold War between the US and China — including the battle over banning TikTok in the US and China’s decision to force Apple to pull WhatsApp, Threads and Signal from its Chinese App Store. But TikTok and the others are just a side show. The future is AI, not tweens watching 30-second videos about silly pranks and makeup tips.

That means Microsoft will have an increasingly close relationship with the US government, as will other genAI leaders, including Alphabet, OpenAI, Meta and Amazon. If the US is to thwart China’s AI and tech ambitions, it desperately needs those companies’ cooperation.

But that kind of cooperation comes at a price. The US has a terrible track record in reining in tech. The Biden administration has been willing to use anti-trust laws to go after Big Tech, even though Congress has been unwilling to act. But it’s hard to imagine the government will continue to wield the Big Stick of anti-trust investigations and lawsuits if, at the same time, it’s asking Microsoft and others to do its bidding against China. 

The first victim of the tech Cold War against China might well be serious government oversight over the dangers of AI.

Generative AI, Government, Microsoft, Regulation, Technology Industry
Kategorie: Hacking & Security

A crafty new Android notification power-up

24 Duben, 2024 - 12:00

Has there ever been something as simultaneously invaluable and irritating as our modern-day device notifications?

All the beeps, bloops, and blorps our various gadgets send our way serve an important purpose, of course — at least in theory. They keep us attuned to our professional and personal networks and everything around ’em to make sure we never miss anything important.

But they also demand our attention, interrupt what we’re doing, and annoy us endlessly, often with stuff that really doesn’t require any immediate acknowledgment or reaction.

And while Android’s notification systems offer plenty of nuanced control over how different alerts do and don’t reach you, it still seems virtually impossible to avoid swimming in a sprawling sea of stuff in your phone’s notification panel at the end of each day.

So what if there were a better way — a smarter system that could monitor your incoming Android notifications for you, condense all the less pressing noise down into a single alert, and make sure you see only the messages, meetings, and reminders that really matter?

[Get fresh practical knowledge in your inbox with my free Android Intelligence newsletter. Three new things to try every Friday!]

My friend and fellow Android-appreciating organism, have I got just the thing for you.

Meet your Android notification nanny

Brace yourself, dear biped: I’m about to draw your attention to one of the best and most powerful Android productivity tools out there — and one shockingly few mortal beings seem to be aware of.

Much like the Android app drawer enhancement we talked about the other day, it’s a perfect example of the type of advanced customization and efficiency-enhancing intelligence that’s possible only on Android. But you really have to be in the know to know about it.

Allow me to introduce you to a brilliant little somethin’ called BuzzKill.

BuzzKill is an Android app that, in the simplest possible terms, lets you create custom filters for your Android phone’s notifications — almost like Gmail filters, only for Android alerts instead of emails.

I’ve talked about BuzzKill before and shown you all the basics of how it works and what kinds of simple, insanely helpful things it can do for you. Today, I want to zone in on a specific new “experimental” feature the app recently started offering and why it might be worth your attention.

The feature is called Summarize. And it does exactly what you’d expect from that name: It takes clusters of incoming notifications that meet certain conditions and then combines ’em together into a single, far less overwhelming and distraction-creating alert.

You might, for instance, ask BuzzKill to intercept all incoming notifications from your Android Messages app during the workday and combine ’em into one notification you can easily see at a glance when you’re ready to catch up. Or maybe you’d want it to collect all your incoming Slack alerts in the evenings and group those together to avoid a freeway-style backup at the top of your screen.

Heck, maybe you want it to watch for all notifications from Messages, Slack, and Gmail on the weekends, keep ’em all together in a single summarized notification, and then ding your phone incessantly if any of the incoming messages has a specific word or phrase indicating a need for immediate attention — something like, say, “urgent,” “broken,” or “holy humbuggery, what in the name of codswallop just happened?!”
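The kind of rule logic described above is easy to model. BuzzKill’s actual internals aren’t public, so this Python sketch is purely illustrative (the class names, the keyword list, and the return values are all my own assumptions), but it captures the “summarize everything, except messages with urgent keywords” idea:

```python
from dataclasses import dataclass, field

# Hypothetical model of a BuzzKill-style "Summarize" rule with a keyword
# exception. This is an illustration only, not the app's real code.

URGENT_WORDS = {"urgent", "broken"}  # words that bypass the summary

@dataclass
class Notification:
    app: str
    text: str

@dataclass
class SummaryRule:
    apps: set                              # which apps the rule watches
    held: list = field(default_factory=list)

    def handle(self, n: Notification) -> str:
        """Decide how an incoming notification gets delivered."""
        if n.app not in self.apps:
            return "pass through"          # rule doesn't match this app
        if any(w in n.text.lower() for w in URGENT_WORDS):
            return "alert immediately"     # keyword exception wins
        self.held.append(n)                # fold into the running summary
        return f"summarized ({len(self.held)} held)"

rule = SummaryRule(apps={"Messages", "Slack", "Gmail"})
print(rule.handle(Notification("Messages", "lunch later?")))       # summarized (1 held)
print(rule.handle(Notification("Slack", "URGENT: build broken")))  # alert immediately
```

In the app itself you build this with taps instead of code, of course; the point is just that each rule boils down to a match condition plus an action.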

Whatever the specifics, you’ll only have to think through and set up those parameters once. And from that moment forward, anytime notifications meeting your conditions come in, you’ll see something like this:

Android notifications, summarized — with minimal clutter and distraction.

JR Raphael, IDG

Just a single combined alert for all that activity — not bad, right?

If there’s nothing particularly important, you can swipe it away in one swift gesture, using any finger you like (hint, hint; choose carefully). If you want to explore any of the summarized contents further, you can tap the “Expand” command in the notification’s corner to — well, y’know…

My Android notifications expanded back into their standard, split-apart state.

JR Raphael, IDG

Kinda handy, wouldn’t ya say?

Where BuzzKill’s powers really come into play is with all the extra conditions you can set up — and how impossibly easy the app makes it to manage it all. All I did to get the above going was create a super-simple “if this, then that”-style rule within BuzzKill, like so:

The behind-the-scenes magic that makes my Android notification summarizing happen.

JR Raphael, IDG

And then, to build in a supplementary rule that makes sure certain high-priority notifications stand out from that summary and grab my immediate attention, I created a second “if this, then that” guideline:

BuzzKill understands that there’s an exception to every rule.

JR Raphael, IDG

See? Told ya it was easy!

And make no mistake about it: All of this is just scratching the surface of what BuzzKill can accomplish. One of my favorite ways to use it, for instance, is to keep low-priority notifications from interrupting me at all during the workday and instead have ’em batched together into a single evening-time delivery.

All my Photos alerts arrive in one batch daily, thanks to this nifty notification rule.

JR Raphael, IDG

I also rely on it to prevent rapid-fire back-to-back messages from buzzing my phone 7,000 times in seven seconds — a problem Android 15 appears poised to address, too, albeit in a much less nuanced and customizable way.

Take that, rapid-fire short-message texters!

JR Raphael, IDG
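That rapid-fire cooldown behavior is simple to sketch, too. Again, this is a hypothetical Python model rather than BuzzKill’s real implementation; the window length and the per-sender bookkeeping are assumptions I’m making for illustration:

```python
# Hedged sketch of a "cooldown" rule: a sender can only buzz the phone
# audibly once per window; follow-ups inside the window arrive silently.

COOLDOWN_SECONDS = 60  # assumed window length

class Cooldown:
    def __init__(self, window: float = COOLDOWN_SECONDS):
        self.window = window
        self.last_buzz = {}  # sender -> timestamp of last audible alert

    def should_buzz(self, sender: str, now: float) -> bool:
        last = self.last_buzz.get(sender)
        if last is not None and now - last < self.window:
            return False     # still inside the window: deliver silently
        self.last_buzz[sender] = now
        return True          # first message in a while: buzz away

cd = Cooldown()
print(cd.should_buzz("Crissy", now=0.0))   # True  -- first message buzzes
print(cd.should_buzz("Crissy", now=5.0))   # False -- rapid follow-up is silent
print(cd.should_buzz("Crissy", now=90.0))  # True  -- window has elapsed
```

Seven messages in seven seconds, one buzz — which is the whole appeal.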

The app’s new experimental notification summarization option is so interesting and packed with potential, though, I just had to share it with you once I really started exploring it and thinking through all the ways it could be helpful.

BuzzKill costs four bucks, as a one-time up-front payment. The app doesn’t require any unusual permissions, doesn’t collect any form of data from your phone, and doesn’t have any manner of access to the internet — meaning it’d have no way of sharing your information even if it wanted to.

It’s yet another illuminating illustration of the incredible productivity power Android provides us — a power anyone can embrace, with the right set of know-how.

And now, you have it. Happy filtering!

Learn all sorts of useful tech tricks with my free Android Intelligence newsletter. Three new things to try every Friday — straight from me to you.

Android, Google, Mobile Apps, Productivity Software
Kategorie: Hacking & Security

How to fix iCloud sync in seconds

23 Duben, 2024 - 23:03
What is iCloud Drive?

In the simplest terms, iCloud Drive is Apple’s cross-platform cloud storage system that allows users to sync and store files, photos, videos, calendar events, contacts and other important data. iCloud Drive has been around since 2014, and while the platform is much more robust than it was in the early days, sometimes devices that rely on it don’t sync properly.

What is iCloud sync?

iCloud sync allows all of your devices to access the same updated data (or photos, videos, contacts, etc.), regardless of device. It’s usually reliable, but sometimes you’ll find content fails to sync between devices in the few seconds it should take. If this seems to be happening to you, these simple tricks can help get things running smoothly again.

Is iCloud sync on by default?

Yes, iCloud should be syncing your data automatically. But if there are some things you don’t want synced across devices, you can specify what gets shared via iCloud in System Settings (macOS) and Settings (iOS). You can even turn it off completely if for some reason you don’t want your data to sync.

Is everything up-to-date?

First, check to make sure you are running the latest version of macOS on your Mac, iOS on your iPad or iPhone, or Windows on a PC.

Check that iCloud is working

It also makes sense to double-check that iCloud services are working correctly before running through any other changes. You can visit Apple’s System Status page to make sure things are indeed up and running.

One of the first things to do is check Apple’s system status page for any outages.

Make sure you are properly logged in

Step two is to ensure you are logged into iCloud using the same Apple ID on all your devices. Go to icloud.com, log in with your Apple ID and then tap iCloud Settings (either the gear-shaped box icon or by selecting it in the drop-down menu underneath your name at the top right of the iCloud browser window).

In the next window, you should see your storage space status and a row called My Devices. Are all the devices you want to sync included on the list? If not, it is possible they are not using the same Apple ID. (You can do quite a lot of useful things through iCloud’s online service).

Check dates and time

Next, check all the devices that should be syncing. You must ensure these are configured to set time and date automatically and have iCloud Drive/Documents & Data enabled. Follow these steps:

iOS: Settings>Apple ID>iCloud>iCloud Drive. Toggle to On

Mac: System Preferences>iCloud>ensure all the iCloud services you want to sync are checked.

Make sure iCloud is enabled for specific apps

If you have a particularly balky app that isn’t syncing as it should, you’ll want to check System Settings (in macOS) or Settings (in iOS). Click on your Apple ID account, scroll down to iCloud, and check there to see which apps are using iCloud. If your iCloud access for the app isn’t on, you’ll want to enable it. If it’s there, toggle iCloud access off, then on again to (hopefully) get things in sync again.

Check that cellular access is enabled

If everything is syncing okay while you’re using Wi-Fi networks, but you run into problems while on a cellular network, you’ll want to make sure cellular access is turned on. You can check this in Settings (in iOS); scroll down to Cellular and check to make sure it’s enabled for the apps you use. Also, scroll all the way down to make sure iCloud Drive is enabled over cellular.

Force Sync

Once you know your system(s) are set up correctly, you can use this simple trick to force iCloud Contacts and Calendars to sync:

To refresh your iCloud Calendars, launch the app on your iOS device and tap the “Calendars” button at the bottom of the page. When you get to the next page just tap and hold your finger on the screen and drag the list down until the activity icon appears and release the page. The activity icon will spin briefly, and you should find iCloud has synced your calendars for you.

This also works with Contacts. Launch the app and select “Groups” on the All Contacts page. Once you are in Groups, just tap and hold your finger and drag the page down as you did for Calendars. The activity icon will appear, and your Contacts will be synced.

Log out of iCloud and log in again

If you regularly experience sync problems with your iOS device(s) and you know your network is stable, then you should try logging out of your iCloud account on your iOS device or Mac, then log back in.

IMPORTANT: Before doing this, be certain to follow Apple’s extensive instructions to back up your iCloud data.

Sometimes, logging out of iCloud and then logging back in will clear up syncing problems.

Jonny Evans

At icloud.com you can see all of your devices in one place.

To log out, go to iCloud Settings/System Preferences and click Sign Out. You’ll have to respond to a series of prompts before this completes.

Restart your device, return to iCloud’s controls and sign back in. (Please make certain to use the same email address for your Apple ID across all your systems.)

Now you should re-enable all the iCloud features you want to use.

This process usually solves any sync problems you may have, though you may find it necessary to repeat this sequence on all your devices.

Restart your device(s)

If problems persist then close and restart the relevant iCloud-enabled app: Contacts or Calendar, for example. Double-click the Home button (or swipe up to about halfway up the screen and hold for a second or so on iPhone X), swipe through your active apps and swipe up to close the app. (You can long press the app icon and then tap the X that appears on iPhone X.) Return to the Home screen and wait a few moments before launching the app again.

Another approach that sometimes works is to turn off iCloud Contacts and turn it on again. Go to Settings>Apple ID>iCloud, then turn off Contacts.  Unless you have a copy of your contacts stored elsewhere, you should then choose Keep on My iPhone/iPad.

Wait a few moments and turn Contacts on again in Settings.

Reset your device(s)

Never underestimate the power of a hard reset to resolve many iOS problems. To achieve a hard reset on iOS devices simply hold down the Power and Home buttons until the device turns off and the Apple logo appears. The device will restart and system processes will be refreshed, which sometimes fixes iCloud sync problems.

Google+? If you use social media and happen to be a Google+ user, why not join AppleHolic’s Kool Aid Corner community and join the conversation as we pursue the spirit of the New Model Apple?

Got a story? Drop me a line via Twitter or in comments below and let me know. I’d like it if you chose to follow me on Twitter so I can let you know when fresh items are published here first on Computerworld.

Apple, Cloud Storage, iCloud
Kategorie: Hacking & Security