Computerworld.com [Hacking News]

Making technology work for business

Apple reports on its forever war against App Store fraud

14 May, 2024 - 19:00

Apple’s ongoing fight against App Store fraud means the company has prevented more than $7 billion in potentially fraudulent transactions in the last four years. This makes it pretty clear that fraud is a big business, and everyone should be aware of the threat.

The growing App Store fraud business

The company today released its fourth annual report into App Store fraud, and it sheds real light onto the scale of the problems Apple faces:

  • In 2023, Apple prevented over $1.8 billion in potentially fraudulent transactions — up 20% since it first published an App Store fraud report in 2020 — and it has blocked over 14 million stolen credit cards since then.
  • The company stopped 3.3 million accounts from transacting again. 
  • Apple rejected more than 1.7 million app submissions during the year — not just for fraud, but for privacy and security failures, or poor/copied content.
  • The company terminated nearly 374 million developer and customer accounts and removed around 152 million ratings and reviews over fraud concerns.
  • Apple stopped more than 3.5 million stolen credit cards from being used to make fraudulent purchases in 2023.

These data points should serve as a clear warning to anyone in the app distribution business in and around the Apple ecosystem. They show that serious attempts to undermine personal and platform security are made constantly.

How Apple protects the App Store

Apple’s App Store is protected by a range of human and automated systems, including app review and malware checks. Apple says its 500-strong app review team checks around 132,500 apps every week.

The company’s systems also identify fraudulent customer and developer accounts, which often go hand in hand, with developers crafting convincing customer reviews to create trust in their apps. “These accounts tend to be bots that are created for the purposes of spamming or manipulating ratings and reviews, charts, and search results, which threaten the integrity of the App Store and its users and developers,” Apple explained.

The company also works to combat “fleeceware” apps, software that is relatively innocuous, but costs an unreasonable amount of money. The company’s developer notes warn against these: “We’ll reject expensive apps that try to cheat users with irrationally high prices,” Apple says.

What risks exist as third-party stores open in Europe?

As the first third-party iOS App Stores prepare to open for business in the EU, Apple’s report must be seen as a checklist for protection. Not only should alternative stores invest in robust monitoring against such attacks, but would-be customers must review what protections exist before sharing payment or any other details with them.

People using these stores will need to protect system security against malware and must also take the time to ensure any apps they do install are what they say they are. For example, we all know the market for people’s private data is vast. Apple has sung a mostly solo song about this and has made major investments to protect customer privacy and explain why it matters. 

There’s quite clearly a market in grabbing your data. And given that no two app stores are alike, customers will need to check each store’s privacy policy to ensure it is in line with what they expect. That’s particularly true since the Federal Trade Commission received roughly 1 million reports of identity theft last year.

Attackers are smart and sophisticated

The sophistication of attacks is also a matter of concern, particularly following a recent SonicWall Capture Labs report that explains how Android users face a scourge of malware-infested imposter apps — apps that pretend to be legitimate apps like Instagram, but are in fact socially engineered attacks.

Apple notes similar attempts. It said its teams have blocked attempts by fraudsters to distribute seemingly harmless puzzle apps that, once approved, turn out to be something else entirely, including fronts for illegal gambling and predatory loans.

Perhaps more frightening, particularly to less-experienced users, Apple said its App Store fraud teams have encountered financial service apps “involved in complex and malicious social engineering efforts designed to defraud users, including apps impersonating known services to facilitate phishing campaigns and that provided fraudulent financial and investment services.” 

It’s important to understand the scale of these attempts. Building apps costs time and money, so it matters that Apple removed/rejected around 40,000 apps engaged in such ‘bait and switch’ attempts last year. Anyone opening an alternative app store must be prepared to protect against such attacks.

It’s also worth pointing out that Apple has prevented more than 47,000 illegitimate apps available on what it calls “pirate storefronts” from reaching customers. This also protects developers against illegally cloned apps, or genuine apps into which malware has been woven.

Fighting the fight

There are lots of opinions concerning Apple’s struggles to protect App Store and customer security. While millions of customers seem pretty happy with it, some argue that Apple uses the fact it offers the safest online storefront as a competitive advantage. That’s not much of an argument.

Hopefully, future independent stores will turn out to be just as committed to user security as Apple seems to be, because it looks as if tens of thousands of fraudsters will be testing those stores to identify any weak points in security. Apple, meanwhile, continues to invest in tools and initiatives to address the ever-changing threat landscape. Be careful out there.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Application Security, Mobile, Mobile Apps

Microsoft looks to ease the shift to hybrid work with its Places app

14 May, 2024 - 17:33

As hybrid work arrangements become more common, it can be a challenge to get co-workers in the same place at the same time. With that in mind, Microsoft hopes to make it easier to coordinate in-person collaboration with the launch of Places, an app with features to help plan the time employees spend in the office. 

“Microsoft Places seems well-suited to help organizations address the challenges involved in supporting a hybrid workplace,” said Raúl Castañón, senior research analyst at 451 Research, part of S&P Global Market Intelligence. The transition to a hybrid work model has involved more than a reduction in office footprint, he said, resulting in a shift of the physical office to a “collaborative, flexible space,” with fluctuating occupancy.

With Places, workers can set their own proposed location schedule and display it for others to see. The information can be set using the Places app itself or in Outlook Calendar. They can also view the schedules of colleagues and managers, while managers can specify certain days when staffers are expected to be in the office.

Microsoft plans to integrate Places data with its Copilot generative AI (genAI) assistant later this year, the company said in a blog post Monday. This will enable suggestions about which days are best for individuals to come into the office, highlighting in-person meetings and the planned presence of managers and coworkers. 

A booking feature called “Places find” — also accessible from Outlook — lets users search for available rooms and desks. Here, images of rooms or desk pools are displayed on an office map, with details about available technology and other amenities. Users will receive reminders if they haven’t booked a room on a day they’re scheduled to be in the office. The upcoming Copilot integration will automatically find and book meeting spaces on behalf of users. 

Places is designed to make it easier for hybrid workers to connect with colleagues when in the office.

Microsoft

The AI features are the most eye-catching element in Places, said Jack Gold, founder and principal analyst at J. Gold Associates. 

“The ability to have an assisted intelligence agent ‘uncomplicate’ scheduling and peer connectivity issues will be well received, if it works as planned…,” he said. “Most of us have too many complicated interactions these days, and raising the needed flags for pending obligations is a welcome feature.”

An “expanded presence” feature enabled by Places is available across Microsoft 365 apps. It provides users with a glimpse of who’s in the office at the same time, making it easier for co-workers to arrange a quick face-to-face chat. Microsoft said all location data adheres to its privacy standards, with opt-in and opt-out controls accessible to admins and users.

 The “expanded presence” data is “aggregated and anonymized in reports about office occupation trends,” Microsoft said in the blog post.  

Places will also provide admins and facilities managers with analytics on office use, letting them see what space is available and what’s actually needed. Copilot, once integrated with Places, will provide suggestions on how best to manage and adapt spaces to meet demand. 

Adapting to new working patterns is a challenge facing many companies, said Gold. “While the pendulum has begun to swing back to in-office work for some organizations, after a rapid increase in remote work spurred by office closures during the COVID-19 pandemic, the reality for most is that they’ll be supporting some level of hybrid work going forward,” he said. “I don’t expect that to change anytime soon.” 

A recent 451 Research report indicated that organizations are investing in various technologies to enable a hybrid workplace, including contactless technologies such as biometrics and voice user interfaces (28%) and office space management technology (35%) that includes hot-desk reservations, IoT sensors and AI-driven analytics to optimize office capacity. 

“We expect these trends will continue throughout 2024, as a significant number of surveyed organizations say they will continue to invest in these technologies over the next 12 months,” said Castañón.

Microsoft Places competes with a range of products, including various hybrid work management features available in productivity and collaboration software suites from Zoom and Google. The Places app could make sense for customers that want to remain with Microsoft apps.

“There are a number of tools that offer equivalent capabilities to Places,” said Gold, but their integration into Microsoft 365 makes it “easier for enterprises to focus on staying in the Office environment rather than finding piecemeal approaches, or moving wholesale to a different productivity suite.”

Microsoft Places is available in public preview now for Teams Premium subscribers.

Generative AI, Microsoft, Microsoft 365, Remote Work

Adobe introduces AI assistant to help enterprises exploit data held in PDFs

14 May, 2024 - 14:26

Adobe has unveiled a new artificial intelligence assistant that integrates with its widely used Acrobat PDF software, promising to help enterprise workers save hours each week by making it easier to extract insights and information from digital documents.

Acrobat AI Assistant for enterprise is an add-on to Adobe’s Acrobat product that will enable employees to quickly generate summaries of lengthy, complex documents, get answers to specific questions, create initial drafts of written content, and navigate to relevant sections of a report through clickable citations.

Adobe employees in sales and in research and development are already using it internally, according to Toni Vanwinkle, Adobe’s Vice President of Digital Employee Experience. “AI assistant really helps our employees work smarter,” she said. “Unlocking information in the enterprise is really one of those key things to foster productivity for knowledge workers.”

Vanwinkle provided several examples of how Adobe staff are using the AI assistant to work more efficiently. The company’s research and development teams, she said, have cut down the time spent analyzing industry trends from hours to minutes by using the tool to automatically summarize technical documents. Adobe’s sales teams have also reduced by half the time needed to research responses to requests for proposals (RFPs) by using the assistant to pinpoint relevant information.

Acrobat already includes some AI features that any user can access, but the new enterprise assistant focuses specifically on common business use cases such as report querying, document navigation, content summarization and data extraction.

A conversational interface enables workers to extract information from PDFs and documents in other formats, including Word and PowerPoint, or generate summaries so they don’t have to read the whole thing.

Under the hood, the AI models powering the assistant are currently from Microsoft’s Azure OpenAI service, which provides enterprise access to OpenAI’s powerful language models like GPT-3. However, Vanwinkle said Adobe plans to take an “LLM-agnostic” approach in the future that will enable enterprise customers to plug in other large language models based on their specific needs. But any partner models would need to meet Adobe’s standards for ethics, security, and privacy, she noted.
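
As a rough illustration of what that looks like at the API level, here is a minimal summarization sketch against the Azure OpenAI service — the backend Adobe says currently powers the assistant. The endpoint variables, deployment name, and prompt are illustrative assumptions, not Adobe’s actual implementation.

import os
from openai import AzureOpenAI

# Endpoint and key come from the environment; both are placeholders for
# whatever your own Azure OpenAI resource provides.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def summarize(document_text: str) -> str:
    # "doc-summarizer" is a hypothetical deployment name for a GPT-4-class model.
    response = client.chat.completions.create(
        model="doc-summarizer",
        messages=[
            {"role": "system", "content": "Summarize this document in five bullet points."},
            {"role": "user", "content": document_text},
        ],
        temperature=0.2,  # a low temperature keeps summaries conservative
    )
    return response.choices[0].message.content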

Allowing an external LLM to process enterprise data held in potentially confidential documents could be seen as a security risk, but Adobe said that the AI assistant is governed by data security protocols.

To help enterprises successfully deploy the AI assistant, Adobe is providing best practices guides and customer success managers to advise companies on implementation, integration, and driving organizational change. The company is also assisting customers in setting up “communities of practice” that bring together AI champions from different functions to share knowledge and identify high-value use cases.

Shared responsibility

But even with the most advanced AI, Vanwinkle emphasized that human judgment remains essential and that the assistant is not meant to replace human workers but rather augment their capabilities. Users still need to carefully review and validate the AI’s outputs, especially for any externally facing content.

“We want to make sure that there’s always a human in the loop,” she said. “Understanding really strong prompting, understanding the documents, doing that verification process all help us with the hallucination issue” that sometimes causes AI systems to generate inaccurate or nonsensical information.

Adobe insists that when enterprises make use of Acrobat AI Assistant, they agree to use the features responsibly.

The company has been investing heavily in AI across its product lines in recent years, but the Acrobat AI Assistant for enterprise represents one of its most ambitious efforts yet to bring advanced AI capabilities to knowledge workers in large organizations. It comes as competition heats up in the suddenly crowded market for generative AI tools aimed at business users, with rivals like Microsoft and Google racing to integrate their own AI models into productivity software.

Looking further ahead, Vanwinkle said Adobe’s long-term vision is for AI to become a “superpower” that helps unlock employees’ full potential rather than a threat to their jobs. “The vision for me is to give employees assistance that can unlock and become superpowers for them to bring more valuable work to the enterprise and hopefully more quality to their own lives and more well-being,” she said.

The new Adobe Acrobat AI assistant for enterprises is generally available, the company said. Pricing details were not disclosed, but the tool will be sold as an add-on to Adobe’s existing enterprise Acrobat licenses on a per-user basis.

For now, the Acrobat AI Assistant only functions in English, but additional languages are “coming soon,” the company said.

Generative AI, Productivity Software

Google: Project Starline 3D meeting platform arrives next year

14 May, 2024 - 13:53

Three years ago, Google previewed Project Starline, a 3D video conferencing system that gives users the feeling that they’re in the same room. On Monday, the company announced plans to make the technology commercially available next year, promising a more immersive meeting experience than is possible with the current crop of videoconferencing applications.

“You can talk, gesture and make eye contact with another person, just like you would if you were in the same room,” Andrew Nartker, general manager for Project Starline at Google, said in a blog post.

Project Starline is the product of several years of research and “thousands of hours” of internal testing, Google said. It relies on a combination of technologies such as 3D imaging, computer vision, and spatial audio to replicate the feeling of being in the same physical space. At first, Project Starline required a large video booth; it has since been condensed to a smaller form factor that resembles a more traditional videoconferencing system.

Google is now working with HP to make the technology commercially available in 2025, the company said. Pricing has not been announced. 

“Google’s Project Starline is a transformational technology where those using the device actually feel like they are meeting in real life with someone else,” said Wayne Kurtzman, IDC’s research vice president for collaboration and communities. “I’ve heard enterprise partners who have used the device for the first time call it a ‘magic box.’”

Project Starline has several advantages over existing video software, according to Google. Greater levels of immersion help meeting participants be more attentive, Google said, referencing internal user research. The company also claimed meeting participants have better memory recall, are more likely to remember details of a “face-to-face” conversation, and experience less video-call fatigue.

“With more than half of meaning and intent communicated through body language versus words alone, an immersive collaboration experience plays an important role in creating authentic human connections in hybrid environments,” Alex Cho, president of personal systems at HP, said in a statement.

On first impression, the technology appears to have value for one-to-one virtual experiences, said Irwin Lazar, president and principal analyst at Metrigy, though adoption will likely hinge on cost.

Google isn’t the first to attempt realistic videoconferencing, said Lazar, who pointed to systems developed by vendors such as Cisco, Tandberg, Polycom, and HP. However, these telepresence devices failed to gain widespread adoption due to the high cost of deployment, he said. 

“So, I’ll reserve some judgement [on Project Starline] until we see the cost,” said Lazar. 

Even so, he pointed to continued business investment in innovative video meeting technologies, such as multi-camera systems and center room cameras that aim to help remote participants engage with one another more effectively. “I do expect that there will be interest in piloting this technology — especially if it’s available within existing meeting apps like Google Meet and Zoom Meetings,” he said.

Collaboration Software, Google, HP, Videoconferencing

OpenAI rival Anthropic launches Claude chatbot in Europe

14 May, 2024 - 12:03

OpenAI competitor Anthropic has launched its AI chatbot, Claude, in Europe, a region known for its challenging regulatory environment.

Claude supports multiple European languages, including French, German, Spanish, and Italian, enhancing multilingual interactions, Anthropic said in a statement. Earlier this year, the company also launched the Claude API in Europe.

Significantly, this comes a day after OpenAI announced a new desktop version of ChatGPT and an upgraded flagship model, GPT-4o, which allows interactions via text, voice, and visual prompts.

European customers now have access to Claude.ai, the web-based version, the Claude iOS app, and a Claude Team plan, Anthropic said. The team plan will provide businesses with secure access to Claude’s advanced AI capabilities and the Claude 3 model family, the company said.

Claude.ai and the Claude iOS app will be available for free. Users will be able to subscribe to Claude Pro to access advanced models. For businesses, a Team plan will be available at €28 (approx. $30.21) + VAT per user monthly, requiring a minimum of five users.

Anthropic is backed by Alphabet’s Google and Amazon.

European regulatory challenges

Europe is known for its strict regulations governing tech companies. In April this year, EU lawmakers reached a provisional political agreement on what could be the world’s first set of regulations for AI by a major legislative body.

In January, UK and EU regulators scrutinized Microsoft’s investments in OpenAI, suggesting a potential review. Although later reports indicated that Microsoft might not face any problems, the situation underscored the regulatory hurdles that companies face in the EU.

Things can become even more complex when competition gets involved. For instance, Google Cloud, along with AWS and Europe-based Cloud Infrastructure Services Providers in Europe (CISPE), recently protested against Microsoft’s cloud software licensing practices in the EU.

“Claude’s rigorous adherence to these stringent European regulations is imperative,” said Thomas George, president of CyberMedia Group. “This commitment not only elevates the standards of data privacy and security but [potentially] positions Claude as a benchmark for responsible AI use within the industry. On the cybersecurity front, Claude’s integration introduces significant challenges and opportunities. Such AI models’ extensive use of data highlights the critical need for robust security measures.”

Efforts to expand amid competition

Anthropic’s expansion to Europe could mark its latest growth strategy following several earlier announcements this year.

In March, the company upgraded its flagship models to a new 3.0 standard, enhancing performance across a range of common tasks and increasing processing speeds.

Anthropic now offers three versions of Claude — Opus, which is fully featured; Sonnet, a mid-tier option; and Haiku, a lightweight version. Each model has varying performance scores across different tasks, with Sonnet and Haiku sacrificing some accuracy for other benefits like speed.

All this comes as Anthropic continues to face stiff competition from rivals. George pointed out that in Europe, Anthropic’s entry can significantly disrupt the existing balance. “With its advanced capabilities, Claude is set to challenge major incumbents like OpenAI and Google, fostering a competitive environment that could greatly benefit European businesses,” George said. “This competition is expected to catalyze innovations, leading to superior AI solutions particularly tailored for sectors such as finance, healthcare, and customer service.”

Generative AI

OpenAI announces new multimodal desktop GPT with new voice and vision capabilities

13 May, 2024 - 22:43

After weeks of speculation, ChatGPT creator OpenAI announced a new desktop version of ChatGPT, a refreshed user interface, and a new flagship model called GPT-4o that allows consumers to interact using text, voice, and visual prompts.

GPT-4o can recognize and respond to screenshots, photos, documents, or charts uploaded to it. The new GPT-4o model can also recognize facial expressions and information written by hand on paper. OpenAI said the improved model and accompanying chatbot can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, “which is similar to human response time in a conversation.”

The previous versions of GPT also had a conversational Voice Mode, but they had latencies of 2.8 seconds (in GPT-3.5) and 5.4 seconds (in GPT-4) on average.

GPT-4o now matches the performance of GPT-4 Turbo (released in November) on text in English and code, with significant improvement on text in non-English languages, while also being faster and 50% cheaper in the API, according to OpenAI Chief Technology Officer Mira Murati.

“GPT-4o is especially better at vision and audio understanding compared to existing models,” OpenAI said in its announcement.

During an on-stage event, Murati said GPT-4o will also have new memory capabilities, giving it the ability to learn from previous conversations with users and add that to its answers.

Chirag Dekate, a Gartner vice president analyst, said that while he was impressed with OpenAI’s multimodal large language model (LLM), the company was clearly playing catch-up to competitors, in contrast to its earlier status as an industry leader in generative AI tech.

“You’re now starting to see GPT enter into the multimodal era,” Dekate said. “But they’re playing catch-up to where Google was three months ago when it announced Gemini 1.5, which is its native multimodal model with a one-million-token context window.”

Still, the capabilities demonstrated by GPT-4o and its accompanying ChatGPT chatbot are impressive for a natural language processing engine. It displayed a better conversational capability, where users can interrupt it and begin new or modified queries, and it is also versed in 50 languages. In one onstage live demonstration, the Voice Mode was able to translate back and forth between Murati speaking Italian and Barret Zoph, OpenAI’s head of post-training, speaking English.

During a live demonstration, Zoph also wrote out an algebraic equation on paper while ChatGPT watched through his phone’s camera lens. Zoph then asked the chatbot to talk him through the solution.

While the voice recognition and conversational interactions were extremely human-like, there were also noticeable glitches in the interactive bot where it cut out during conversations and picked things back up moments later.

The chatbot then was asked to tell a bedtime story. The presenters were able to interrupt the chatbot and have it add more emotion to its voice intonation and even change to a computer-like rendition of the story.

In another demo, Zoph brought up software code on his laptop screen and used GPT-4o’s voice capabilities to have it evaluate the code — a weather-charting app — and determine what it was. GPT-4o was then able to read the app’s chart and identify data points on it related to high and low temperatures.

From left to right, OpenAI CTO Mira Murati, head of Frontiers Research Mark Chen, and head of post-training Barret Zoph demonstrate GPT-4o’s ability to interpret a graphic’s data during an onstage event. 

OpenAI

Murati said GPT-4o’s text and image capabilities will be rolled out iteratively, with extended “red team” access starting today.

Paying ChatGPT Plus users will have up to five times higher message limits. A new version of Voice Mode with GPT-4o will arrive in alpha in the coming weeks, Murati said.

Model developers can also now access GPT-4o in the API as a text and vision model. The new model is two times faster, half the price, and has five times higher rate limits compared to GPT-4 Turbo, Murati said.
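
As a rough sketch of that text-and-vision access — assuming only the publicly documented chat completions endpoint, with a placeholder prompt and image URL — a GPT-4o call through OpenAI’s Python SDK might look like this:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask GPT-4o to read a chart image, echoing the onstage weather-app demo.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What are the high and low temperatures on this chart?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/weather-chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)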

“We plan to launch support for GPT-4o’s new audio and video capabilities to a small group of trusted partners in the API in the coming weeks,” she said.

Zoph demonstrates using his smartphone’s camera how GPT-4o can read math equations written on paper and assist a user in solving them.

OpenAI

What was not clear in OpenAI’s GPT-4o announcement, Dekate said, was the context size of the input window, which for GPT-4 is 128,000 tokens. “Context size helps define the accuracy of the model. The larger the context size, the more data you can input and the better outputs you get,” he said.

Google’s Gemini 1.5, for example, offers a one-million-token context window, making it the longest of any large-scale foundation model to date. Next in line is Anthropic’s Claude 2.1, which offers a context window with up to 200,000 tokens. Google’s larger context window translates into being able to fit an application’s entire code base for updates or upgrades by the genAI model; GPT-4 had the ability to accept only about 1,200 lines of code, Dekate said.

An OpenAI spokesperson said GPT-4o’s context window size remains at 128k.

Mistral also announced its LLaVA-NeXT multimodal model earlier this month. And Google is expected to make further Gemini 1.5 announcements at its Google I/O event tomorrow.

“I would argue in some sense that OpenAI is now playing catch-up to Meta, Google, and Mistral,” Dekate said.

Nathaniel Whittemore, CEO of AI training platform Superintelligent, called OpenAI’s announcement “the most divisive” he’d ever seen.

“Some feel like they’ve glimpsed the future; the vision from Her brought to real life. Others are left saying, ‘that’s it?’” he said in an email reply. “Part of this is about what this wasn’t: it wasn’t an announcement about GPT-4.5 or GPT-5. There is so much attention on the state-of-the-art horse race that for some, anything less than that was going to be a disappointment no matter what.”

Murati said OpenAI recognizes that GPT-4o will also present new opportunities for misuse of its real-time audio and visual recognition capabilities. She said the company will continue to work with various entities, including governments, the media, and the entertainment industry, to try to address the security issues.

The previous version of ChatGPT (built on GPT-4) also had a Voice Mode that used three separate models: one transcribes audio to text, another takes in text and outputs text, and a third converts that text back to audio. That pipeline, Murati explained, can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or expressed emotion. GPT-4o, however, uses a single end-to-end model across text, vision, and audio, meaning all inputs and outputs are processed by the same neural network for more of a real-time experience.
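
The architectural difference is easiest to see as data flow. The sketch below is purely schematic — the stage functions are stand-ins for separate neural networks, not real OpenAI interfaces:

# Old Voice Mode: three separate models chained together. Anything text
# cannot carry (tone, laughter, who is speaking) is lost at each seam.
def transcribe(audio: bytes) -> str:
    return "transcript"                  # model 1: audio -> text

def generate_reply(text: str) -> str:
    return f"reply to: {text}"           # model 2: text -> text (GPT-3.5/GPT-4)

def synthesize(text: str) -> bytes:
    return b"flat synthesized speech"    # model 3: text -> audio

def old_voice_mode(audio_in: bytes) -> bytes:
    return synthesize(generate_reply(transcribe(audio_in)))

# GPT-4o: a single network consumes and produces audio directly, so tone,
# multiple speakers, and background noise can survive end to end.
def end_to_end_model(audio_in: bytes) -> bytes:
    return b"expressive reply audio"

def gpt4o_voice_mode(audio_in: bytes) -> bytes:
    return end_to_end_model(audio_in)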

“Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations,” Murati said. “Over the next few weeks, we will continue iterative deployments to bring them to you.”

Emerging Technology, Generative AI

Apple makes a deal to open iPhone to Generation GenAI

13 May, 2024 - 15:47

Apple may have agreed to a deal to offer ChatGPT support within Siri on iPhones, according to the New York Times and Bloomberg. The deal might also eventually extend to other generative AI (genAI) services, including Google Gemini, though no additional agreements are yet in place. This is just the latest in a rash of stories exploring the company’s AI plans.

It isn’t clear exactly how Apple will approach this. It’s possible the company will offer iPhone users the choice to use these services to replace its Siri voice assistant, and we’re likely to learn more about its implementation plans at WWDC in June. These will likely extend across its platforms.

How to secure Generation GenAI

Apple’s challenge will be to maintain the user privacy and security it prides itself on offering on its devices, while also supporting third-party chatbots. I do wonder whether the company is investing in some form of OpenAI’s ChatGPT Enterprise product, which provides additional privacy and security protection features.

The New York Times report confirms the company felt it missed the boat on genAI in the first instance, but also suggests the signal importance with which Apple regards this technology now. Among other things, it claims the company’s decision to close down its multi-billion dollar Apple Car project was made in order to transfer resources to the company’s in-house genAI development efforts. 

The report also describes some of the challenges secretive Apple has in attracting leading AI talent, noting that some new hires subsequently quit as they felt constrained by the need for secrecy. 

Setting up for a better Siri

For Siri, the main issue Apple execs hope to resolve through the deployment of ChatGPT is the capacity to deliver more accurate contextual understanding.

At present, Siri has no understanding of context, which means it just isn’t good at handling tasks more complex than stating the weather or turning music on. ChatGPT lets you define numerous factors to provide much more granular information. The company could also reach similar deals with other companies to offer their takes on genAI. (Apple is thought to have been in talks with both Baidu and Google in this regard.)

Bloomberg reports three primary strands to the company’s attempts:

  • On-device genAI: Solutions that run on the device, no server required.
  • Cloud-powered AI: Cloud-based intelligence similar to that provided by ChatGPT.
  • Chatbot services: A more intelligent, contextually aware Siri. It’s possible Apple will send some or all Siri enquiries to a third-party service, Bloomberg said.

What is at stake

The need to protect its iPhone kingdom is the primary motivation at Apple. The company apparently feels that the use of genAI, and intelligent agents powered by that technology, could eventually supplant both iOS and the App Store. “…it has the potential to become the primary operating system, displacing the iPhone’s iOS software,” wrote the New York Times, citing company insiders. 

Apple won’t be fully dependent on third-party AI, as it also has its own solutions in development. These include improving Siri’s ability to handle tasks it already manages, while extending its feature set with new tools the company can then introduce as being more private than other services. In part, this will be because some of the processing will take place, as we’ve anticipated, on the device itself.

The report also confirms recent claims Apple is developing new silicon for use in data centers. The idea here will be to reduce server costs, improve energy efficiency and to bake privacy into the system in order to maintain user security.

Will WWDC be make or break for Cupertino?

It is important to note that the frequency of reports concerning Apple’s plans for AI is accelerating as we ramp toward the company’s big developer event, WWDC. This strongly suggests that Apple will introduce new components to its overall plan for AI at the show. 

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, Artificial Intelligence, Generative AI, iOS, iPhone, Siri

EC to include Teams as part of antitrust charges despite Microsoft concessions

13 May, 2024 - 14:27

The European Commission (EC) plans to bring antitrust charges against Microsoft for anticompetitive practices regarding its Teams product despite concessions the company has made so far to try to avoid them.

EU officials plan to include concerns over Microsoft’s workforce collaboration and teleconferencing app in the forthcoming antitrust charges against the technology giant, according to a report in the Financial Times published Monday.

Last July, the EC opened a formal investigation to assess whether Microsoft may have breached EU competition rules by tying or bundling its communication and collaboration product Teams to its popular business suites, Office 365 and Microsoft 365.

The EC declined to comment on the FT report Monday, but a spokesperson noted that its investigation into Microsoft’s potential unfair competitive practices over Teams is ongoing. Microsoft has made several concessions to try to avoid antitrust charges that are expected to come from the EC, apparently to no avail.

Conscious unbundling

Last month, the company revealed that its enormously popular workforce collaboration app — which reached over 300 million global users in 2023 — will now be sold separately from Microsoft Office worldwide. This extended its previous move to unbundle Teams from its productivity suites in the European Economic Area and Switzerland.

However, it appears this was not enough to convince EU officials that the move made the market fair for competition, which means the company still faces being fined up to 10% of its global turnover if found in breach of EU rules. Microsoft on Monday declined to comment on the FT report.

The origin of the EC Microsoft Teams investigation dates back to July 2020 when enterprise messaging application Slack, which has since been bought by Salesforce, originally filed a competition complaint against Microsoft.

That complaint alleged that the tech giant was engaging in the “illegal and anti-competitive practice of abusing its market dominance to extinguish competition in breach of European Union competition law” by “force installing it for millions, blocking its removal, and hiding the true cost to enterprise customers.”

This was around the same time apps such as Slack and Teams, which allow corporate workers in disparate locations to collaborate and share files via chat, exploded in popularity during the remote work mandates of the COVID-19 pandemic.

Industry continues to have questions

Apparently, despite Microsoft’s unbundling, rivals still have concerns that the company will make Teams run more seamlessly on its own software than theirs, according to the FT report. They also have concerns over data portability that could hinder existing Teams users from making the switch to rival apps, according to the report.

Indeed, questions remain for an industry that watches with keen interest whether Microsoft is sincere in “following the fairness principle in both letter and spirit,” noted Pareekh Jain, CEO of EIIRTrend & Pareekh Consulting.

“Apart from price bundling concerns,” the question also remains whether Microsoft is “facilitating users to freely transfer between videoconferencing apps with the right experience and performance,” he said.

“This case is important as it also will have an impact on bundling and unbundling of copilots and OpenAI-related software apps,” Jain added.

If and when the charges are filed, it would be the first time in about a decade that EU regulators have formally filed antitrust charges against Microsoft. The last series of anticompetitive investigations ended in 2013. To date, Microsoft has racked up 2.2 billion euros ($2.4 billion) in fines in the past decade for tying or bundling products together in a way that was deemed anticompetitive by EU regulators.

Microsoft Teams, Regulation

A glimpse at the powerful future of information

13 May, 2024 - 12:04

San Francisco-based Perplexity AI is currently the biggest threat to both OpenAI and Google. The startup, founded in 2022 by former Google and OpenAI employees, is a unicorn with a $1 billion valuation.

The reason: it’s an extremely well-designed hybrid of ChatGPT and Google Search, while being superior to both for most common information chores. 

You can use Perplexity AI as a multi-modal search engine. For example, you can feed it a long list of URLs, upload photos and PDFs, and add snippets of code. The uploads can be in any language, and it will return results in English unless you direct it otherwise. (The free version of Perplexity AI is fine for most users, but some of the more advanced features described here are available only in the Pro version, which costs $20 per month or $200 per year. The free version includes five Pro searches per day; the Pro version, 600.)

The results are selected and prioritized based on an improved version of Google’s PageRank, which favors more authoritative and reliable sources and uses heuristics and data-driven learning from past queries to improve accuracy and relevance. 
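
For context, the classic PageRank iteration that this ranking reportedly builds on fits in a few lines. Perplexity’s actual variant is proprietary, so what follows is only the textbook power-iteration algorithm:

def pagerank(links: dict[str, list[str]], d: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    # rank[p] approximates the chance a "random surfer" is on page p;
    # d is the damping factor from the original Brin/Page formulation.
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += d * rank[page] / len(outlinks)
        rank = new
    return rank

# Tiny example web: the page everyone links to ends up ranked highest.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))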

You can accept its default mode of searching the entire internet, or tell it specifically to search only academic papers, Wolfram Alpha, YouTube or Reddit — or use only the content you type, paste or upload. It then reads the top results and provides a genAI-produced summary using models like GPT-4, Claude, and Mistral Large, with links to the pages where it got its facts. 
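
That retrieve-then-summarize flow reduces to a simple pattern, sketched below with hypothetical helpers: search(), fetch(), and llm() stand in for a search index, a page fetcher, and any of the models named above.

# Hypothetical stand-ins so the sketch runs on its own; a real system would
# plug in a search index, an HTTP fetcher, and an LLM API here.
def search(query: str) -> list[str]:
    return ["https://example.com/a", "https://example.com/b"]

def fetch(url: str) -> str:
    return f"page text fetched from {url}"

def llm(prompt: str) -> str:
    return "summary with inline citations [1][2]"

def answer_with_citations(query: str, k: int = 5) -> str:
    urls = search(query)[:k]  # top-k ranked sources
    numbered = [(i + 1, u, fetch(u)) for i, u in enumerate(urls)]
    context = "\n\n".join(f"[{i}] {u}\n{t}" for i, u, t in numbered)
    return llm(
        "Answer using only the numbered sources and cite them inline "
        f"like [1].\n\nSources:\n{context}\n\nQuestion: {query}"
    )

print(answer_with_citations("Are chemtrails real?"))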

A new query starts a “Thread,” so you can ask follow-up questions without repeating the details. You can even say vague things like, “Can you elaborate?” and it will. The site presents follow-up questions, which you can trigger with a simple click; invites dialog and further investigation; and you can save the threads for later. 

It can also generate images, and in an interesting way. After any query, you can just click a “generate image” button, and it will produce one. 

Why Perplexity AI is the future of search 

Perplexity AI is not unique. It’s simply the most popular and probably the best in the category of tools that combine search with large language model (LLM) chatbots. (I previously recommended another tool in this space, phind.com.)

The leading brands will soon converge on Perplexity’s space. OpenAI is rumored to be building a search engine. Apple’s Siri is thought to be getting the addition of AI. Bing already uses ChatGPT. And Perplexity-like startups abound — for example, Subtl.ai explicitly positions its tools as a kind of “enterprise Perplexity AI” that keeps corporate data private (Perplexity has its own enterprise offering, called Perplexity Enterprise). 

It’s likely that within a year, the AI chatbots will have search, the search engines will have AI and the voice assistants will have both search and AI. 

This strikes me as an improvement all around.

Where does Perplexity go from here? 

Perplexity is currently beta testing a new feature called “Perplexity Pages,” which is a pretty amazing idea that combines search, AI and crowdsourcing. 

Pages is a text editor, where Perplexity produces a draft article on a topic chosen by the user. The user then edits and enhances the article by removing passages, doing additional queries on key points, adding pictures and other interventions. Once satisfied, the user then “posts” the article for other users to see in a social network context (users will get a feed of completed Pages articles). 

Perplexity Pages sounds like a kind of AI-fueled cross between Reddit, Wikipedia and Twitter.

Perplexity AI currently offers a “Discover” tab, where AI-generated articles are curated. Presumably, the company wants to replace or augment this feed with user-modified articles. With enough contributors, Perplexity AI may be inspired to allow users to follow other users, like a social network, blogging platform or newsletter service.

Something like this could replace social networks for people who care about learning and better information. 

Where Perplexity should go from here

If Perplexity AI really wanted to stay ahead of the curve, it would launch smart glasses that give audio and visual access (via a camera) to the service, all controlled by the mobile app.

At present, this is possible but not desirable. Brilliant Labs’ Frame smart glasses use Perplexity, among other services. The problem is that Brilliant Labs is an undercapitalized startup offering unacceptably dorky glasses that don’t work as well as they could. Frame glasses make you look like a cartoon character, such as Dexter from “Dexter’s Laboratory.”

The other problem is Perplexity’s computer voice feature, which is available in the mobile app to Pro users. The company offers four voices — two male and two female — and they’re all pretty good, but not quite good enough.

Pi.ai, which has a voice interface via the web and mobile versions alike, offers far more life-like voices. 

By combining socially acceptable glasses, the option for user prescriptions, high-quality microphones, speakers, and sensors (like Ray-Ban Meta glasses), and great computer voices like Pi.ai’s, Perplexity could own the immediate future of AI glasses, which is growing into a killer category.

Another area Perplexity AI could dominate is fact-checking. Because its version of PageRank is so good, Perplexity AI is already a great place to do everyday fact-checking. By simply adding a link to a story and asking Perplexity AI if the information in the article is true — or just asking if a claim is true — the site does an outstanding job, most of the time, of debunking disinformation and setting the context for the information.

For example, I asked Perplexity AI: “Are chemtrails real?”

It replied: “Chemtrails, as described by conspiracy theories, are not real. The term “chemtrails” is often used by conspiracy theorists to suggest that the visible trails left by aircraft in the sky are composed of chemical or biological agents deliberately sprayed for nefarious purposes. However, extensive scientific research and expert analysis have debunked these claims.”

It then went on to describe what contrails are, offered up some of the most reputable studies on the issue (with links to the papers), then a little context about the false conspiracy theories and where they come from.

This kind of instant quality fact-checking is what the world needs right now. 

Perplexity AI should create a browser — or at minimum, a browser extension — that automatically debunks all information displayed in that browser. (Perplexity does offer a browser extension, but it doesn’t debunk every false thing you’re looking at.) I’d also like to see this deployed on social networks and other sources of disinformation.

Right now, Perplexity AI is the state of the art for AI-based search. It’s likely that such tools might one day become a banality. In the meantime, there’s no better or more useful AI-based chatbot for most business people.

‘One True Answer’ vs. reality

Years ago, with voice assistants and appliances like the Amazon Echo coming into use, we thought we’d be forced kicking and screaming out of the Search Engine era and into the One True Answer era, in which queries would take a stab at a single definitive answer, thereby handing search engine companies like Google the power to determine what’s true and what’s false (whether their answers were correct or not).

The rise of tools like Perplexity AI and an industry seeking to copy or compete with them on their terms brings optimism that, instead of the One True Answer dystopia we feared, our information appliances and services can instead give us a balanced answer from multiple reputable sources specified in links and invite us into a back-and-forth conversation about the information. 

This is better than a single answer, better than a search result full of links and better than a ChatGPT-like chatbot, prone to confidently asserted hallucinations.

While OpenAI’s ChatGPT kickstarted the LLM-based generative AI revolution, Perplexity AI may be the company that leads the way into the future we really need. 

Emerging Technology, Generative AI, Technology Industry, Web Search

Apple updates its Platform Security Guide

10 May, 2024 - 17:53

Apple’s head of security engineering and architecture, Ivan Krstić, this week announced the publication of what should be essential reading for Apple admins and security pros — the newly updated Apple Platform Security guide. (Among other things, Krstić also leads Apple’s war against surveillance hackers.)

The first update since 2022, the guide is currently being translated into local language versions, so it might not yet be available on your local Apple site. You can get it in American English directly from the US site, and you’ll know you’ve found the latest version because the May 2024 publication date is visible at the bottom of the front page.

What is the Platform Security Guide?

“This documentation provides details about how security technology and features are implemented within Apple platforms. It also helps organizations combine Apple platform security technology and features with their own policies and procedures to meet their specific security needs,” Apple says in the introduction to the 210-page document. (It’s interesting to note that the 2019 edition ran to 157 pages.)

Open it up and you’ll find updated information, along with the addition of new sections addressing several topics, including App Store, WidgetKit, and Lockdown Mode security. The latter doesn’t explain much we didn’t know already, but puts the protection into context and links to the most recently updated information concerning that mode. The document has also been brought up to speed with additional information concerning start-up security on the latest Apple Silicon devices and harmonizes links to the company’s security reporting pages.

I expect in the future it might further extend to sharing information pertaining to server chips from the company, if that plan turns out to be true.

What’s new in the Platform Security guide?

Some particular highlights include a better explanation of the company’s built-in malware protection system, XProtect, and a little added insight into how App Store security works. 

How XProtect works remains something of a black box, but the latest iteration of the guide does shed a little light on what’s happening:

“Should malware make its way onto a Mac, XProtect also includes technology to remediate infections. For example, it includes an engine that remediates infections based on updates automatically delivered from Apple (as part of automatic updates of system data files and security updates). This system removes malware upon receiving updated information, and it continues to periodically check for infections; however, XProtect doesn’t automatically restart the Mac. In addition, XProtect contains an advanced engine to detect unknown malware based on behavioral analysis. Information about malware detected by this engine, including what software was ultimately responsible for downloading it, is used to improve XProtect signatures and macOS security.”

As for App Store security, EU readers will note that this section hasn’t yet been updated to include what security Apple provides around purchases made from third-party stores. That’s likely to make interesting reading once it does appear. But the document does explain the five different security processes that govern apps sold through the company’s own App Store: automated malware scans, human review, manual checks, user reviews, and processes for correction and removal of bad or scam apps.

Under the EU sideloading scheme, Apple will only be able to ensure malware scans and respond to user feedback; third-party app providers will deliver (and presumably in some cases, fail to deliver) the other security processes.

Who is the guide for?

This really is essential reading for anyone who wants to better understand Apple security. That means Apple admins as well as developers, security researchers, customers — anyone who really wants to get to grips with the information it offers.

Those already familiar with the document shouldn’t expect much; while there are some new sections (and dozens of sections have been updated), many of those changes are relatively small. (Some of the information about recently introduced security tools for Messages may be of interest, however.)

Given the scale and complexity of the Apple platform ecosystem, it seems likely some small tidbits of new information will be found. 

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, iOS Security, Mac, MacOS Security, Privacy, Security, System Administration

Q&A: Insurance exec says AI nearly perfect when processing tens of thousands of documents

10 May, 2024 - 12:00

Nearly a year after rolling out a generative artificial intelligence (genAI) tool to help it process thousands of claims documents, global insurance claims management firm Sedgwick says the technology’s accuracy is nearly perfect.

Sedgwick, which operates in 80 countries, receives about 1.7 million pages of digital claims-related documents a day. The documents then go through an arduous vetting process by examiners who must decide whether they’re valid and how they should be handled.

In April 2023, Sedgwick unveiled a genAI tool called Sidekick to help with document summarization, data classification, and claims analysis. Sidekick uses OpenAI’s GPT-4, giving the company “an unlimited number of large language models to be created for varying purposes.”

In December, Computerworld spoke with the company’s global chief digital officer, Leah Cooper, about the challenges and purposes of the genAI rollout. At that time, Sedgwick’s Sidekick genAI technology had combed through 14,000 documents and was “shockingly good” at accurately spitting out summaries. Five months later, Cooper said more than 50,000 documents have been processed by Sidekick, and those documents have been evaluated by more than 1,000 examiners who reported a 98%-plus accuracy rate in the document summarizations.

Leah Cooper, Sedgwick’s global chief digital officer

Sedgwick

Computerworld revisited the topic with Cooper to ask her what she and the company have learned about genAI and its capabilities for reducing workloads and increasing document processing efficiencies.

Tell me about the kinds of documents you had Sidekick evaluate. Are they all medical-related insurance documents or do they run the gamut? And how long are they? “Medical documents in the workers compensation space were the core focus for our initial pilot, but we have since expanded to other types of documents and photos for validation. The average length of the documents in the testing phase was six to seven pages, but some were much longer, ranging from 25 to 30 pages.”

Tell me a little about Sidekick, how you developed it using OpenAI’s GPT-4, and how it connects to your document management system? “We developed Sidekick so that we can leverage the best of what genAI has to offer, giving our claims professionals an advantage in their daily work.

“If we can drive efficiencies by taking the busywork out of claims administration, and allow our people to focus on taking care of our customers, we can transform not only the process but the experience of having a claim. Our first initiative was to summarize documents that are received in support of a claim. Those were the basics: deploying ChatGPT into the Sedgwick environment so that our data stayed secure, and then evaluating a first use case to see if genAI could be successfully implemented.  

“We are thrilled to say that we did that successfully, incorporating over 50,000 documents during our pilot phase of Sidekick. We’ve just wrapped up our second phase, where we integrated Sidekick technology into our proprietary claims admin systems. This was a huge part of the productivity driver in our business.

“Now, how do we make this tool more relevant to our business? How do we drive productivity, decrease resolution time, and shift from a tactical application to a strategic and conceptual one? This is where we are uniquely positioned: it’s not one tool, it’s several capabilities pulled together to create a scalable, rapidly deployable platform in a unique way. If we combine generative AI with 50 years of understanding and refining how the claims model works and a best-in-class data science program, Sedgwick will pivot into a business model that transforms the claims industry.”

Last time we spoke, you told me that about 500 employees were using Sidekick. Did that number remain the same, and how did they use genAI in their work? “Now we have over 1,100 employees who have used this tool. Examiners are using this technology to summarize claims documents and expedite the entire process.”

What kind of feedback did employees using Sidekick give you – positive and negative (if both)? “Employees are actually asking us to make this product widely available more quickly. People who have used this solution are telling colleagues about the accuracy of summarizations and time saved on claims, fostering a culture of excitement around a new tool which hasn’t existed before.”

How did you determine the 98%+ accuracy rate? “In Sidekick’s pilot phase, we constantly asked for feedback from employees who were evaluating the output results of Sidekick. Examiners would be prompted to say whether the document was successfully summarized or whether something was missing.

“One key to defining a strong AI program is to set the expectation of outputs so users can understand what they are judging as part of this new process. By identifying what examiners are looking for and defining our output results, we were able to set a standard for what is deemed successful and what is not. 

“It’s incredibly important to obtain real feedback from the users who are ‘boots on the ground.’ Individuals who would normally create these summarizations manually were the ones who graded the AI. In the first few months, we did a lot of tweaking based on feedback and it took multiple iterations of prompt engineering to simulate what goes through an examiner’s mind.

“Once we nailed it down and were satisfied with initial testing scores, we rolled it out more widely to 1,100 employees, who ultimately scored Sidekick with the 98%+ accuracy rating. Business partner involvement in determining success is crucial to adoption of the technology. If the people who support these claims are not behind it, companies will not realize a successful engagement with the technology.”
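The grading mechanics Cooper describes boil down to simple human-in-the-loop aggregation: each examiner marks a summary as successful or flags what is missing, and the accuracy rate is the share of summaries graded successful. Here is a minimal illustrative sketch in Python; the field names and the 50-document sample are hypothetical, not Sedgwick’s actual system:

from dataclasses import dataclass, field

@dataclass
class SummaryReview:
    # One examiner's verdict on a single AI-generated summary (hypothetical schema)
    document_id: str
    successful: bool  # examiner judged the summary complete
    missing_items: list = field(default_factory=list)  # anything flagged as missing

def accuracy_rate(reviews):
    # Share of summaries graded successful by human examiners
    if not reviews:
        return 0.0
    return sum(r.successful for r in reviews) / len(reviews)

# Example: 49 of 50 summaries graded successful works out to 98%
reviews = [SummaryReview(f"doc-{i}", successful=(i != 0)) for i in range(50)]
print(f"{accuracy_rate(reviews):.0%}")  # prints 98%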

Can you explain the time savings and potential productivity increases genAI created? “This technology has created, and will continue to create, efficiency gains throughout our organization. We’ve been trying to find a way to automate tasks associated with claims administration that are not complex for business operations; they’re just necessary steps in a process. From my perspective, we want to focus on how we can recognize areas that need less attention (e.g., the simpler claims) and allow our examiners to really focus on the other types of claims that could benefit from faster and more collaborative engagement with our customers. If we can take busywork out of claims that need minimal investigation and instead direct adjusters to a claim that needs more attention, clinical resources, and time with customers, then we have changed the model for care throughout a difficult time in somebody’s life.”

You receive about 1.7 million claims-related documents every day, so the 50,000 documents handled by genAI is a relatively small percentage. What kind of methodology did you use to ensure these documents were accurately representative of the whole? “During this rollout, we didn’t select only documents that had certain attributes. We wanted the big picture. In the end, we worked with clients who were eager to collaborate with us on new technology-forward programs to analyze claims. We worked with clients directly to receive approval to use this new technology during our claims process and ran every document through the generative AI solution once they were on board. This gave us solid exposure to every type of claim and document, ensuring that genAI was thoroughly vetted for every scenario.”

What are some things you’ve learned from the project? “When we first integrated genAI into our operations, we had to learn quickly about how and when to best apply it. GenAI technology is evolving rapidly, seemingly on a daily basis. While this is transformative, it also means that we have a perpetual learning curve and challenge to understand the best application of genAI. It gives us an opportunity to work in the most agile environment imaginable: this is a very exciting, and overwhelming, time for leadership.

“Much of the media coverage around this topic has compared genAI to the introduction of the internet, and while the two are a bit different, there are some major similarities. Sedgwick and companies around the world are investing in these tech tools, but the companies that will see the most success are the ones that think critically about how to best leverage this technology. Identifying how genAI best fits into operational models can be challenging, as much of the tech is developed in a vacuum, so it is vital to pinpoint the best use cases for each tool. As these challenges evolve, we’re excited to see what new opportunities arise as innovation continues.”

Did you encounter anything unexpected throughout this trial of Sidekick? “We encountered a number of positive takeaways from this trial of Sidekick. Namely, the reliability of the genAI products currently available was much higher than we expected.

“Initial accuracy rates were staggeringly high, and they have proven to be incredibly useful for our claims management teams. In the past, tech tools ‘out of the box’ have not been this effective. That initial success highlights the rapid, ongoing innovation in artificial intelligence, and the current and potential use cases for it across business models and operations.”

What’s the next step in your genAI journey? Will you be instituting a larger rollout, or are you considering other areas of the business for its use and, if so, what are they? “The next steps … involve a focus on transforming workflows through the combination of new tech tools, along with data science, decision engines, and dynamic API outreaches. This combination of tools into a new platform will enable low-touch automation on simple claims like never before. Our operational model and understanding of the industry climate have already set the stage for our ‘must haves.’ We understand that better than anybody out there.

“However, our latest genAI release lets us recognize, ‘OK, we just learned this from new data or supporting documents. What does that mean to the claim?’ That’s where data science steps in: through our years of best-in-class operations and collaboration with clients, we have a data set that lets us know what happens next. Taking the information that we learned from generative AI and combining it with the analytical AI of predictive modeling, we can drive the advancement of a claim and provide prescriptive recommendations to our claims examiners.

“And that’s an important point here. The goal is never to replace the critical thinking and judgment calls that our people make so well. It’s to inform them with relevant, rapid data so that they can make those decisions even better. To put it simply, we will be able to say, ‘This claim has taught us this, so expect that.’

“We can address the next step in the claim lifecycle, and we can think further out to find an optimal path for resolution. Claim resolution time will recede, early adopters will reap the benefits of technology enablement more rapidly, and the experience of those we serve will improve. This triangle that brings together genAI, data science, and 50 years of knowledge has uniquely positioned us to make the claims model more intelligent and more situation-specific than ever before.”

Do you have any tips or recommendations for others considering rolling out genAI? “The AI sector has been funded and developed beyond the knowledge and understanding of how to use these tools, and companies are struggling to find a way to integrate this technology with legacy processes. It’s important to focus on meaningful ways to transform the way companies do business by adding in resources that help shape judgment calls without removing a human from the claims experience. 

“I would say that facilitating a strong partnership between tech and operations is key to figuring out this genAI journey. You have to approach the work as if no processes are sacred, and companies and employees can’t be afraid to leverage AI to find new innovations and efficiencies.

“And for a last tip, I like to call this ‘digital triage’: there cannot be an assumption that a blanket application exists that will be useful across an entire business model. Take our work with complex claims, for example: it is necessary for a human to be there to partner with the person submitting their claim, and by leveraging a tech solution such as genAI, the groundwork can be laid for humans to focus on the most important aspects of the process.”

Chatbots, Emerging Technology, Financial Services Industry, Generative AI
Kategorie: Hacking & Security

An awesome Android audio upgrade

10 Květen, 2024 - 11:45

Every now and then, I come across an Android customization concept so clever, so cool, so splendidly useful that I just can’t wait to share it with you.

Today, my fellow Android-appreciating animal, is one of those days.

The concept in question is a hefty and exceptionally practical upgrade for your Android audio experience. It brings a boost to the way you interact with sound on whatever Android device you’re using, no matter who made it or how old it may be. And it’ll take you less than a minute to get going (though, if you enjoy geeking out over details as much as I do, you may find yourself fine-tuning its setup and exploring advanced options within it for a while beyond that).

I’m tellin’ ya: It’s one heck of an improvement. And unlike most of the stuff we’re bound to hear about at Google’s grand I/O gala next Tuesday, it’s something you can start using this very second — and something with an impact that’ll be immediately obvious and genuinely advantageous all throughout your day, both for professional work-related purposes and for any after-hours audio adventuring.

Ready for a whole new level of Android aural pleasure?

[Psst: Don’t stop with audio. Treat yourself to all sorts of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of useful new tricks for your phone!]

Meet your Android audio enhancement

The upgrade of which we speak may seem simple on the surface, but don’t be fooled: This nifty little lift will affect all aspects of your Android-using experience and make your life easier — even saving you time and increasing your efficiency — countless times a day.

It’s a completely new take on the Android volume panel — y’know, the little slider that shows up whenever you tap your phone’s physical volume keys. Unless you’re a complete and total nerd (hello!), that’s probably a part of the Android interface you haven’t spent much time thinking about. But believe you me, once you see how much of a meaningful difference this improvement introduces, you’ll wonder how you went so long without it.

Our upgrade comes by way of a thoughtfully crafted app called Precise Volume 2.0. The app essentially replaces your standard volume panel interface with a totally different, much more customizable, and delightfully feature-rich alternative. (And, yes, you’d better believe this is another one of those wonderful control-claiming superpowers that’d be possible only on Android.)

But enough with the broad blathering. Precise Volume has six especially noteworthy benefits that I’d encourage you to consider:

  1. It empowers you to create all sorts of custom presets — specific sets of volume levels for media, ring and notification noises, call and alarm chimes, and even general system sounds — and then activate those with a single swift tap right from your regular volume panel pop-up.
  2. It includes easy options for automation, too, so you can instantly have your phone change its volume settings in any specific way anytime a particular app is opened, anytime a specific Bluetooth or wired device is connected, or even anytime a certain day and time arrives.
  3. It expands the standard system volume sliders to make ’em much more precise, with a visible zero to 100 scale that lets you get super-nuanced about the exact volume level you want for any given moment or purpose. You can even increase that scale, if you want extra control beyond that, and make your volume sliders operate on a zero to 1,000 step increment setup (or any other measure you like).
  4. It includes simple equalizer settings, which can make any audio you’re hearing sound noticeably better and can also be included in those presets and automations we just went over. These settings can even make your phone’s maximum volume higher, if you find things are occasionally too quiet.
  5. It adds in a not-yet-broadly-available Android-15-style volume panel expansion that makes all your audio controls even easier to manage from anywhere.
  6. And it brings that simple, standard Android design into the volume panel on any device — delivering quite an improvement over the murky mess present on phones by Samsung and other heavy-handed companies by default.
The Precise Volume panel in its initial form, at left, and fully expanded, at right.

Not bad, right? Now, there is one catch: For the full set of features, including the volume panel replacement, you’ll have to pony up six bucks as an in-app purchase for Precise Volume’s Pro version. But you can play around with some of the features even in the app’s free version. And if you like what you find, you’ll likely consider that one-time purchase well worth it.

So let’s get started, shall we?

60 seconds to smarter Android audio

First things first, on the simplest possible level:

  • Download Precise Volume 2.0 from the Play Store.
  • Open it up and accept the couple of permissions it requests. (They’re completely innocuous and required for parts of the app’s operation.)
  • Explore the app’s tabs and the options within ’em to play around with the presets and other features.

For the full-fledged volume replacement panel at the heart of this conversation, meanwhile:

  • Tap the Settings tab at the bottom of the Precise Volume interface.
  • Tap “Volume Button Override.”
  • Flip the toggle next to “Enabled,” then follow the prompts to upgrade to the app’s Pro version.
  • Once that’s done, tap that toggle again, then follow the prompt to allow Precise Volume the ability to display itself over other apps. That’s needed for reasons that should be obvious, and there’s no harm in allowing it.
  • Once that’s done, tap that toggle one more time — and this time, follow the prompt to enable the app as an Android accessibility service. That may sound scary, but it’s genuinely required for an app to be able to process your physical button presses in this way and effectively replace a part of the system interface. And Precise Volume is extremely up front about the exact reasons for all of its permission requirements and the fact that it doesn’t collect, store, or share any form of personal data.
    • After you select the app’s name, be sure to flip the topmost toggle to turn its accessibility service on — not the lower toggle to activate it as an accessibility shortcut.
    • And note that on a Samsung phone, this part of the process is unnecessarily convoluted. After selecting the option to enable the accessibility service, you’ll have to tap “Installed apps” and then find Precise Volume 2.0 in the list before you’ll see the relevant option.

And that’s it! Just head back to your home screen and then press your phone’s physical volume-up or volume-down key, and you should see the new Precise Volume panel appear in place of the standard volume pop-up. You can then get to the expanded bottom-of-screen interface by tapping the three-dot icon within the regular side-of-screen panel.

The Precise Volume panel in action — from zero to 100.

Beyond that, you’ll absolutely want to spend a bit of time in the “Manage Volume Presets” area of Precise Volume’s Settings tab. That’s where you can create those one-tap presets we talked about a minute ago.

Precise Volume’s presets make it possible to create complete audio settings for any specific scenario.

The “Automation” area of that same tab is where you can configure simple automations for what happens when specific apps are opened, specific devices are connected, or specific days and times occur — if, say, you want your media volume to bump up and your notification volume to go all the way down whenever you open Google Meet, or maybe your notification and ring volume to bump up during the workday and then drop back down in the evenings.

Precise Volume’s automations open the door to all sorts of step-saving smartness.

The “Behavior” section within that same tab is where you can control the precise nuance level of your volume slider, if you want to make the control even finer than the default zero to 100 scale.

Make your volume control as nuanced as you want with Precise Volume’s “Steps” setting.

And the Equalizer tab at the bottom of the Precise Volume app is the place where — well, y’know. All that equalizer stuff, including the volume booster, resides.

It may sound technical, but Precise Volume’s Equalizer area is full of simple, effective enhancements.

And there ya have it: an awesome Android audio upgrade. The power is now at your fingertips, and a smarter, more efficient, and more powerful way of interacting with audio on your phone will always be present and waiting to be called into action.

Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of efficiency-enhancing magic for your phone!

Android, Mobile, Mobile Apps, Productivity Software
Kategorie: Hacking & Security

Strict return-to-work policies may be driving tech workers away

9 Květen, 2024 - 23:21

Mandatory return-to-office policies appear to be pushing workers at major tech companies away from their employers, with measurable effects on how willing people are to stay at companies that require in-person attendance.

A study released this week by researchers at the University of Michigan and University of Chicago found that three large US tech companies — Microsoft, Apple and SpaceX — saw substantially increased attrition, particularly of their more senior personnel, when they implemented strict return-to-work policies in the wake of the COVID-19 pandemic.

While many efforts to study remote work and its effects on the economy have been based on survey data, the authors of this study used publicly available resume data, rather than self-reported preferences, to track the actual effects of back-to-work policies.

The three companies were chosen, according to the researchers, because they were among the first to implement return-to-office mandates as the pandemic eased in 2022, and because of their critical importance to the technology sector.

“We estimate nearly identical effects for all three companies despite their markedly different corporate culture and product gamut, suggesting the effects are driven by common underlying dynamics,” the report said.

One of the most striking findings, according to David Van Dijcke, one of the study’s authors and a PhD candidate in economics at the University of Michigan, was that workers in more senior positions were more likely than their juniors to leave a job rather than go back to the office.

“You might expect something different, where younger or more junior employees have matured in or started their first jobs in a remote environment,” he said. “But we didn’t find anything like that.”

The study also found a tight correlation between the rigor of a specific return-to-office policy and its effects on workers, Van Dijcke said. Apple’s one-day-per-week policy produced the smallest changes to its workforce, causing about a 4% decrease in senior employees as a share of the overall pool. SpaceX’s full-time, in-office requirement, by contrast, led to a larger than 15% decrease.

Van Dijcke, who is also employed at the risk analytics division of Ipsos Public Affairs, said that there were several possible reasons more senior tech workers might leave. Another study, he noted, tracked two offices of the same company, located mere blocks apart, and found that employees there got different benefits from working remotely and working in-office.

“They found that senior software engineers were more productive when they were not close to their co-workers, which I guess makes sense, right?” he said. “They’re skilled at what they do and benefit from fewer distractions. Whereas if you’re more junior you might benefit from distractions that give you valuable feedback.”

Recent surveys have tracked closely with Van Dijcke and his colleagues’ findings, including data published today by Gartner finding that one in three executives would quit over a return-to-office mandate, compared to less than 20% of non-executive workers.

Nevertheless, Gartner said that in-office requirements are getting stricter across the technology sector. Dell, for instance, has begun to issue employees color-coded “grades” based on their attendance.

Remote Work
Kategorie: Hacking & Security

Apple’s worst ad ever?

9 Květen, 2024 - 20:10

Editor’s note: On Thursday, after this story was published, Apple apologized for its ad and said it would not use it.

For years now, Apple has been as much a marketing company as a technology company. But after Apple CEO Tim Cook introduced the iPad Pro ad this week, you have to ask whether Apple has lost its marketing mojo. 

The ad features — if that’s the right word — musical instruments, artist tools, toys, and games being crushed by a huge hydraulic press and turned into a new iPad. All this happens to the musical background of Sonny and Cher’s 1971 hit, All I Ever Need Is You.

Ah, no, Apple, we need far more than just you.

I mean, seriously. How was this ad ever made? Who greenlighted it? And how in the world did it ever make it to the public? Did no one even look at it?

It’s not just me. Many other people hate — really hate — the ad, based on the groundswell of criticism that erupted online this week. 

At a time when artificial intelligence (AI) fans are pushing out artists, writers, and musicians, it’s offensive for Apple to imply we can do without all those old analog creative types. (I’m reminded of how James McNerney, former Boeing CEO, would call his company’s senior engineers “phenomenally talented a**holes.” If Apple ends up shoving out its creative people the way Boeing did its engineers, future Apple products might be as “good” as the Boeing 737 Max 9 and the 787 Dreamliner.)

No one will die from a bad Apple, but it’s a red flag whenever a company mistreats its top people. It’s a slap in the face to all of Apple’s in-house creatives, as well as the company’s many loyal creative users. 

The timing could not have been worse. People, especially creative pros, are understandably nervous about their work being stolen and losing their jobs to soulless AI algorithms and apps. This ad is that very fear crystallized into 60 seconds of video. 

This is not the kind of “Think Different” people want to see from Apple. Indeed, this is the opposite of what people want. As Michael J. Miraflor, chief branding officer for venture capital company Hannah Grey VC, tweeted, “I don’t think I’ve ever seen a single commercial offend and turn off a core customer base as much as this iPad spot.”

James Cook, marketing director at venture capital firm Molten Ventures, summed it up well on Xitter:

Apple’s new “Crush” ad (let’s call it “2024”) is a visual & metaphorical bookend to the 1984 ad.

1984: Monochrome, conformist, industrial world exploded by colourful, vibrant human.

2024: Colourful, vibrant humanity is crushed by monochrome, conformist industrial press.

Exactly so. 

Now, Cult of Mac members might argue we just don’t get it. That just tells me that Apple’s marketing policies, started by Steve Jobs with his personal reality distortion field, have been remarkably persistent. But that was then; this is now.

Watching objects that many people love, like pianos and guitars, get smashed is painful. This is how Apple can lose its marketing mojo.

And if you don’t think Apple could lose its popularity due to an ad, think again. Ask Coca-Cola how its 1985 New Coke campaign worked out. Hint: Many of you have never had a New Coke in your life. More recently, Peloton saw its stock price drop by 15% after an ad campaign showing a woman thanking her husband for getting her an exercise bike so she could become prettier. Yeah, that went over well.

Remarkably, the same concept was done much better by eBay, of all companies. In 2015, eBay produced an animated GIF of the evolution of the desk, in which desktop items from photos to calculators to calendars transform into icons on an ever-changing computer screen.

Unlike the Apple ad, this was fun, and it made the same point.

Even more annoying is that Apple could have used the same footage and music to make a brilliant ad. That’s what Reza Sixo Safai did on Xitter. Safai simply reversed the video, so from an iPad Pro, the crushed wreckage is restored to all its wonderful, delightful glory. 

The moral of this story is that no company can ever be so successful that it can’t blow it. If Apple is as smart as its fans think it is, it will dump this ad as soon as possible and try to find another way to get people excited about buying iPads. 

This was not the way. 

Apple, iPad, IT Strategy, Marketing and Advertising Industry, Vendors and Providers
Kategorie: Hacking & Security

Apple’s M4 chip really does compete with itself

9 Květen, 2024 - 19:17

It’s not often an Apple advertisement bombs, but when it does it generates international attention. That’s why everyone is now learning that Apple’s newly introduced iPads can be used to make music, create art, capture images, make movies and do much more, and the versatile tool isn’t just for creative pros, but for the rest of us, too. What makes this possible? Apple Silicon — and additional details have emerged since the introduction of the chip.

The big news concerns speed. Apple’s M4 is up to 45% faster than the M2 processor and 25% faster than the not-so-old (and still amazing) M3, according to the latest leaked benchmark data. That makes the chip faster than Qualcomm’s much-hyped Snapdragon X Elite — as well as Apple’s own M3 Pro.

Fresh data for the M4 on Geekbench 6 gives us these scores:

  • Single core: 3,767.
  • Multi core: 14,677.

These results are likely to have been captured from new iPads, which means the actual performance potential for the M4 on the Mac is likely to be even greater. 

While the test results come from one of the tablets with 16GB of RAM, we understand the processor’s clock speeds are lowered to make the device more energy efficient. Once someone gives Apple Silicon a proper heat sink, it should be able to run even faster — though even now, no other consumer chip can compete with the M4 in single-core performance. The only way, as they say, is up.

I’ve said before that Apple’s silicon design teams are moving so fast that they compete with themselves, and this continues to be true. Not only that, but it seems to be accelerating the introduction of these processors.

Here’s the evidence:

M1 processor (November 2020)

  • Single core: 2,304.
  • Multi core: 8,422.

M2 processor (June 2022)

  • Single core: 2,623.
  • Multi core: 9,803.

M3 processor (October 2023)

  • Single core: 3,027.
  • Multi core: 11,883.

M4 processor (May 2024)

  • Single core: 3,767.
  • Multi core: 14,677.

Sources: At the time of writing, Geekbench 6 appears to be offline. As a result, I’ve had to use Nanoreview to source the data. It is also true that these results vary; an earlier check on Geekbench showed single-core results between 3,595 and 3,810. Take them as a guide.
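Those headline percentages are easy to sanity-check against the Geekbench 6 scores listed above. A quick back-of-the-envelope calculation in Python, using the published figures (which, as noted, vary between runs):

# Geekbench 6 scores as listed above: (single-core, multi-core)
scores = {
    "M1": (2304, 8422),
    "M2": (2623, 9803),
    "M3": (3027, 11883),
    "M4": (3767, 14677),
}

def uplift(new, old):
    # Percentage gain of one chip over another, per core type
    (ns, nm), (os_, om) = scores[new], scores[old]
    return (ns / os_ - 1) * 100, (nm / om - 1) * 100

for old in ("M2", "M3"):
    single, multi = uplift("M4", old)
    print(f"M4 vs {old}: +{single:.0f}% single-core, +{multi:.0f}% multi-core")
# M4 vs M2: +44% single-core, +50% multi-core
# M4 vs M3: +24% single-core, +24% multi-core

That lines up with the quoted gains of up to 45% over the M2 and roughly 25% over the M3.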

It is worth noting the extent to which each iteration of the M-series chip leapfrogs the previous generation. The current highest-end iPad Pro with an M4 chip (the 1TB+ storage configurations) seems like it might even surpass the M3 Pro chip. That’s significant, I think.

The three towers

Apple’s teams seem to have the following goals: To make computationally powerful chips, make them extremely power efficient, and ensure they generate little heat so the processors can be used across a slew of different devices. 

Iteration by iteration of the Apple Silicon concept, realizing these goals lets Apple achieve significant environmental benefits, dramatically reducing the power required by its devices while also trimming the size of the batteries inside them — which means those devices deliver the same life between charges. It also means Apple’s designers can scale cheerfully across versions of the core architecture, all the way from the A-series chips in iPhones to the powerful Ultra series of processors the company also has the ability to create.

This wide scale remains a huge design opportunity for Apple, which can visualize and design systems that could not exist before. We’ve all seen talk about plans for folding devices; those are made far more possible as products get thinner and batteries become increasingly less likely to overheat. Within this, the influence of Arm (which itself recently announced record results) is tangible.

Where is this going?

What this means is that Apple Silicon is becoming a huge competitive advantage to the company. “The flexibility of Apple silicon architecture remains one of their biggest technical advantages over competitors,” said Ben Bajarin, Creative Strategies analyst, following Apple’s latest iPad launch.

Apple’s own CPU tests claim that the M4 chip, with 28 billion transistors, outpaces the M2 by 50%. Apple also compared the processor to an Asus Zenbook 14 OLED with an Intel Core Ultra 7 processor, arguing that its new iPads could deliver the same performance at a quarter of the power. That means you get more life between charges, and as enterprise apps appear, the iPad can only become a more attractive tool for anyone in the mobile enterprise.

They’ll become even more tempting once they do become foldable and processor sizes shrink even more.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple, CPUs and Processors, iOS, iPad, Tablets
Kategorie: Hacking & Security

OpenAI unveils ‘Model Spec’: A framework for shaping responsible AI

9 Květen, 2024 - 14:56

In a bid to improve accountability and transparency in AI development, OpenAI has released a preliminary draft of “Model Spec.” This first-of-its-kind document outlines the principles guiding model behavior in its API and ChatGPT, OpenAI announced in a blog post.

“We’re doing this because we think it’s important for people to be able to understand and discuss the practical choices involved in shaping model behavior,” the company said in the blog. “The Model Spec reflects existing documentation that we’ve used at OpenAI, our research and experience in designing model behavior, and work in progress to inform the development of future models. This is a continuation of our ongoing commitment to improve model behavior using human input and complements our collective alignment work and broader systematic approach to model safety.”

Model behavior plays a critical role in AI-human interactions: it covers how AI models respond to user inputs, encompassing aspects such as tone, personality, and response length. Shaping that behavior is a complex task, as models learn from diverse datasets and may encounter conflicting objectives in practice.

Shaping this behavior is still a nascent science, as models are not explicitly programmed but instead learn from a broad range of data, OpenAI said.

A three-tiered approach to shaping responsible AI

The “Model Spec” draft document outlined a three-pronged approach to shaping AI behavior. This document specifies OpenAI’s “desired model behavior” and how the company evaluates tradeoffs when “conflicts arise,” the ChatGPT creator added in the blog.

The first part of the Model Spec focuses on core objectives. These are broad principles that guide model behavior, including assisting users to achieve their goals, benefiting humanity, and reflecting positively on OpenAI. These foundational principles also ask model behavior to adhere to “social norms and applicable law.”

Beyond these broad objectives, the document also provides clear instructions, which the blog refers to as “rules.” These rules are designed to address complex situations and “help ensure the safety and legality” of AI actions. Some of these rules include following instructions from users, complying with laws, avoiding the creation of information hazards, respecting user rights and privacy, and avoiding the generation of inappropriate or NSFW (not safe for work) content.

Finally, the Model Spec acknowledges that there may be situations where these objectives and rules “conflict.” To navigate these complexities, the document suggests default behaviors for the AI model to follow. These default behaviors include assuming the best intentions from the users, being helpful without “overstepping” boundaries, and encouraging respectful interactions.
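OpenAI publishes the Model Spec as prose rather than code, but the three tiers it describes (broad objectives, hard rules, and fallback defaults) map naturally onto a small data structure. The sketch below is purely illustrative, reflecting the categories summarized above; it is our reading of the draft, not any OpenAI API:

from dataclasses import dataclass, field

@dataclass
class ModelSpecSketch:
    # Broad principles that guide behavior; can be weighed against each other
    objectives: list = field(default_factory=lambda: [
        "Assist users in achieving their goals",
        "Benefit humanity",
        "Reflect positively on OpenAI",
        "Adhere to social norms and applicable law",
    ])
    # Hard instructions for complex situations; not traded away for objectives
    rules: list = field(default_factory=lambda: [
        "Follow instructions from users",
        "Comply with applicable laws",
        "Avoid creating information hazards",
        "Respect user rights and privacy",
        "Avoid generating NSFW content",
    ])
    # Suggested behaviors for when objectives and rules conflict
    defaults: list = field(default_factory=lambda: [
        "Assume the best intentions from users",
        "Be helpful without overstepping boundaries",
        "Encourage respectful interactions",
    ])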

“This is the direction the models should ideally be going, and it’s great to see OpenAI making the effort with this new spec on how a model should behave according to the user with greater context and personalization, but more so ‘responsibly,’” said Neil Shah, VP for research and partner at Counterpoint Research, a global research and consulting firm.

OpenAI’s stress on transparency and collaboration

OpenAI, in the blog post, acknowledged the Model Spec as a “living document,” meaning that it is open for feedback and evolving alongside the field of AI.

“Our intention is to use the Model Spec as guidelines for researchers and data labelers to create data as part of a technique called reinforcement learning from human feedback (RLHF),” another document by OpenAI detailing the Model Spec said. “The Spec, like our models themselves, will be continuously updated based on what we learn by sharing it and listening to feedback from stakeholders.”

RLHF will drive how a model is tuned to actual human behavior, while also making it transparent, with set objectives, principles, and rules. This takes the OpenAI model to the next level, making it more responsible and useful, Shah said. “Though this will be a constantly moving target to fine-tune the specs, as there are a lot of grey areas with respect to how a query is construed and what the final objective is, and the model has to be intelligent and responsible enough to detect if the query and response is less responsible.”

The Model Spec represents a significant step toward achieving ethical AI. The company emphasizes the importance of building trust with users and the public, who are increasingly interacting with AI systems in their daily lives.

Emerging Technology, Technology Industry
Kategorie: Hacking & Security

Think Shadow AI is bad? Sneaky AI is worse

9 Květen, 2024 - 14:14

The many IT risks associated with Shadow IT — and especially Shadow AI and Shadow IoT — are well-known and understandably well-feared. But there is a new form of Shadow IT on the horizon: “Sneaky IT.” 

Shadow IT involves an end-user who bypasses IT and the enterprise security people and whips out a payment card to secure services elsewhere. That delivers a variety of unknown threats into the enterprise environment. But what happens when a trusted vendor adds new elements to its service — especially if it’s SaaS — and never mentions it? That poses a similar risk; both cases come down to visibility into the environment or, with Sneaky IT, the absence of it.

This has the potential to cause major compliance problems as well as data-control problems. When a regulator asks how an enterprise is using generative AI (genAI) and for what, a CIO needs to be able to answer that completely, truthfully, and honestly.

Sneaky IT makes that all but impossible. 

One of my favorite examples of Sneaky IT came in the form of Sneaky IoT. It was several years ago and involved a large midwestern manufacturing company. It had been using a handful of highly specialized suppliers for the massive pieces of equipment that ran its assembly line — and it knew the machines intimately.

Then the vendor decided to install a bunch of microphones in the machines to help predict repair problems before they happened. (Given that it was leveraging IoT mics and machine learning to do the audio analysis, I suppose it was both Sneaky IoT and Sneaky AI.)

One day, there was a malfunction. While waiting for the vendor’s repair crew to arrive, some of the assembly line workers tried dismantling the machines and discovered the microphones. The assembly line manager was livid that the vendor never informed him — let alone asked — before installing what he saw as spy devices in his environment.

GenAI tools are being snuck into products at a far greater pace. To be fair, vendors are generally announcing that they are now using AI — sometimes even when they really aren’t. But they are rarely sufficiently specific for an enterprise IT team to make an informed decision. And it’s certainly not specific enough to answer the questions of any regulator.

From the perspective of IT, the difference between Shadow AI and Sneaky AI is vast. IT can demand that employees and contractors not use unauthorized systems, but IT management has neither the tools nor the time to investigate Shadow abuses. Candidly, if an employee grabs their phone, accesses ChatGPT, and then uses that answer in their document, how could anyone in IT possibly know?

But Sneaky AI involves vendors IT is paying. Although IT can imply a threat for employees to be fired if they engage in Shadow AI, few employees believe that threat. If, however, a vendor gets the enterprise into compliance trouble because they didn’t deliver on all contractual disclosures and other obligations, the fear of not being renewed (and maybe getting sued) is quite real.

I have heard a wide range of vendors describe this Sneaky AI problem, but they label it Shadow IT. Beyond the clear definitional issue, falsely lumping the two together makes it more difficult to find a way to fix the problem. Maybe a complete fix is already beyond reach, but let’s at least try to minimize the nightmare slightly.

The possibility of Sneaky IT should be directly addressed in vendor contracts. The goal is to get enterprise IT decision-makers back to a place where they know what they are buying and installing in their systems. That means going well beyond after-the-fact notification, to demanding early notification and seeking permission.

No, this isn’t suggesting a major SaaS vendor will wait until all of its customers give their permission before rolling out a new capability. But enterprise IT has the right to opt out and say, in essence, “This isn’t what we bought. And it’s absolutely not what we want and we have no intention of paying for it.” 

From a contract position, the vendor must give advance notice (six months, a year?) of any material change in capabilities or methods. If the customer doesn’t want it, they must be able to get out of their current agreement with no financial penalty. If they signed a five-year contract and paid in advance for a discount and only one year has passed, they should be given a full refund of the remaining term.
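The arithmetic behind that opt-out is straightforward pro-rating. A minimal sketch, with purely hypothetical contract figures:

def prorated_refund(total_paid, term_years, years_elapsed):
    # Refund of the unused portion of a prepaid multi-year license
    if not 0 <= years_elapsed <= term_years:
        raise ValueError("elapsed time must fall within the contract term")
    return total_paid * (term_years - years_elapsed) / term_years

# Hypothetical: $500,000 prepaid for five years; vendor changes the deal after year one
print(f"${prorated_refund(500_000, 5, 1):,.0f}")  # prints $400,000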

As a practical matter, enterprises might get a lot of resistance adding such terms for license deals already in effect. But it is a reasonable ask, since it’s not IT that’s changed the terms of the arrangement. IT bought XYZ and the vendor decided to change it. The vendor broke the deal.

The simple solution is to immediately add such requirements to every RFP. If a vendor wants to bid for your business, they have to agree to this provision before the negotiations begin.

Security, Vendor Management, Vendors and Providers
Kategorie: Hacking & Security

How to set – and achieve – DEI goals in IT

9 Květen, 2024 - 12:00

Building and maintaining a diverse workforce improves organizations in myriad ways, including fostering innovation, enhancing problem-solving capabilities, attracting top talent, building customer understanding, and contributing to social responsibility. Doing so typically requires that companies adopt a comprehensive diversity, equity, and inclusion (DEI) strategy to not only hire workers from diverse backgrounds, but also provide an environment in which these workers want to stay.

A key part of that strategy is setting specific goals for hiring and retaining a diverse workforce. Without clear, measurable DEI goals that leaders are held accountable for meeting, it’s all too easy for companies to say they value diversity while maintaining the status quo in their own workforces.

It’s especially important for tech companies and IT departments to set DEI goals, because certain demographics, such as women and Black, Latino, and Indigenous workers, are underrepresented in technology roles. Diversifying their workforces often means that tech leaders must step outside their comfort zones and actively seek out workers from underrepresented groups — and potentially change the corporate or department culture so that all workers feel respected and can expect equity when it comes to pay, promotions, and career growth.

That’s why workforce experts say that company-wide DEI goals aren’t enough. Tech leaders need to set — and meet — DEI goals specifically for technology workers.

“DEI goals are important in IT because underrepresented populations have traditionally been excluded from opportunities in IT and cybersecurity,” said Maxwell Shuftan, director, mission programs and partnerships at SANS Institute.

That’s because they tend to exit the STEM (science, technology, engineering, and mathematics) learning path prior to high school, providing little exposure to IT or cybersecurity as a potential career option, he said. In addition, they may not have opportunities to identify their aptitude and interest in technology, and they typically have limited access to high-quality technical education and training.

“Promoting diversity, equity, and inclusion is not just good for business; it’s the right thing to do, ensuring all individuals have opportunities to succeed and creating a more robust IT and cyber workforce,” he said.

But how do companies set meaningful DEI goals in IT? Here’s advice from IT leaders and DEI experts.

Take stock

“Setting and achieving DEI goals in IT is about creating a roadmap that reflects the world we live in,” said Greg Vickrey, a director with global technology research and advisory firm ISG. “It’s a mix of art and science — understanding the human element of the teams and backing it up with data.”

Vickrey advised organizations to begin with a complete audit, asking tough questions about the current diversity environment in IT. “This approach helps identify gaps in representation across different groups including race, gender, disability, veteran status, etc.,” he said.

As a research and development leader in the technology industry, Hema Ramaswamy, SVP of engineering at data intelligence platform Tracer, runs DEI initiatives like a technology project. She starts with the “why” and develops a concrete plan with goals, progress metrics, feedback, and improvements.

“At Tracer, we started off with a demographic survey with the goal of identifying the demographic characteristics and background that comprise our team and looking for areas of improvement,” she said. “Oftentimes, we hear about DEI [in terms of] gender, race, religion, sexual orientation, age, etc. However, at Tracer, our survey showed that the diverse educational background is where we needed to focus to be equitable and inclusive.”

To achieve DEI goals in IT, tech leaders must be intentional about measuring progress and adopt a comprehensive approach, said Libby Hillenbrand, senior director, leadership development and DEI at Rocket Software. “This begins with a thorough assessment of not only the current demographics and representation mix within your workforce but also engaging your employees for feedback and ideas,” she said.

“Done together, this analysis serves as a foundation for targeted efforts and provides a baseline for measuring progress against industry benchmarks,” Hillenbrand said. “Keep your employees at the center and bring them along on the journey.”

Get specific

Armed with such data, leaders can develop concrete DEI goals that address weak areas.

For example, an IT department might set a goal to increase the representation of women in the software engineering team by 20% over the next 18 months, said Vickrey. Another example is to ensure that at least 30% of leadership roles are filled by individuals from underrepresented groups within the same timeframe.

“These are tangible targets that push us to think differently about recruitment, promotion, and development,” he said.

Other examples of measurable DEI goals include increasing the percentage of women and underrepresented minorities in the IT candidate pool by 40%, as well as sourcing 35% of IT products and services from businesses owned by women, minorities, or other underrepresented groups.
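Goals like these only bite if they are tracked in concrete headcount terms. As a hypothetical illustration (the team size below is invented), here is what the earlier example, “increase the representation of women on a software engineering team by 20%,” works out to, reading the 20% as a relative lift in share:

# Hypothetical 80-person software engineering team, 15 of whom are women
women, team = 15, 80
current = women / team              # 18.75% representation

target = current * 1.20             # a 20% relative lift: 22.5%
needed = target * team - women      # additional hires at constant headcount
print(f"current {current:.1%}, target {target:.1%}, "
      f"about {needed:.0f} more women needed at the same team size")
# current 18.8%, target 22.5%, about 3 more women needed at the same team size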

Measure progress and be prepared to adjust tactics

To gauge progress, tech organizations must establish and monitor metrics and key performance indicators (KPIs), said Rocket Software’s Hillenbrand. It’s crucial to have metrics around hiring and retaining women and underrepresented groups, including increasing the number of women and underrepresented groups in leadership roles and in specific geographies.

“Tech companies should also measure promotion rates and pay equity,” she said.

Tech Mahindra, an IT services and consulting company, has initiated targeted recruitment drives aimed at underrepresented communities, not just at the entry level but also in senior and technical roles, said Richard Lobo, the company’s chief people officer. “[This ensures that we are] challenging the status quo and fostering a culture of inclusivity from the top down,” he said.

Beyond the traditional headcount metrics, tech companies are increasingly tracking retention and promotion rates for members of underrepresented groups, as well as conducting employee sentiment analysis through surveys to gauge the effectiveness of DEI initiatives, according to Lobo. This data-driven approach allows leaders to identify gaps, make informed decisions, and continuously refine their strategies, he said.

“Accountability mechanisms have become increasingly sophisticated, with leaders using a mix of qualitative and quantitative metrics to measure progress,” Lobo said.

Esteban Gutierrez, CISO at software company New Relic, noted that it’s harder to incorporate trackable metrics for DEI outside of representational numbers. “You can set goals around this, but it’s more than just hiring [underrepresented] employees,” he said. “It’s not as easy to add metrics to concepts like inclusion or a sense of belonging.”

Gutierrez said mentoring members of underrepresented groups is key to supporting larger organizational DEI efforts. He advised team leads to set up metrics that they can track, such as “how often you [leaders] meet [with members of the underrepresented groups], whether you come prepared with talking points, set a conversation topic for the meeting, or whether you bring any specific work examples to discuss or walk through.”

Monitoring turnover rates can also be revealing, Gutierrez said. Leaders need to take time to analyze and understand the contributing factors of why people leave. IT naturally has a high turnover rate, so it’s important to create a culture of belonging and inclusion that can boost retention.

Hold IT leaders accountable

For tech leaders to hold themselves accountable for meeting DEI goals, they should regularly review progress against their metrics and solicit new approaches from their leaders and ambassadors, Hillenbrand said.

New Relic’s Gutierrez advised team leads to create opportunities for feedback and encourage input from all team members, especially during brainstorms, retrospective meetings, and systematic reflections. “Employee resource group meetings also provide an opportunity for organizational reflection and a space to discuss progress and what efforts still need to be made,” he said.

Another approach that hits closer to home is to tie executives’ pay to meeting DEI goals — and that includes IT leaders.

It’s one thing to say that diversity matters, but it is a completely different thing to have it become a part of your overall compensation, said Keyla Cabret-Lewis, vice president of DEI and talent development at Aflac.

“For several years, diversity goals have been part of our management incentive program,” she said. “As such, failure to reach our goals will result in lesser compensation for company leaders.”

The results? “Women hold more than half of leadership roles and 37% of senior management roles in the company,” Cabret-Lewis said. “In Aflac’s digital services (IT) division, the CIO is a woman, and more than half of her direct reports are women or people of color who serve as Aflac officers (vice presidents).”

Lessons learned

“In my 15 years of experience in the DEI space, I’ve learned that you need both a top-down and bottom-up approach,” Rocket Software’s Hillenbrand said. “Strong executive sponsorship is required, where leadership has the will to drive change. Leaders must be willing to invest in the mechanisms to measure progress, update processes and programming, while modeling the desired behaviors themselves. You must also capture the hearts and minds of employees by inviting them to be part of the solutions.”

To that end, tech organizations should establish programs that enlist the passion of their employees to help connect them to their companies’ cultures.

“These programs should work to provide educational resources, creative events, programs, and open forums, both internal and external, to raise cross-cultural awareness and reinforce their commitment to inclusion and belonging,” she said.

Tracer’s Ramaswamy agreed that DEI requires a commitment from the top — and that it should be run as an initiative where progress is tracked and altered for its effectiveness. “To advocate for the DEI program is an arduous task. It requires allies and partners top-down and across multiple different departments and levels,” she said.

DEI isn’t a one-off campaign but an ongoing commitment, said ISG’s Vickrey. There is no final destination; it’s a continuous journey — one that requires perseverance and adaptability.

“Early on, I learned that not every initiative will work for every team or every individual, so stay flexible and adaptive,” he said.

“Another lesson is the power of transparency,” he added. “If you fall short of goals, be honest about it and it will strengthen the IT team’s commitment to DEI. It’s all about trust, and trust comes from being candid with each other, through the wins and the setbacks.”

Diversity and Inclusion, IT Leadership, IT Management
Kategorie: Hacking & Security

Microsoft once again under fire over cloud software licensing

9 Květen, 2024 - 10:26

Microsoft’s licensing of its software and services in the cloud is getting it into more hot water in Europe. This time, a group of Spanish startups has called on regulators to investigate Microsoft’s behavior in the cloud marketplace.

The complaint, from La Asociación Española de Startups (AES) to the Spanish National Markets and Competition Commission (CNMC), accuses Microsoft of “anti-competitive practices” in the cloud marketplace.

The restrictive practices in the cloud marketplace are affecting both suppliers and cloud customers within the startup ecosystem in Spain, the association said.

AES, representing more than 700 startups in Spain, alleges that Microsoft is taking advantage of its dominant position in the market for operating systems (Windows) and office productivity software (Microsoft Office) to force the use of its cloud services, Microsoft Azure.

At issue are questions around data portability (moving information from one cloud platform to another) and restrictive contractual limitations on software licensing.

Technical and contractual barriers are limiting startup competition and innovation, according to AES. The association is calling on regulators to investigate its complaint and to act to ensure a more open, fair, and competitive marketplace for cloud services in Spain.

Microsoft denied any wrongdoing or market manipulation.

“Microsoft provides choice and flexibility for our customers to switch to another cloud provider at no cost, and our licensing terms enable our customers and other cloud providers to run and offer Microsoft software on every cloud,” a Microsoft spokesperson told Computerworld. “We will engage with the Spanish Start Up Association to learn more about its concerns.”

Cumulus

The Spanish complaint adds to a growing volume of similar complaints against Microsoft across Europe.

Last year, CISPE (Cloud Infrastructure Service Providers in Europe), a trade body for European cloud infrastructure providers, filed a complaint against Microsoft with the European Commission.

CISPE’s complaint alleged that anti-competitive practices such as discriminatory packaging, linking, and pricing are among the technical and economic barriers that made it difficult for customers to freely choose between cloud service providers.

Francisco Mingorance, secretary general of CISPE, told Computerworld that “not only the target, but also the practices [targeted in the Spanish complaint], present some overlap with our pending EU-level complaint.”

CISPE is holding talks with Microsoft aimed at “resolving ongoing issues related to unfair software licensing for cloud infrastructure providers and their customers in Europe”. Any remediations or resolution agreed ought to be public and apply across the sector, CISPE insists.

Egress fees

In the UK, telecoms regulator Ofcom has referred the public cloud infrastructure market to the UK’s Competition and Markets Authority for further investigation.

High fees for transferring data out, committed spend discounts, and technical restrictions are “making it difficult for business customers to switch cloud provider or use multiple providers”, according to Ofcom. The regulator is concerned that the business practices of market leaders Amazon Web Services and Microsoft could limit competition.

At issue are factors such as egress fees, the charges that customers pay to transfer their data out of a cloud. Hyperscalers – such as AWS, Google Cloud and Microsoft – set them at significantly higher rates than other providers.

“The cost of egress fees can discourage customers from using services from more than one cloud provider or to switch to an alternative provider,” according to Ofcom.
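To see why egress fees weigh on switching decisions, consider the per-gigabyte math. The rates below are purely hypothetical, for illustration only, and are not any provider’s published pricing:

def egress_cost(terabytes, rate_per_gb):
    # Cost to move data out of a cloud at a flat per-GB rate (1 TB = 1,000 GB)
    return terabytes * 1_000 * rate_per_gb

# Hypothetical: repatriating 50 TB at a hyperscaler-style rate vs. a cheaper rival
for rate in (0.09, 0.01):
    print(f"50 TB at ${rate:.2f}/GB -> ${egress_cost(50, rate):,.0f}")
# 50 TB at $0.09/GB -> $4,500
# 50 TB at $0.01/GB -> $500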

Technical barriers to interoperability and portability (factors that force customers to reconfigure their data and applications to work on different clouds) and committed spend discounts also figure in alleged vendor lock-in and restrictive practices in the cloud.

Microsoft, Microsoft 365, Microsoft Azure, Regulation
Kategorie: Hacking & Security

Businesses lack AI strategy despite employee interest — Microsoft survey

8 Květen, 2024 - 20:42

Generative AI (genAI) tools are becoming more common in the workplace, but business leaders are concerned that their organizations lack a strategy to deploy the technology across their workforce.

That’s according to a Microsoft survey of 31,000 employees in 31 countries, published in the company’s annual Work Trend Index report. 

The survey indicates strong demand from employees for access to genAI tools. Three-quarters of respondents use the tools in their jobs, the report claims, double the usage of just six months ago. Employees say AI saves time, enables them to focus on more important tasks, allows for more creativity, and lets them enjoy work more. And more than three-quarters (78%) of office workers use their own AI tools — a phenomenon known as bring your own AI (BYOAI).

Business leaders also see potential in the technology, with 79% of leaders surveyed believing AI use is needed for their organization to remain competitive. 

Microsoft itself has claimed several large-scale deployments of its own Copilot genAI assistant: Amgen, BP, and Koch Industries are among the enterprises that have purchased over 10,000 “seats” of Microsoft 365 Copilot, CEO Satya Nadella said during the company’s recent quarterly earnings call.

Not all large businesses are keen to dive in quickly, however. The survey found that 60% of leaders believe their organization’s leadership lacks the vision to roll out AI across their workforce.

“While leaders agree using AI is a business imperative, and many say they won’t even hire someone without AI skills, they also believe that their companies lack a vision and plan to implement AI broadly; they’re stuck in AI inertia,” Colette Stallbaumer, general manager of Copilot and Cofounder of Work Lab at Microsoft, said in a pre-recorded briefing.

“We’ve come to the hard part of any tech disruption, moving from experimentation to business transformation,” Stallbaumer said.

While there’s clear interest in AI’s potential, many businesses are proceeding with caution with major deployments, say analysts.

“Most organizations are interested in testing and deployment, but they are unsure where and how to get the most return,” said Carolina Milanesi, president and principal analyst at Creative Strategies.  

Security is among the biggest concerns, said Milanesi, “and until that is figured out, it is easier for organizations to shut access down.”  

As companies start to deploy AI, IT teams face significant demands, said Josh Bersin, founder and CEO of The Josh Bersin Company. Deploying genAI tools puts the onus on IT staff to ensure data quality and security standards are in place, as well as “getting up to speed on the European AI Act, implementing governance, and helping to standardize on vendors and tools, if possible,” he said. 

With all this groundwork required, it’s likely to take a year or more for businesses to develop a comprehensive strategy around genAI, said Bersin.

Does genAI generate business value?

Another sticking point is determining the value of AI investments and ensuring a return on them.

AI takes many forms, but genAI is the focus of most newer AI initiatives within organizations, according to a recent Gartner survey. The most common way employees interact with the technology is when it is embedded into existing productivity and line-of-business apps (34% of respondents), such as Microsoft’s 365 Copilot, Adobe Firefly, and many others. 

Other ways to interact with genAI include prompt engineering (25%), training bespoke genAI models (21%), and using standalone generative AI tools, such as OpenAI’s ChatGPT or Google’s Gemini (19%).

These genAI features don’t come cheap. In most cases, digital work app vendors charge an additional fee for generative AI features within their paid products. This can be as much as an extra $30 per user each month for Microsoft’s and Google’s business-focused AI assistants, while tools with a narrower focus can charge less: Slack AI, for example, costs a still-significant additional $10 per user each month.
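Those per-seat fees compound quickly at enterprise scale. As a rough sketch (the 1,000-user headcount is an assumption for illustration; the per-seat prices are the figures cited above):

```python
# Rough annual cost of genAI add-on seats at enterprise scale. The per-seat
# prices are the figures cited in the text; the 1,000-user headcount is hypothetical.

users = 1_000
full_assistant_per_month = 30  # e.g. a Microsoft- or Google-class AI assistant, USD/user
narrow_tool_per_month = 10     # e.g. a narrower tool such as Slack AI, USD/user

annual_full = users * full_assistant_per_month * 12    # $360,000 per year
annual_narrow = users * narrow_tool_per_month * 12     # $120,000 per year
print(f"Full assistant: ${annual_full:,}/year; narrower tool: ${annual_narrow:,}/year")
```

At that scale, even the cheaper tier is a six-figure annual line item, which helps explain why leaders want proof of productivity gains before committing.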

Alongside the challenges of measuring genAI’s impact, spending significant sums on training employees can also be seen as a risk.

Demonstrating the value of AI projects is cited as the biggest obstacle to AI adoption, according to the Gartner survey. “As organizations scale AI, they need to consider the total cost of ownership of their projects, as well as the wide spectrum of benefits beyond productivity improvement,” Leinar Ramos, senior director analyst at Gartner, said in a statement.

Microsoft’s survey paints a similar picture. The majority of leaders (59%) are unsure of their organization’s ability to quantify any productivity gains from employee use of AI.

“Cost is really where organizations are getting stuck,” Milanesi said, with companies unsure of what returns they can expect when deploying generative AI. 

Bersin said many organizations have seen productivity improvements in early trials of genAI tools, but a shift to broader, company-wide deployments requires greater consideration of the value it can deliver. “When it comes to massive purchases across the enterprise, I am sure there will be lots of discussions about ROI, because these tools are expensive,” he said.

In its report, Microsoft cited a six-month randomized controlled trial of 60 Copilot customers involving 3,000 workers. Results included an 11% reduction in emails read and 4% less time spent interacting with them, as well as a 10% increase in the number of documents edited in Word, Excel, and PowerPoint. The impact on the number of meetings was less clear; some companies saw an increase, others a drop.

There’s a tendency to focus on time saved when assessing genAI tools, said Milanesi, but that might not be the best approach. The real value is in improved quality of work and increased worker satisfaction, which drives “better engagement at work and, in turn, better work,” she said.

Where a tool boosts employee engagement, cost becomes less of a consideration. “Think about the cost of a worker quitting, or someone staying in the job and not being engaged,” she said.

It might be that some AI tools are better suited to certain job roles than others. “The question for any leader is to identify what level of AI is right for the talent. Like for PCs, not everybody needs the top of the line,” said Milanesi. 

Artificial Intelligence, Generative AI, IT Skills, IT Strategy, Microsoft
Category: Hacking & Security