Transhumanism

Follow the Data? Investigative Journalism in the Age of Algorithms

Singularity Hub - 3 hours 1 min ago

You probably have a picture of a typical investigative journalist in your head. Dogged, persistent, he digs through paper trails by day and talks to secret sources in abandoned parking lots by night. After years of painstaking investigation, the journalist uncovers convincing evidence and releases the bombshell report. Cover-ups are exposed, scandals are surfaced, and sometimes the guilty parties are brought to justice.

This is a formula we all know and love. But what happens when, instead of investigating a corrupt politician or a fraudulent business practice, journalists are looking into the behavior of an algorithm?

In an ideal world, algorithmic decision-making would be better than decision-making by humans. If you don’t program your code to discriminate on the basis of age, gender, race, or sexuality, you might assume those factors won’t be taken into account. In theory, the algorithm should make decisions based purely on the data, in a transparent way.

Reality, however, is not ideal: algorithms are designed by people and draw their datasets from a biased world, so hidden prejudices can lead to unintended consequences. Furthermore, overconfidence in algorithms’ performance, misinterpretation of statistics, and automated decision-making processes can make these decisions extremely difficult to appeal.

Even when decisions are appealed, algorithms are usually incapable of explaining “why” they made a decision: careful statistical analysis is needed to disentangle the effects of all the variables considered and to determine whether or not the decision was unfair. This can make explaining the case to the general public—or to lawyers—very difficult.

AI Behaving Badly

A classic example of recent investigative journalism about algorithms is ProPublica’s study of Broward County’s recidivism algorithm. The algorithm, which delivers “risk scores” assessing an accused person’s likelihood of committing more crimes, helps judges determine an appropriate sentence.

ProPublica found the algorithm to have a racial bias—it was more often incorrectly assigning high risk scores to black defendants than white. Yet Northpointe, the company that made the software, argued it was unbiased. The higher rate of false positives for black defendants could be due to the fact that they are arrested more often by the police.
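The dispute is really about which fairness metric you choose: a model can look equally accurate for two groups overall while still producing unequal false-positive rates. A minimal sketch with invented labels and predictions (not ProPublica’s actual data) shows the metric the journalists measured:

```python
# Sketch: the false-positive rate ProPublica compared across groups.
# All labels and predictions below are invented for illustration.

def false_positive_rate(y_true, y_pred):
    """Share of actual non-reoffenders wrongly flagged as high risk."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives

# 1 = reoffended / flagged high risk, 0 = did not / flagged low risk
group_a_true = [0, 0, 0, 0, 1, 1]
group_a_pred = [1, 1, 0, 0, 1, 1]   # 2 of 4 non-reoffenders flagged
group_b_true = [0, 0, 0, 0, 1, 1]
group_b_pred = [1, 0, 0, 0, 1, 1]   # 1 of 4 non-reoffenders flagged

print(false_positive_rate(group_a_true, group_a_pred))  # 0.5
print(false_positive_rate(group_b_true, group_b_pred))  # 0.25
```

Both toy groups contain the same number of reoffenders, yet one group’s innocent members are flagged twice as often, which is exactly the kind of disparity that one fairness definition flags and another (overall calibration) can miss.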

It’s illustrative of how algorithms fed on historical data can perpetuate historical biases. HireVue’s hiring software records video of job applicants and analyzes their verbal and non-verbal responses to a series of questions, then scores each candidate against the company’s current highest-performing employees, as a substitute for a personality test. Critics of the system argue that this just ensures your future employees look and sound like those you’ve hired in the past.

Even when algorithms don’t appear to be making obvious decisions, they can wield an outsized influence on the world. Part of the Trump-Russia scandal involves the political ads bought on Facebook, whose micro-targeting was enabled by Facebook’s algorithms. And Facebook’s own experiments in 2012 demonstrated that it could nudge people to go to the polls by altering what they saw in the newsfeed. According to Facebook, this experiment pushed between 60,000 and 280,000 additional voters to the polls; that number could easily exceed the margin of victory in a close election.

Just as we worry that legislators will struggle to keep up with rapid developments in technology, and that tech companies will get away with inadequate oversight of the bad actors exploiting their new tools, journalism must also adapt to cover and explain “the algorithms beat.”

The Algorithms Beat

Nick Diakopoulos, Director of the Computational Journalism Lab at Northwestern University, is one of the researchers hoping to prevent a world where mysterious, black-box algorithms are empowered to make ever more important decisions, with no way of explaining them and no one held accountable when they go wrong.

In characterizing “the algorithms beat,” he identifies four main types of newsworthy stories.

The first type is where the algorithm is behaving unfairly, as in the Broward County case. The second category of algorithmic public-interest stories arises from errors or mistakes: algorithms can be poorly designed, work from incorrect datasets, or fail in specific cases. Then, because the algorithm is perceived as infallible, errors can persist, such as graphic or disturbing videos that slip through YouTube’s content filter.

The third type of story arises when the algorithm breaks social norms or even laws. Google has been sued for defamation by an Australian man after its predictive search algorithm suggested the phrase “is a former hitman” as an autocomplete option after his name. If an advertising company hired people to stand outside closing factories advertising payday loans and hard liquor, there might be a scandal, but an algorithm might view this behavior as optimal. In what might be considered a parallel case, Facebook allowed advertisers to target white supremacists.

Finally, the algorithms may not be entirely to blame: humans can use or abuse algorithms in ways that weren’t intended. Take the case detailed in Cathy O’Neil’s wonderful book, Weapons of Math Destruction. A Washington, DC teacher was fired for having a low “teacher assessment score,” calculated from whether her students’ standardized test scores improved under her. But this created a perverse incentive: some teachers cheated and inflated the scores their students received, and those who didn’t were the ones fired. The algorithm was being abused by the teachers; arguably, though, it should never have been used as the main factor in deciding who got bonuses and who got fired.
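The value-added logic behind such scores can be reduced to a toy formula: score a teacher by how far her students’ average growth exceeds an expected baseline. The sketch below uses invented numbers (the real DC model was far more elaborate) to show how inflated prior-year scores make the next, honest teacher look bad:

```python
# Toy value-added score: actual average growth minus expected growth.
# All numbers are invented; the actual teacher-assessment model was
# far more complex than this illustration.

def value_added(prev_scores, current_scores, expected_growth=5.0):
    actual_growth = (sum(current_scores) - sum(prev_scores)) / len(prev_scores)
    return actual_growth - expected_growth

# Honest prior year: students enter at their true level.
print(value_added([60, 65, 70], [66, 71, 75]))   # about +0.67

# Inflated prior year: last year's teacher cheated, raising entry scores.
# The same true student performance now looks like a negative contribution.
print(value_added([70, 75, 80], [66, 71, 75]))   # about -9.33
```

The second teacher taught exactly as well as the first, but because the incoming scores were inflated, the formula blames her for the apparent decline.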

Finding the Story

So how can journalists hope to find stories in this new era? One way is to obtain raw code for an audit. If the code is used by the government, such as in the 250+ algorithms tracked by the website Algorithm Tips, freedom of information requests may allow journalists to access the code.

If the bad behavior arises from a simple coding error, an expert may be able to reveal it, but issues with algorithms tend to be far more complicated. If even the people who coded the system can’t predict or interpret its behavior, it will be difficult for outsiders to infer a personality from a page of Python.

“Reverse-engineering” the algorithm—monitoring how it behaves, and occasionally prodding it with a well-chosen input—might be more successful.
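In code, this kind of audit treats the system as an opaque function: hold every input fixed, vary one attribute at a time, and record how the output moves. A minimal sketch, where `opaque_score` is a hypothetical stand-in for the live system a journalist would actually query:

```python
# Black-box probing: vary one input attribute, hold the rest fixed,
# and watch the output. The scoring function here is a made-up stand-in
# for a real system an auditor could only query from the outside.

def opaque_score(applicant):
    # Hidden logic the auditor cannot see; deliberately biased by zip code.
    base = applicant["income"] / 1000
    return base - (20 if applicant["zip"] == "33311" else 0)

def probe(base_applicant, field, values):
    """Return the score for each variant of a single field."""
    results = {}
    for v in values:
        variant = dict(base_applicant, **{field: v})
        results[v] = opaque_score(variant)
    return results

applicant = {"income": 50000, "zip": "33301"}
print(probe(applicant, "zip", ["33301", "33311"]))
# Identical applicants, different zip codes, different scores:
# {'33301': 50.0, '33311': 30.0}
```

A journalist never sees the body of `opaque_score`; the evidence is the systematic gap between otherwise-identical probes.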

AlgorithmWatch in Germany gathers data from volunteers to see how they are affected by advertising and newsfeed algorithms; WhoTargetsMe is a browser plugin that collects information about political advertising and tells users who’s trying to influence their vote. By crowdsourcing data from a wide range of people, an algorithm’s behavior in the field can be analyzed.

Investigative journalists, posing as various people, can attempt to use the algorithms to expose how they behave—along with their vulnerabilities. VICE News recently used this approach to demonstrate that anyone could pose as a US Senator for the purposes of Facebook’s “Paid for by…” feature, which was intended to make political ads transparent.

Who’s Responsible?

Big tech companies derive much of their market value from the algorithms they’ve designed and the data they’ve gathered—they are unlikely to share them with prying journalists or regulators.

Yet without access to the data and the teams of analysts these companies can deploy, it’s hard to get a handle on what’s happening and who’s responsible. Algorithms are not static: Google’s algorithms change 600 times a year. They are dynamic systems that respond to changing conditions in the environment, and therefore their behavior might not be consistent.

Finally, linking the story back to a responsible person can be tough, especially when the organizational structure is as opaque as the algorithms themselves.

As difficult as these stories may be to discover and relate accurately, journalists, politicians, and citizens must start adapting to a world where algorithms increasingly call the shots. There’s no turning back. Humans cannot possibly analyze the sheer volume of data that companies and governments will hope to leverage to their advantage.

As algorithms become ever more pervasive and influential—shaping whole nations and societies—holding them accountable will be just as important as holding politicians responsible. The institutions and tools to do this must be developed now—or we will all have to live with the consequences.

Image Credit: Christopher Edwin Nuzzaco / Shutterstock.com

Category: Transhumanism

Global Machine Learning as a Service (MLaaS) Market 2018-2023: Emerging Market Players …

Home AI - 10 hours 27 min ago

Machine Learning as a Service (MLaaS) market report is important for business professionals, manufacturers, dealers, consumers, and industry …

Link to Full Article: Read Here

Category: Transhumanism

What is Artificial Intelligence marketing?

Home AI - 11 hours 12 min ago

Artificial Intelligence (AI) has made the transition from once being a glorious manifestation of sci-fi imagination to today emerging as a technological …

Link to Full Article: Read Here

Category: Transhumanism

How RPA Driven By AI & ML Can Help Insurers

Home AI - 11 hours 24 min ago

Implementing more automation with artificial intelligence and machine learning will reduce processing times, reduce mistakes, provide better customer …

Link to Full Article: Read Here

Category: Transhumanism

Computers have learned to make us jump through hoops

Home AI - 11 hours 35 min ago

I realised that what I had been doing was adding to a dataset for training the machine learning software that guides self-driving cars – probably those …

Link to Full Article: Read Here

Category: Transhumanism

Volume 5, Issue 4, December 2018

Home AI - 13 hours 24 min ago

Original Paper. Assessing Survival Time of Women with Cervical Cancer Using Various Parametric Frailty Models: A Case Study at Tikur Anbessa …

Link to Full Article: Read Here

Category: Transhumanism

With AI, the Dream Merchants are now Attention Merchants: View

Home AI - 14 hours 12 min ago

Artificial Intelligence and Machine Learning are terms we are discussing widely now but these were coined way back in 1956. Ever since, research …

Link to Full Article: Read Here

Category: Transhumanism

Biomedical Data Science Curriculum Initiative

Home AI - 14 hours 45 min ago

The second workshop of the Biomedical Data Science Curriculum Initiative will be taking place on May 16th and 17th at the Department of Biomedical …

Link to Full Article: Read Here

Category: Transhumanism

Non-CS PhDs in Data Science: A Deep Dive

Home AI - 15 hours 30 min ago

Background: I have a PhD in Electrical Engineering, specializing in Semiconductor Device Physics. As part of my work, I do model development, …

Link to Full Article: Read Here

Category: Transhumanism

Machine Learning

Home AI - 16 hours 54 min ago

Machine learning is a foundation underlying most directions of applied machine intelligence including computer vision, NLP, web search, and others.

Link to Full Article: Read Here

Category: Transhumanism

Intel launches Neural Compute Stick 2 for ~RM414

Home AI - 18 hours 1 min ago

Designed for developers looking to take advantage of Artificial Intelligence or AI deep learning prototyping, it offers up to 8X better performance …

Link to Full Article: Read Here

Category: Transhumanism

What BlackBerry’s Acquisition of Cylance Means for the Cybersecurity Business

Home AI - 19 hours 27 min ago

… said Friday it is plunking down $1.4 billion in cash to buy Cylance, a cybersecurity firm that specializes in machine learning-enabled threat detection.

Link to Full Article: Read Here

Category: Transhumanism

Cyber Saturday—BlackBerry’s Biggest Buy, Facebook’s Crisis Mismanagement, Assange’s Secret …

Home AI - 17 November 2018 - 23:40

… buy Cylance, a cybersecurity firm that specializes in machine learning-enabled threat detection. Pending regulatory approval, expected by February, …

Link to Full Article: Read Here

Category: Transhumanism

Top 5 Data Center Stories of the Week: November 17, 2018

Home AI - 17 November 2018 - 23:07

Cybersecurity Tools for Data Center Networks are Getting Smarter With Machine Learning – New tools put more power in the hands of security …

Link to Full Article: Read Here

Category: Transhumanism

‘Neuroscience allows tailored treatment’

Home AI - 17 November 2018 - 21:48

Delivering the keynote address at Tech 2018 here on Saturday, Dr Pugh said, “The real promise of neuroscience is the ability to tailor the treatment to …

Link to Full Article: Read Here

Category: Transhumanism

Artificial intelligence is ‘shockingly’ racist and sexist

Home AI - 17 November 2018 - 21:26

An MIT study has revealed the way artificial intelligence systems collect data often makes them racist and sexist. Researchers looked at a range of …

Link to Full Article: Read Here

Category: Transhumanism

Are you a data scientist? Nashville needs you. | Opinion

Home AI - 17 November 2018 - 21:00

Not long ago, data scientists were typically lower-level analysts working mostly in the background. Now, data science has become a necessity for …

Link to Full Article: Read Here

Category: Transhumanism
