Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity University

This Week’s Awesome Tech Stories From Around the Web (Through August 15)

2 hours 48 min ago

A College Kid’s Fake AI-Generated Blog Fooled Tens of Thousands. This Is How He Created It.
Karen Hao | MIT Technology Review
“At the start of the week, Liam Porr had only heard of GPT-3. By the end, the college student had used the AI model to produce an entirely fake blog under a fake name. It was meant as a fun experiment. But then one of his posts found its way to the number-one spot on Hacker News. Few people noticed that his blog was completely AI-generated. Some even hit ‘Subscribe.'”


The First Gene-Edited Squid in History Is a Biological Breakthrough
Emily Mullin | One Zero
“Scientists have long marveled at these sophisticated behaviors and have tried to understand why these tentacled creatures are so intelligent. Gene editing may be able to help researchers unravel the mysteries of the cephalopod brain. But until now, it’s been too hard to do—in part because cephalopod embryos are protected by a hard outer layer that makes manipulating them difficult.”


New Audio Deepfake AI Narrates Reddit Thread as David Attenborough
Luke Dormehl | Digital Trends
“Sir David Attenborough, wildlife documentary broadcaster and natural historian, is an international treasure who must be protected at all costs. Now 94, Attenborough is still finding new dark recesses to explore on Planet Earth—including the r/Relationships and r/AskReddit boards on Reddit. Well, kind of.”


Robotic Chameleon Tongue Snatches Nearby Objects in the Blink of an Eye
Michelle Hampson and Evan Ackerman | IEEE Spectrum
“Chameleons may be slow-moving lizards, but their tongues can accelerate at astounding speeds, snatching insects before they have any chance of fleeing. Inspired by this remarkable skill, researchers in South Korea have developed a robotic tongue that springs forth quickly to snatch up nearby items.”


Android Is Becoming a Worldwide Earthquake Detection Network
Dieter Bohn | The Verge
“It’s a feature made possible through Google’s strengths: the staggering numbers of Android phones around the world and clever use of algorithms on big data. As with its collaboration with Apple on exposure tracing and other Android features like car crash detection and emergency location services, it shows that there are untapped ways that smartphones could be used for something more important than doomscrolling.”


‘Terror Crocodile’ the Size of a Bus Fed on Dinosaurs, Study Says
Johnny Diaz | The New York Times
“‘Deinosuchus was a giant that must have terrorized dinosaurs that came to the water’s edge to drink,’ [Adam Cossette, a vertebrate paleobiologist who led the study,] said in a statement. ‘Until now, the complete animal was unknown. These new specimens we’ve examined reveal a bizarre, monstrous predator.'”


This Pacific Island Nation Plans to Raise Itself Above the Ocean to Survive Sea Level Rise
Adele Peters | Fast Company
“The previous president of Kiribati, a low-lying island nation in the Pacific, predicted that the country’s citizens would eventually become climate refugees, forced to relocate as sea level rise puts the islands underwater. But a new president elected in June now plans to elevate key areas of land above the rising seas instead.”


Digitizing Burning Man
Lucas Matney | Tech Crunch
“Going virtual is an unprecedented move for an event whose mere existence already seems to defy precedent. …’I’ve fallen in love with this idea that at some point in the future, some PhD student in 300 years time is going to write a thesis on the first online Burning Man, because it does feel like an extraordinary moment of avant garde imagineering for what the future of human online interaction looks like,’ Cooke tells TechCrunch.”

Image credit: Francesco Ungaro / Unsplash

Category: Transhumanism

New Algorithm Paves the Way Towards Error-Free Quantum Computing

August 14, 2020 - 16:00

No one likes noise when they’re working through a difficult problem. Quantum computers are no different, and now researchers have devised a new way to estimate how noise can throw their calculations off, a big step towards making the technology practical.

The quantum states at the heart of today’s quantum computers are fragile things. They are highly susceptible to disturbances, from stray magnetic fields to minuscule imperfections in the control electronics or the materials used to build the device.

These sources of noise can easily cause errors to creep into calculations, so finding ways to properly characterize and mitigate them will be essential for building quantum computers that can solve real-world problems.

While in the future we may be able to build quantum computers that are less prone to disturbances, in the meantime we will need to find ways to mitigate the errors they cause. A major barrier to building larger quantum computers is that doing so will likely require multiple qubits dedicated to error correction for every qubit used in calculations.
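The article doesn’t give overhead numbers, but a rough sketch makes the cost concrete. Assuming a surface-code-style scheme (a common proposal, not something the article specifies) in which one logical qubit at code distance d needs roughly 2d^2 - 1 physical qubits:

```python
# Hypothetical back-of-the-envelope sketch, NOT from the article: assume a
# surface-code-style error-correction scheme in which one logical qubit at
# code distance d costs roughly 2*d**2 - 1 physical qubits.
def physical_qubits(logical_qubits: int, distance: int) -> int:
    """Estimate total physical qubits needed to protect the given number
    of logical qubits at the given code distance."""
    return logical_qubits * (2 * distance**2 - 1)

# Even modest code distances imply thousands of physical qubits
# for a 100-logical-qubit machine.
for d in (3, 5, 11):
    print(f"distance {d}: {physical_qubits(100, d):,} physical qubits")
```

Under these assumed numbers, protecting 100 logical qubits at even a small code distance already requires thousands of physical qubits, which is why characterizing and mitigating noise matters so much.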

But that kind of error correction first requires you to understand the noise that is causing the errors, which is no trivial problem. Current approaches to characterizing this noise either provide a single figure of merit, which is too simplistic to guide sophisticated error correction, or only work for devices smaller than the tens of qubits seen in today’s state-of-the-art quantum computers.

Now though, researchers at the University of Sydney have demonstrated a new technique that provides a detailed and accurate picture of noise across a network of qubits and is theoretically able to scale to as many qubits as required. They describe their algorithm in a paper in Nature Physics.

“Our experiments give the first demonstration of a protocol that is practical, relevant, and immediately applicable to characterizing error rates and correlated errors in present-day devices with a large number of qubits,” the authors write. “This protocol opens myriad opportunities for novel diagnostic tools and practical applications.”

When dealing with systems whose individual components can all interact with each other—as is the case with the qubits in a quantum computer—the number of possible interactions can increase exponentially as the number of components does.

To avoid this problem, the researchers came up with several shortcuts and simplifications that help focus on the most important interactions, making the calculations tractable while still providing a precise enough result to be practically useful.
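To see why such shortcuts are needed, compare the exponential number of possible joint error patterns on n qubits with the merely quadratic number of qubit pairs. The figures below are toy arithmetic, not the paper’s actual algorithm:

```python
from math import comb

# Toy arithmetic only (not the paper's method): contrast the exponential
# count of joint error patterns on n qubits with the quadratic count of
# qubit pairs a pairwise-correlation picture needs to cover.
def joint_error_patterns(n_qubits: int) -> int:
    # Each qubit independently has an error or not: 2**n configurations.
    return 2 ** n_qubits

def qubit_pairs(n_qubits: int) -> int:
    # Distinct pairs of qubits: n choose 2.
    return comb(n_qubits, 2)

for n in (14, 100):  # the IBM device tested, and the simulated scale
    print(f"{n} qubits: {joint_error_patterns(n):.3e} patterns, "
          f"{qubit_pairs(n)} pairs")
```

At 14 qubits the full pattern count is already in the tens of thousands; at 100 qubits it is astronomically large, while the number of pairs stays below 5,000.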

To test their approach, they put it to work on a 14-qubit IBM quantum computer accessed via the company’s IBM Quantum Experience service. They were able to visualize correlations between all pairs of qubits and even uncovered long-range interactions between qubits that had not been previously detected and will be crucial for creating error-corrected devices.

They also used simulations to show that they could apply the algorithm to a quantum computer as large as 100 qubits without calculations getting intractable. As well as helping to devise error-correction protocols to cancel out the effects of noise, the researchers say their approach could also be used as a diagnostic tool to uncover the microscopic origins of noise.

There’s still a long way to go until quantum computers get large enough to solve practically useful problems. But at least now we know that when they arrive, we’ll be able to protect their delicate qubits from the quantum ruckus going on around them.

Image Credit: Wikimedia Commons/National Institute of Standards and Technology


These Sleek Houses Are 3D Printed, and They Fit in Your Backyard

August 13, 2020 - 16:00

If you’d told me ten years ago that I could go live in a house built by a giant concrete-spitting 3D printer, I not only would’ve thought you were crazy, I wouldn’t have known what you were talking about.

Five years ago I still would have been very, very skeptical.

But today, 3D printed homes seem to be coming out of the woodwork—so much so that we may soon need to adjust that saying to “coming out of the printer.” (I’ll show myself out…).

This week another company joined the 3D printed buildings melee. Oakland, California-based Mighty Buildings came out of stealth mode, to the tune of $30 million in venture capital funding. The construction technology company started out at Y Combinator, graduating in 2018, and is on track to become a major player in the industry.

In its warehouse in Oakland, the company recently printed a 350-square-foot studio demo unit in under 24 hours (this kind of time frame has become pretty standard for 3D printed homes), and it has already delivered two units to homeowners in northern and southern California.

Mighty Buildings’ homes are different from those of its 3D-printed-house peers in two ways.

First of all, they’re not made of concrete. Most existing 3D printed buildings are made of an enriched and reinforced concrete mixture, but Mighty Buildings developed its own synthetic stone to print with. Similar to Corian, a durable and heat-resistant material used in countertops, the stone dries and hardens as soon as it sees the light of day (in this case, as soon as it leaves the printer).

Secondly, while right now just the walls and floor of the houses are 3D printed, the company has plans in place to print the ceilings and roofs, too. Putting a traditional roof on is an extra step and adds cost; printing the roofs is part of the company’s push to automate as much of the construction process as possible.

“The 3D printing, robotic post-processing, and the ability to automate steps like the pouring of insulation means that Mighty Buildings will be able to automate up to 80 percent of the construction process,” Sam Ruben, chief sustainability officer and co-founder of the company, told Digital Trends.

Depending on who you ask, though, these labor savings don’t quite carry over to cost savings for the final consumer. At $115,000 for a studio and $285,000 for a three-bedroom, two-bath house, these homes are an absolute steal if you’re in the San Francisco Bay Area (though you still need to come up with a piece of land on which to plop your 3D printed haven). In less-expensive cities, these prices aren’t such a bargain, and in many places they’d be considered downright outrageous.

But Mighty Buildings plans to go for the low-hanging fruit first. With people stepping over each other to pay $2,000 a month for a studio in the Bay Area, homeowners with some extra space in their yards could pay off a Mighty Studio in just a few years—and it would undoubtedly be an investment that would keep on giving.
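Using the article’s own figures, the break-even arithmetic is straightforward (ignoring land costs, financing, taxes, and vacancies):

```python
# Break-even sketch using the article's figures; ignores land costs,
# financing, taxes, and rental vacancies.
studio_price = 115_000  # USD, Mighty Buildings studio (from the article)
monthly_rent = 2_000    # USD/month, Bay Area studio rent (from the article)

months = studio_price / monthly_rent
print(f"{months:.0f} months, or about {months / 12:.1f} years to break even")
```

That works out to just under five years of rent, which is consistent with the article’s “just a few years” claim.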

The company plans to start out by focusing on this market, and it will help that this type of housing unit is already legal in California. Ultimately, though, Mighty Buildings hopes to sell directly to developers who would buy multiple homes and build communities with them.

So whether it’s in a neighborhood full of other synthetic stone homes, floating on a pontoon in a river, or in someone’s backyard, it seems the future will see no shortage of opportunities to inhabit your very own 3D printed house.

Image Credit: Mighty Buildings


An Indian Company Is Gearing Up to Make Millions of Doses of a $3 Covid-19 Vaccine

August 12, 2020 - 16:00

As the Covid-19 pandemic drags on, there’s one thing we’re all counting on to rescue us from the drudgery of socially-distanced life: a vaccine.

How many times have you heard “X won’t happen again until there’s a vaccine”? Concerts, conferences, festivals, sporting events, weddings, and anything else that entails a lot of people being in one place has been put on hold indefinitely—and we miss it. All of it.

But as much as we’re counting on a vaccine to put an end to this nightmare, the reality is that even once some fortunate scientist, company, or lab does find a vaccine, the story doesn’t end there; the next steps are manufacturing the vaccine at scale, ensuring equitable distribution both between and within countries, and making sure everyone who needs vaccination—billions of people around the world—can access and afford it. We’ve never been faced with a challenge like this, and the way it plays out will speak to our collective compassion and humanity.

An Indian company is getting a jump-start on manufacturing low-cost vaccines. With funding from the Bill & Melinda Gates Foundation, the Serum Institute of India plans to crank out 100 million doses of Oxford University’s coronavirus vaccine for poor countries at a cost of $3 or less per dose. In a separate deal with multinational pharma giant AstraZeneca, which licensed Oxford’s vaccine in late April, the Serum Institute also agreed to produce a billion doses for low- and middle-income countries.

The Serum Institute

The Serum Institute of India isn’t widely known, but as Bill Gates points out in this video from 2012, the company plays a crucial role in global health. As the world’s biggest manufacturer of vaccines by volume (not by revenue—that title goes to Britain’s GlaxoSmithKline), Serum makes vaccines for dozens of diseases, including measles, mumps, diphtheria, tetanus, and hepatitis B, among others. According to the company’s website, 65 percent of children in the world receive at least one of its vaccines, and they’re used in over 170 different countries.

Serum was founded in 1966 and is privately owned, which gives it the freedom to make quick, risky decisions that publicly-traded pharma companies can’t; Bloomberg says the company “may be the world’s best hope for producing enough vaccine to end the pandemic.”

The Oxford Vaccine

As detailed in a paper published in The Lancet on July 20, a vaccine developed by researchers at Oxford University showed highly encouraging results in phase 1 and 2 clinical trials. Of 1,077 people who took part in the trials, 90 percent developed antibodies that neutralized Covid-19 after just one vaccine dose.

Its unwieldy name, “ChAdOx1 nCoV-19,” is a mashup of its various attributes: it’s a chimpanzee (Ch) adenovirus-vectored vaccine (Ad) developed at Oxford (Ox). Unlike American company Moderna’s vaccine, which prompts an immune response using Covid-19 messenger RNA, the Oxford vaccine is made from a virus genetically engineered to resemble coronavirus. Scientists used a virus that causes the common cold in chimpanzees, and added the spike protein that Covid-19 uses to break into human cells. The resulting virus doesn’t actually cause people to get infected, but it prompts the immune system to launch a defense against it and block it from continuing to invade cells.

The vaccine’s only side effects were headaches and a mild fever. More extensive trials are now being launched in the US (the biggest, with 30,000 people), the UK, South Africa, and Brazil. The vaccine may be used in controversial human challenge trials as well, in which vaccinated people are deliberately infected with the virus to see whether the vaccine can effectively neutralize it.

Risky Business, Onward

The Serum Institute is taking a pretty big risk by forging ahead with these plans, even outside of the fact that the Oxford vaccine hasn’t yet passed Phase 3 clinical trials. If the vaccine falls through for any reason, Serum stands to lose up to $200 million.

Even once a vaccine (this one or any other) is determined safe, cranked out at lightning speed, and distributed, there’s no guarantee it will eradicate Covid-19. The virus could mutate and develop a new strain. The ultra-accelerated timeline under which vaccines are being developed could leave us with one that’s not truly safe and time-tested. Production constraints and supply hoarding could complicate manufacturing. And according to one study, 50 percent of Americans and more than a quarter of people in France say they don’t even want to be vaccinated.

As Carolyn Johnson wrote in the Washington Post, “The declaration that a vaccine has been shown safe and effective will be a beginning, not the end. Deploying the vaccine to people in the United States and around the world will test and strain distribution networks, the supply chain, public trust and global cooperation. It will take months or, more likely, years to reach enough people to make the world safe.”

Despite these caveats, though, a vaccine is still a finish line we must race towards, and the only logical next step short of letting the virus rage in an attempt to achieve herd immunity. So, fraught as it may be when (or if) it arrives, we’ll keep waiting, hoping, and looking forward to all the things we’re going to do again once there’s a vaccine.

Image Credit: Bao_5 from Pixabay


Scientists Gene-Hack Cotton Plants to Make Them Every Color of the Rainbow

August 11, 2020 - 16:00

Imagine this: You’re on a drive through cotton country. The sun’s out, top’s down. It’s a beautiful, totally normal day. Only, what was once a sea of white puff balls has transformed into a multi-hued swirl. Lines of deep purple, bright yellow, midnight blue. All the colors in the rainbow—and your t-shirt drawer, as it so happens.

Today, you’d do well to check your water. But in the future, colorful cotton could be the norm—Australian scientists are having early success genetically modifying the crop to make it multicolored. And although color is their latest project, they’re also working to make synthetic-like, stretchy cotton.

The team hopes their new cotton plants might eventually be grown widely and made into clothes, helping to displace the toxic dyes and synthetic materials used by the fashion industry.

And that’s a worthy cause.

The numbers are hard to pin down exactly, but there’s little doubt that how we make clothes could be more environmentally friendly. In textile manufacturing, which often takes place in developing countries, those harmful dyes can cause health problems for workers and do more damage as toxic runoff. Also, the life cycle of clothing isn’t as long as it once was. While some items will get a second life by way of a thrift store, it all eventually makes its way to a landfill, where the synthetic materials in many clothes can take centuries to break down.

The holy grail then? Non-toxic, compostable clothing. And while it’s still early, the tools of synthetic biology and genetic engineering may well prove a big part of the solution.

Not as Novel as You Might Think

To be totally clear, cotton doesn’t only grow in one color. (This was news to me, so maybe it is to you too.) Some varieties, dating back millennia, are naturally dark chocolate, light brown, and even mauve. These were traditionally used in handwoven textiles, but with the Industrial Revolution, naturally pigmented cotton gave way to white cotton: white cotton had longer, higher-quality fibers, could be dyed any color on the cheap, and didn’t require specialized equipment or methods to harvest.

In the 80s and 90s, as people became more environmentally conscious, there was a revival of naturally pigmented cotton. You could make clothes from it without dye, and suppliers were often small, organic farmers. There was even work to make it amenable to industrial looms. Sally Fox, for example, developed varieties with longer fibers in an array of colors.

Still, naturally colored cotton is generally more expensive to produce, the color range is limited, and the fiber quality is lower than white cotton.

Enter genetic engineering.

As far back as 1993, people were talking about adding color genes to cotton. Two biotech companies, Agracetus and Calgene, had plans to splice in genes from the indigo plant to make cotton for blue jeans. “Of course it will work,” Ken Barton, vice-president of research and development at Agracetus, said at the time. “Give a scientist enough time and money and he can do anything.” Of course, we’re still dyeing jeans 27 years later—but maybe the time has come.

As with all things in the realm of biology, the devil’s in the details, but our tools for manipulating nature have advanced in the last few decades too.

From Cotton Tissue to Cotton Plant

An array of tiny, brilliantly colored buds of cotton tissue sits in a few dozen petri dishes in a Canberra greenhouse (check out images here). In one dish, the cotton is raspberry red; in another, it’s yellow like a mango. The tissue, which carries genes for color spliced in by scientists at Australia’s scientific research agency, CSIRO, is only the first step, but it’s a promising one. In the next few months, the team, led by senior research scientist Colleen MacMillan, will coax the tissue into full-grown cotton plants.

If all goes to plan, the cotton fiber will be just as colorful as the petri dish tissue. As evidence the genes will likely take, the team points to splotches of color on the leaves of tobacco plants carrying the same cotton color genes. If the leaves of the cotton plants are similarly colored, the cotton fiber will be too.

“We’ve seen some really beautiful bright yellows, sort of golden-orangey colors, through to some really deep purple,” Filomena Pettolino, a scientist on MacMillan’s team, told Australia’s ABC News. The team is also working on black cotton, which would be a significant achievement—black dyes are notoriously the nastiest, most toxic of the lot. And the less dye the better.

Though they’re favored for speed and quality, synthetic dyes can include formaldehyde and heavy metals, which stain the skin and can cause cancer. That early-90s dream to make jeans with blue cotton? It’s just as relevant today. In the Chinese town of Xintang, where 300 million pairs of jeans are dyed each year, the toxic runoff flows into rivers by the gallon.

In parallel to their work in multicolored cotton, the team has a longer-term project to make synthetic-like cotton. Synthetics like polyester and nylon make their way into the environment from washing machines—which pull off and flush microfibers from the fabric—and of course, they also line landfills. The team is screening thousands of plants, hunting for proteins with just the right properties: stretchy, wrinkle-free, and maybe even waterproof.

“We’re looking into the structure of cotton cell walls and harnessing the latest tools in synthetic biology to develop the next generation cotton fiber,” CSIRO scientist Dr Madeline Mitchell said. “We’ve got a whole bunch of different cotton plants growing; some with really long thin fibers, others like the one we call ‘Shaun the Sheep’, with short, woolly fibers.”

It remains to be seen whether this next-gen cotton can keep up with fashion’s insatiable demand for new hues (though black is never out of style), whether it can yield as much as a standard cotton plant, and what it will cost farmers.

First, though, this team (or another) will need to prove they can grow the stuff and produce seeds at scale. But if it works, you or someone you know may one day rock a pair of fully compostable, bright purple yoga pants of gene-hacked cotton.

Image credit: Crystal de Passillé-Chabot / Unsplash


The Secret to a Long, Healthy Life Is in the Genes of the Oldest Humans Alive

August 10, 2020 - 16:00

The first time I heard nematode worms can teach us something about human longevity, I balked at the idea. How the hell can a worm with an average lifespan of only 15 days have much in common with a human who lives decades?

The answer is in their genes—especially those that encode basic life functions, such as metabolism. Thanks to the lowly C. elegans worm, we’ve uncovered genes and molecular pathways, such as insulin-like growth factor 1 (IGF-1) signaling, that extend healthy longevity in yeast, flies, and mice (and maybe us). Too nerdy? Those pathways also inspired massive scientific and popular interest in metformin, hormones, intermittent fasting, and even the ketogenic diet. To restate: worms have inspired the search for our own fountain of youth.

Still, that’s just one success story. How relevant, exactly, are those genes for humans? We’re rather a freak of nature. Our aging process extends for years, during which we experience a slew of age-related disorders. Diabetes. Heart disease. Dementia. Surprisingly, many of these don’t ever occur in worms and other animals. Something is obviously amiss.

In this month’s Nature Metabolism, a global team of scientists argued that it’s high time we turn from worm to human. The key to human longevity, they say, lies in the genes of centenarians. These individuals not only live over 100 years, they also rarely suffer from common age-related diseases. That is, they’re healthy up to their last minute. If evolution were a scientist, then centenarians and the rest of us would be its two experimental groups in action.

Nature has already given us a genetic blueprint for healthy longevity. We just need to decode it.

“Long-lived individuals, through their very existence, have established the physiological feasibility of living beyond the ninth decade in relatively good health and ending life without a period of protracted illness,” the authors wrote. From this rare but valuable population, we can gain “insight into the physiology of healthy aging and the development of new therapies to extend the human healthspan.”

A Genetic Legacy

While it may seem obvious now, whether genes played a role in longevity was disputed for over a century. After all, rather than genes, wouldn’t access to health care, socioeconomic status, diet, smoking, drinking, exercise, or many other environmental and lifestyle factors play a much larger role? Similar to height or intelligence (however the latter is assessed), the genetics of longevity is an enormously complicated and sensitive issue to study without bias.

Yet after only a few genetic studies of longevity, a trend quickly emerged.

“The natural lifespan in humans, even under optimal conditions in modern societies, varies considerably,” the authors said. One study, for example, found that centenarians lived much longer than people born around the same time in the same environment. The offspring of centenarians also have lower chances of age-related diseases and exhibit a more “youthful” profile of metabolism and age-related inflammation than others of the same age and gender.

Together, about 25 to 35 percent of the variability in how long people live is determined by their genes—regardless of environment. In other words, rather than looking at nematode worm genes, we have a discrete population of humans who’ve already won the genetic lottery when it comes to aging. We just need to parse what “winning” means in terms of biology. Genes in hand, we could perhaps tap those biological phonelines and cut the wires leading to aging.

“Identification of the genetic factors that underlie extreme human lifespan should provide insights into the mechanisms of human longevity and disease resistance,” the authors said.

A Radical Redesign

Once scientists discovered that genes play a large role in aging, the next question was “which ones are they?”

They turned to genome-wide association studies, or GWAS. This big data approach scans existing genomic databases for variations in DNA coding that could lead to differences in some outcome—for example, long versus short life. The differences don’t even have to be in so-called “coding” genes (that is, genes that make proteins). They can be anywhere in the genome.

It’s a powerful approach, but not that specific. Think of GWAS as rudimentary “debugging” software for biological code: it flags DNA letter variants that differ between groups, but doesn’t care which specific DNA letter swap most likely impacts the final biological program (aging, in this case).

That’s a huge problem. For one, GWAS often finds dozens of single DNA letter changes, none powerful enough to change the trajectory of aging by itself. The technique highlights a village of DNA variants that together may have an effect on aging by controlling the cell’s course over a lifetime, without indicating which are most important. It’s also hard to say that a DNA letter change causally leads to (or protects against) aging. Finally, GWAS studies are generally performed on populations of European ancestry, which leaves out a huge chunk of humans—for example, the Japanese, who tend to produce an outsized percentage of centenarians.
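As a toy illustration of what a GWAS-style test computes per variant (entirely made-up numbers, not data from any study), here is a hand-rolled chi-square association test on one hypothetical DNA-letter variant:

```python
# Toy illustration with made-up numbers (not real data): a GWAS boils down
# to an association test per DNA variant. Suppose one variant is carried
# by 30 of 100 long-lived "cases" but only 10 of 100 controls; we test the
# association with a 2x2 chi-square statistic computed by hand.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

carriers_cases, noncarriers_cases = 30, 70  # centenarian group (made up)
carriers_ctrl, noncarriers_ctrl = 10, 90    # control group (made up)

stat = chi_square_2x2(carriers_cases, noncarriers_cases,
                      carriers_ctrl, noncarriers_ctrl)
# For a single test with 1 degree of freedom, values above ~3.84
# correspond to p < 0.05; real GWAS use far stricter thresholds
# because millions of variants are tested at once.
print(f"chi-square = {stat:.1f}")
```

A real GWAS repeats a test like this across millions of variants, which is exactly why it surfaces a “village” of weak signals rather than one decisive gene.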

So what needs to change?

Rather than focusing on the general population, the key is to home in on centenarians of different cultures, socioeconomic status, and upbringing. If GWAS are like fishing for a rare species in several large oceans, then the authors’ point is to focus on ponds—distributed across the world—which are small, but packed with those rare species.

“Extremely long-lived individuals, such as centenarians, compose only a tiny proportion (~0.01 percent to 0.02 percent) of the United States population, but their genes contain a biological blueprint for healthy aging and longevity,” the authors said. They’re spared from usual age-related diseases, and “this extreme and extremely rare phenotype is ideal for the study of genetic variants that regulate healthspan and lifespan.”
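For scale, that 0.01 to 0.02 percent translates into tens of thousands of people (the population figure below is an assumed round number, not from the article):

```python
# Sanity check of the quoted proportion, assuming a US population of about
# 330 million (a round figure supplied here; the article doesn't give one).
us_population = 330_000_000

low = us_population * 0.0001   # 0.01 percent
high = us_population * 0.0002  # 0.02 percent
print(f"roughly {low:,.0f} to {high:,.0f} centenarians")
```

Tens of thousands of people is a small pond by GWAS standards, which is precisely the authors’ point about focusing on rare, information-rich populations.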

It’s an idea that would usually make geneticists flinch. It’s generally thought that the larger the study population, the better the result. Here, the recommendation is to narrow our focus.

And that’s the point, the authors argue.

Whatever comes out of these studies will likely have a much larger impact on aging than a GWAS fishing experiment. Smaller (genomic) pond; larger (pro-youth) fish. What’s more, a pro-youth gene identified in one European-based long-living population can be verified in another group of centenarians—say, Japanese—ensuring that the gene candidates reflect something fundamental about human aging, regardless of race, culture, upbringing, and wealth.

The Road to Healthy Aging

A genomic screen of centenarians can easily be done these days on the cheap. But that’s only the first step.

The next step is to validate promising anti-aging genetic differences, similar to how scientists validated such differences in nematode worms during classic longevity studies. For example, a promising pro-youth gene variant can be genetically edited into mice using CRISPR or some other tool. Scientists can then examine how the mice grow up and grow old, compared to their non-edited peers. Does the gene make these mice more resilient to dementia? What about muscle wasting? Or heart troubles? Or hair greying and obesity?

From these observations, scientists can then use an enormous selection of molecular tools to further dissect the molecular pathways underlying these pro-youth genetic changes.

The final step? Guided by centenarian genes and validated by animal models of aging, we can design powerful drugs that sever the connection between the genes and proteins that drive aging and its associated diseases. Metformin, a diabetes drug now being tested as an anti-aging pill, was inspired in part by aging studies in nematode worms—imagine what studies in human centenarians might yield.

“Despite enormous improvements in human health over the past century, we remain far from a situation in which living to 100 years of age in fairly good health is the norm,” the authors said.

But as centenarians obviously prove, this is possible. By digging into their genes, scientists may find a path towards healthy longevity—not just for the genetically fortunate, but for all of us.

Image credit: Cristian Newman / Unsplash


Science Fiction Explores the Interconnectedness Revealed by the Coronavirus Pandemic

August 9, 2020 - 16:00

In the early days of the coronavirus outbreak, a theory widely shared on social media suggested that Dean Koontz’s 1981 science fiction novel, The Eyes of Darkness, had predicted the coronavirus pandemic with uncanny precision. Covid-19 has held the entire world hostage, producing a resemblance to the post-apocalyptic world depicted in many science fiction texts.

Canadian author Margaret Atwood’s classic 2003 novel Oryx and Crake refers to a time when “there was a lot of dismay out there, and not enough ambulances”—a prediction of our current predicament.

However, the connection between science fiction and pandemics runs deeper. They are linked by a perception of globality, what sociologist Roland Robertson defines as “the consciousness of the world as a whole.”

Globality in Science Fiction

In his 1992 survey of the history of telecommunications, How the World Was One, Arthur C. Clarke alludes to the famed historian Arnold Toynbee’s lecture entitled “The Unification of the World.” In the lecture, delivered at the University of London in 1947, Toynbee envisions a “single planetary society” and notes how “despite all the linguistic, religious and cultural barriers that still sunder nations and divide them into yet smaller tribes, the unification of the world has passed the point of no return.”

Science fiction writers have, indeed, always embraced globality. In interplanetary texts, humans of all nations, races, and genders have to come together as one people in the face of alien invasions. Facing an interplanetary encounter, bellicose nations have to reluctantly eschew political rivalries and collaborate on a global scale, as in Denis Villeneuve’s 2016 film, Arrival.

Globality is central to science fiction. To be identified as an Earthling, one has to transcend the local and the national, and sometimes, even the global, by embracing a larger planetary consciousness.

In The Left Hand of Darkness, Ursula K. Le Guin conceptualizes the Ekumen, which comprises 83 habitable planets. The idea of the Ekumen was borrowed from Le Guin’s father, the noted cultural anthropologist Alfred L. Kroeber. Kroeber had, in a 1945 paper, introduced the concept (from Greek oikoumene) to represent a “historic culture aggregate.” Originally, Kroeber used oikoumene to refer to the “entire inhabited world,” as he traced back human culture to one single people. Le Guin then adopted this idea of a common origin of shared humanity in her novel.

Globality of the Pandemic

Many medical science fiction texts depict diseases afflicting all of humanity, which must put up a unified front or perish. These narratives underscore the fluid and transnational histories of diseases, their impact, and their possible cures. In his 1995 novel The Calcutta Chromosome, Amitav Ghosh weaves an interconnected history of malaria that spans continents over a century, while challenging Eurocentrism and foregrounding the subversive role of Indigenous knowledge in malaria research.

The epigraph quotes a poem by Sir Ronald Ross, the Nobel Prize-winning scientist credited with the discovery of the mosquito as the malaria vector:

“Seeking His secret deeds
With tears and toiling breath,
I find thy cunning seeds,
O million-murdering Death.”

Pandemics are by definition global. On March 11, 2020, the World Health Organization declared COVID-19 a pandemic, noting that “[p]andemic is not a word to use lightly or carelessly. It is a word that, if misused, can cause unreasonable fear, or unjustified acceptance that the fight is over, leading to unnecessary suffering and death.”

COVID-19 has forced billions into social isolation and continues to wreak havoc on an unprecedented global scale. Eerily similar photographs of masked faces, PPE-clad front-line workers and deserted downtowns emerged from every corner of the world.

However, a pandemic is not global merely in its spread—one needs to harness its globality to counter and eventually defeat it. As Israeli historian Yuval Harari notes, in the choice between national isolationism and global solidarity, we must choose the latter and adopt a “spirit of global co-operation and trust”:

“What an Italian doctor discovers in Milan in the early morning might well save lives in Tehran by evening. When the UK government hesitates between several policies, it can get advice from the Koreans who have already faced a similar dilemma a month ago.”

Regarding Canada’s response to the crisis, researchers have noted both the immorality and futility of a nationalistic “Canada First” approach.

Clearly, a nation cannot insulate itself from the deleterious effects of the pandemic by closing hearts and borders. Tightening immigration can temporarily stanch the flow of people, but the virus, like the “million-murdering death,” is treacherous in its border-defying agility. Presently, as many nations experience a resurgence of nationalism and exclusionary policies of walls and borders, the pandemic is a harsh reminder of the lived reality of our transnational interconnectedness.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: NASA

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through August 8)

August 8, 2020 - 16:00

For Robots, It’s a Time to Shine (and Maybe Disinfect)
Lisa Prevost | The New York Times
“…cleaning robots are having a moment in commercial real estate. Their creators are promoting the machines as cost-effective solutions to the cleaning challenges posed by the pandemic. They can be put to frequent use without requiring more paid labor hours, they are always compliant, and some can even provide the data to prove that they have scoured every inch assigned.”


The Quest to Liberate $300,000 of Bitcoin From an Old Zip File
Lily Hay Newman | Wired
“In October, Michael Stay got a weird message on LinkedIn. A total stranger had lost access to his bitcoin private keys—and wanted Stay’s help getting his $300,000 back. The Guy [as Stay calls him] had bought around $10,000 worth of bitcoin in January 2016, well before the boom. He had encrypted the private keys in a zip file and had forgotten the password. He was hoping Stay could help him break in.”


Scientists Rename Genes to Stop Microsoft Excel From Reading Them as Dates
James Vincent | The Verge
“…Bruford says this is the first time that the guidelines have been rewritten specifically to counter the problems caused by software. So far, the reactions seem to be extremely positive—some would even say joyous. After geneticist Janna Hutz shared the relevant section of HGNC’s new guidelines on Twitter, the response from the community was jubilant.”


How Falling Solar Costs Have Renewed Clean Hydrogen Hopes
James Temple | MIT Technology Review
“The grand vision of the hydrogen economy has been held back by the high costs of creating a clean version, the massive investments into vehicles, machines and pipes that could be required to put it to use, and progress in competing energy storage alternatives like batteries. So what’s driving the renewed interest?”


Immunology Is Where Intuition Goes to Die
Ed Yong | The Atlantic
“[Immunity] lies at the heart of many of the COVID-19 pandemic’s biggest questions. Why do some people become extremely ill and others don’t? Can infected people ever be sickened by the same virus again? How will the pandemic play out over the next months and years? Will vaccination work? To answer these questions, we must first understand how the immune system reacts to SARS-CoV-2 coronavirus.”


Will Covid Make Countries Drop Cash and Adopt Digital Currencies?
Kenneth Rogoff | The Guardian
“As the Covid-19 crisis accelerates the long-term shift away from cash (at least in tax-compliant, legal transactions), official discussions about digital currencies are heating up. Between the impending launch of Facebook’s Libra and China’s proposed central-bank digital currency, events now could reshape global finance for a generation.”


The Space Between Our Heads
Mark Dingemanse | Aeon
“If language is such a slippery medium, perhaps it is time to replace it with something more dependable. Why not cut out the middleman and connect brains directly? …How frustrating to have such a rich mental life and be stuck with such poor resources for expressing it! But no matter how much we can sympathize with this view, it misses a few crucial insights about language.”


Billions of People Globally Still Can’t Afford Smartphones. That’s a Major Problem
Katie Collins | CNET
“According to a survey published Thursday by the Alliance for Affordable Internet, an initiative of Tim Berners-Lee’s Web Foundation, 2.5 billion people live in countries where a smartphone costs a quarter or more of their monthly income. In some countries, the cost of a device is higher still, locking people out of phone ownership, and internet access with it.”

Image credit: White.RainForest ∙ 易雨白林. / Unsplash

Category: Transhumanism

The Deck Is Not Rigged: Poker and the Limits of AI

August 7, 2020 - 16:00

Tuomas Sandholm, a computer scientist at Carnegie Mellon University, is not a poker player—or much of a poker fan, in fact—but he is fascinated by the game for much the same reason as the great game theorist John von Neumann before him. Von Neumann, who died in 1957, viewed poker as the perfect model for human decision making, for finding the balance between skill and chance that accompanies our every choice. He saw poker as the ultimate strategic challenge, combining as it does not just the mathematical elements of a game like chess but the uniquely human, psychological angles that are more difficult to model precisely—a view shared years later by Sandholm in his research with artificial intelligence.

“Poker is the main benchmark and challenge program for games of imperfect information,” Sandholm told me on a warm spring afternoon in 2018, when we met in his offices in Pittsburgh. The game, it turns out, has become the gold standard for developing artificial intelligence.

Tall and thin, with wire-frame glasses and neat brown hair framing a friendly face, Sandholm is behind the creation of three computer programs designed to test their mettle against human poker players: Claudico, Libratus, and most recently, Pluribus. (When we met, Libratus was still a toddler and Pluribus didn’t yet exist.) The goal isn’t to solve poker, as such, but to create algorithms whose decision-making prowess in poker’s world of imperfect information and stochastic situations—situations that are randomly determined and unable to be predicted—can then be applied to other stochastic realms, like the military, business, government, cybersecurity, even health care.

While the first program, Claudico, was summarily beaten by human poker players—“one broke-ass robot,” an observer called it—Libratus has triumphed in a series of one-on-one, or heads-up, matches against some of the best online players in the United States.

Libratus relies on three main modules. The first involves a basic blueprint strategy for the whole game, allowing it to reach a much faster equilibrium than its predecessor. It includes an algorithm called Monte Carlo Counterfactual Regret Minimization, which evaluates future actions to figure out which one would cause the least amount of regret. Regret, of course, is a human emotion. Regret for a computer simply means realizing that an action that wasn’t chosen would have yielded a better outcome than one that was. “Intuitively, regret represents how much the AI regrets having not chosen that action in the past,” says Sandholm. The higher the regret, the higher the chance of choosing that action next time.

It’s a useful way of thinking—but one that is incredibly difficult for the human mind to implement. We are notoriously bad at anticipating our future emotions. How much will we regret doing something? How much will we regret not doing something else? For us, it’s an emotionally laden calculus, and we typically fail to apply it in quite the right way. For a computer, it’s all about the computation of values. What does it regret not doing the most, the thing that would have yielded the highest possible expected value?
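The regret bookkeeping described above can be pictured with a toy “regret matching” sketch. This illustrates the general idea only, not Libratus’s actual code; the action payoffs below are invented numbers:

```python
def regret_matching(cum_regret):
    """Turn cumulative regrets into a mixed strategy: actions with higher
    positive regret are played proportionally more often next time."""
    positive = [max(r, 0.0) for r in cum_regret]
    total = sum(positive)
    if total == 0:
        # Nothing regretted yet: fall back to a uniform strategy
        return [1.0 / len(cum_regret)] * len(cum_regret)
    return [p / total for p in positive]

def update_regret(cum_regret, payoffs, chosen):
    """After a round, add how much better each action would have done
    than the action actually chosen."""
    earned = payoffs[chosen]
    return [r + (p - earned) for r, p in zip(cum_regret, payoffs)]

# One toy round with three actions: we chose action 1, but action 0 paid more
regret = update_regret([0.0, 0.0, 0.0], payoffs=[1.0, 0.0, -1.0], chosen=1)
print(regret_matching(regret))  # [1.0, 0.0, 0.0]: all weight on the regretted action
```

In full counterfactual regret minimization this update runs over every decision point of the game tree, with Monte Carlo sampling used to keep the computation tractable.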

The second module is a sub-game solver that considers the mistakes the opponent has made so far and accounts for every hand she could possibly have. And finally, there is a self-improver. This is the area where data and machine learning come into play. It’s dangerous to try to exploit your opponent—it opens you up to the risk that you’ll get exploited right back, especially if you’re a computer program and your opponent is human. So instead of attempting to do that, the self-improver lets the opponent’s actions inform the areas where the program should focus. “That lets the opponent’s actions tell us where [they] think they’ve found holes in our strategy,” Sandholm explained. This allows the algorithm to develop a blueprint strategy to patch those holes.

It’s a very human-like adaptation, if you think about it. I’m not going to try to outmaneuver you head on. Instead, I’m going to see how you’re trying to outmaneuver me and respond accordingly. Sun Tzu would surely approve. Watch how you’re perceived, not how you perceive yourself—because in the end, you’re playing against those who are doing the perceiving, and their opinion, right or not, is the only one that matters when you craft your strategy. Overnight, the algorithm patches up its overall approach according to the resulting analysis.

There’s one final thing Libratus is able to do: play in situations with unknown probabilities. There’s a concept in game theory known as the trembling hand: There are branches of the game tree that, under an optimal strategy, one should theoretically never get to; but with some probability, your all-too-human opponent’s hand trembles, they take a wrong action, and you’re suddenly in a totally unmapped part of the game. Before, that would spell disaster for the computer: An unmapped part of the tree means the program no longer knows how to respond. Now, there’s a contingency plan.

Of course, no algorithm is perfect. When Libratus is playing poker, it’s essentially working in a zero-sum environment. It wins, the opponent loses. The opponent wins, it loses. But while some real-life interactions really are zero-sum—cyber warfare comes to mind—many others are not nearly as straightforward: My win does not necessarily mean your loss. The pie is not fixed, and our interactions may be more positive-sum than not.

What’s more, real-life applications have to contend with something that a poker algorithm does not: the weights that are assigned to different elements of a decision. In poker, this is a simple value-maximizing process. But what is value in the human realm? Sandholm had to contend with this before, when he helped craft the world’s first kidney exchange. Do you want to be more efficient, giving the maximum number of kidneys as quickly as possible—or more fair, which may come at a cost to efficiency? Do you want as many lives as possible saved—or do some take priority at the cost of reaching more? Is there a preference for the length of the wait until a transplant? Do kids get preference? And on and on. It’s essential, Sandholm says, to separate means and ends. To figure out the ends, a human has to decide what the goal is.

“The world will ultimately become a lot safer with the help of algorithms like Libratus,” Sandholm told me. I wasn’t sure what he meant. The last thing that most people would do is call poker, with its competition, its winners and losers, its quest to gain the maximum edge over your opponent, a haven of safety.

“Logic is good, and the AI is much better at strategic reasoning than humans can ever be,” he explained. “It’s taking out irrationality, emotionality. And it’s fairer. If you have an AI on your side, it can lift non-experts to the level of experts. Naïve negotiators will suddenly have a better weapon. We can start to close off the digital divide.”

It was an optimistic note to end on—a zero-sum, competitive game ultimately yielding a fairer and more rational world.

I wanted to learn more, to see if it was really possible that mathematics and algorithms could ultimately be the future of more human, more psychological interactions. And so, later that day, I accompanied Nick Nystrom, the chief scientist of the Pittsburgh Supercomputing Center—the place that runs all of Sandholm’s poker-AI programs—to the actual processing center that makes undertakings like Libratus possible.

A half-hour drive found us in a parking lot by a large glass building. I’d expected something more futuristic, not the same square, corporate glass boxes I’ve seen countless times before. The inside, however, was more promising. First the security checkpoint. Then the ride in the elevator — down, not up, to roughly three stories below ground, where we found ourselves in a maze of corridors with card readers at every juncture to make sure you don’t slip through undetected. A red-lit panel formed the final barrier, leading to a small sliver of space between two sets of doors. I could hear a loud hum coming from the far side.

“Let me tell you what you’re going to see before we walk in,” Nystrom told me. “Once we get inside, it will be too loud to hear.”

I was about to witness the heart of the supercomputing center: 27 large containers, in neat rows, each housing multiple processors with speeds and abilities too great for my mind to wrap around. Inside, the temperature is by turns arctic and tropic, so-called “cold” rows alternating with “hot”—fans operate around the clock to cool the processors as they churn through gigabytes, terabytes, petabytes, and ever-larger scales of data. In the cool rows, robotic-looking lights blink green and blue in orderly progression. In the hot rows, a jumble of multicolored wires crisscrosses in tangled skeins.

In the corners stood machines that had outlived their heyday. There was Sherlock, an old Cray model, that warmed my heart. There was a sad nameless computer, whose anonymity was partially compensated for by the Warhol soup cans adorning its cage (an homage to Warhol’s Pittsburghian origins).

And where, I asked, does Libratus live? Which of these computers is Bridges, the computer that runs the AI Sandholm and I had been discussing?

Bridges, it turned out, isn’t a single computer. It’s a system with processing power beyond comprehension. It takes over two and a half petabytes to run Libratus. A single petabyte is a million gigabytes: You could watch over 13 years of HD video, store 10 billion photos, catalog the contents of the entire Library of Congress word for word. That’s a whole lot of computing power. And that’s only to succeed at heads-up poker, in limited circumstances.
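The petabyte comparison is simple unit arithmetic. A quick sketch of the video figure, where the per-hour size of HD video is an assumption of this sketch rather than a number from the article:

```python
PETABYTE_GB = 1_000_000  # a petabyte is a million gigabytes, as noted above

gb_per_hour = 9  # assumed size of one hour of high-quality HD video
hours = PETABYTE_GB / gb_per_hour
years = hours / (24 * 365)
print(f"{years:.1f} years of continuous HD video")  # roughly 13 years
```

At lower bitrates the same petabyte stretches much further, which is why such back-of-the-envelope figures always depend on the assumed encoding.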

Yet despite the breathtaking computing power at its disposal, Libratus is still severely limited. Yes, it beat its opponents where Claudico failed. But the poker professionals weren’t allowed to use many of the tools of their trade, including the opponent analysis software that they depend on in actual online games. And humans tire. Libratus can churn for a two-week marathon, where the human mind falters.

But there’s still much it can’t do: play more opponents, play live, or win every time. There’s more humanity in poker than Libratus has yet conquered. “There’s this belief that it’s all about statistics and correlations. And we actually don’t believe that,” Nystrom explained as we left Bridges behind. “Once in a while correlations are good, but in general, they can also be really misleading.”

Two years later, the Sandholm lab would produce Pluribus. Pluribus would be able to play against five players—and would run on a single computer. Much of the human edge would evaporate in a short, very short time. The algorithms have improved, as have the computers. AI, it seems, has gained by leaps and bounds.

So does that mean that, ultimately, the algorithmic can indeed beat out the human, that computation can untangle the web of human interaction by discerning “the little tactics of deception, of asking yourself what is the other man going to think I mean to do,” as von Neumann put it?

Long before I’d spoken to Sandholm, I’d met Kevin Slavin, a polymath of sorts whose past careers have included founding a game design company and an interactive art space and launching the Playful Systems group at MIT’s Media Lab. Slavin has a decidedly different view from the creators of Pluribus. “On the one hand, [von Neumann] was a genius,” Slavin reflects. “But the presumptuousness of it.”

Slavin is firmly on the side of the gambler, who recognizes uncertainty for what it is and thus is able to take calculated risks when necessary, all the while tempering confidence in the outcome. The most you can do is put yourself in the path of luck—but to think you can guess with certainty the actual outcome is a presumptuousness the true poker player forgoes. For Slavin, the wonder of computers is “that they can generate this fabulous, complex randomness.” His opinion of the algorithmic assaults on chance? “This is their moment,” he said. “But it’s the exact opposite of what’s really beautiful about a computer, which is that it can do something that’s actually unpredictable. That, to me, is the magic.”

Will they actually succeed in making the unpredictable predictable, though? That’s what I want to know. Because everything I’ve seen tells me that absolute success is impossible. The deck is not rigged.

“It’s an unbelievable amount of work to get there. What do you get at the end? Let’s say they’re successful. Then we live in a world where there’s no God, agency, or luck,” Slavin responded.

“I don’t want to live there,” he added. “I just don’t want to live there.”

Luckily, it seems that for now, he won’t have to. There are more things in life than are yet written in the algorithms. We have no reliable lie detection software—whether in the face, the skin, or the brain. In a recent test of bluffing in poker, computer face recognition failed miserably. We can get at discomfort, but we can’t get at the reasons for that discomfort: lying, fatigue, stress—they all look much the same. And humans, of course, can also mimic stress where none exists, complicating the picture even further.

Pluribus may turn out to be powerful, but von Neumann’s challenge still stands: The true nature of games, the most human of the human, remains to be conquered.

This article was originally published on Undark. Read the original article.

Image Credit: José Pablo Iglesias / Unsplash

Category: Transhumanism

The Global Work Crisis: Automation, the Case Against Jobs, and What to Do About It

August 6, 2020 - 16:30

The alarm bell rings. You open your eyes, come to your senses, and slide from dream state to consciousness. You hit the snooze button, and eventually crawl out of bed to the start of yet another working day.

This daily narrative is experienced by billions of people all over the world. We work, we eat, we sleep, and we repeat. As our lives pass day by day, the beating drums of the weekly routine take over and years pass until we reach our goal of retirement.

A Crisis of Work

We repeat the routine so that we can pay our bills, set our kids up for success, and provide for our families. And after a while, we start to forget what we would do with our lives if we didn’t have to go back to work.

In the end, we look back at our careers and reflect on what we’ve achieved. It may have been the hundreds of human interactions we’ve had; the thousands of emails read and replied to; the millions of minutes of physical labor—all to keep the global economy ticking along.

According to Gallup’s World Poll, only 15 percent of people worldwide are actually engaged with their jobs. The current state of “work” is not working for most people. In fact, it seems we as a species are trapped by a global work crisis, which condemns people to cast away their time just to get by in their day-to-day lives.

Technologies like artificial intelligence and automation may help relieve the work burdens of millions of people—but to benefit from their impact, we need to start changing our social structures and the way we think about work now.

The Specter of Automation

Automation has been ongoing since the Industrial Revolution. In recent decades it has taken on a more elegant guise, first with physical robots in production plants, and more recently with software automation entering most offices.

The driving goal behind much of this automation has always been productivity and hence, profits: technology that can act as a multiplier on what a single human can achieve in a day is of huge value to any company. Powered by this strong financial incentive, the quest for automation is growing ever more pervasive.

But if automation accelerates or even continues at its current pace and there aren’t strong social safety nets in place to catch the people who are negatively impacted (such as by losing their jobs), there could be a host of knock-on effects, including more concentrated wealth among a shrinking elite, more strain on government social support, an increase in depression and drug dependence, and even violent social unrest.

It seems as though we are rushing headlong into a major crisis, driven by the engine of accelerating automation. But what if instead of automation challenging our fragile status quo, we view it as the solution that can free us from the shackles of the Work Crisis?

The Way Out

In order to undertake this paradigm shift, we need to consider what society could potentially look like, as well as the problems associated with making this change. In the context of these crises, our primary aim should be a system where people are not obligated to work to generate the means to survive. This removal of work should not threaten access to food, water, shelter, education, healthcare, energy, or human value. In our current system, work is the gatekeeper to these essentials: one can only access them (and even then, often in a limited form) if one has a “job” that affords them.

Changing this system is thus a monumental task. This comes with two primary challenges: providing people without jobs with financial security, and ensuring they maintain a sense of their human value and worth. There are several measures that could be implemented to help meet these challenges, each with important steps for society to consider.

Universal basic income (UBI)

UBI is rapidly gaining support, and it would allow people to become shareholders in the fruits of automation, which would then be distributed more broadly.

UBI trials have been conducted in various countries around the world, including Finland, Kenya, and Spain. The findings have generally been positive on the health and well-being of the participants, and showed no evidence that UBI disincentivizes work, a common concern among the idea’s critics. The most recent popular voice for UBI has been that of former US presidential candidate Andrew Yang, who now runs a non-profit called Humanity Forward.

UBI could also remove wasteful bureaucracy in administering welfare payments (since everyone receives the same amount, there’s no need to prevent false claims), promote the pursuit of projects aligned with people’s skill sets and passions, and recognize the value of tasks not captured by economic measures like gross domestic product (GDP), such as looking after children and the elderly at home.

How a UBI can be initiated with political will and social backing, and paid for by governments, has been hotly debated by economists and UBI enthusiasts. Variables like how much the UBI payments should be, whether to implement taxes such as Yang’s proposed value-added tax (VAT), whether to replace existing welfare payments, the impact on inflation, and the impact on “jobs” from people who would otherwise look for work require additional discussion. However, some have predicted the inevitability of UBI as a result of automation.

Universal healthcare

Another major component of any society is the healthcare of its citizens. A move away from work would further require the implementation of a universal healthcare system to decouple healthcare from jobs. Currently in the US, and indeed many other economies, healthcare is tied to employment.

Universal healthcare systems such as Australia’s Medicare lend support to the adage that “prevention is better than cure”: on a per capita basis, healthcare costs far less in Australia than in the US. This alone marks an advance in the way healthcare is considered. There are further benefits to a healthier population, including less time and money spent on “sick-care.” Healthy people are more likely, and more able, to achieve their full potential.

Reshape the economy away from work-based value

One of the greatest challenges in a departure from work is for people to find value elsewhere in life. Many people view their identities as being inextricably tied to their jobs, and life without a job is therefore a threat to one’s sense of existence. This presents a shift that must be made at both a societal and personal level.

A person can only seek alternate value in life when afforded the time to do so. To this end, we need to start reducing “work-for-a-living” hours towards zero, which is a trend we are already seeing in Europe. This should not come at the cost of reducing wages pro rata, but rather could be complemented by UBI or additional schemes where people receive dividends for work done by automation. This transition makes even more sense when coupled with the idea of deviating from using GDP as a measure of societal growth, and instead adopting a well-being index based on universal human values like health, community, happiness, and peace.

The crux of this issue is in transitioning away from the view that work gives life meaning and life is about using work to survive, towards a view of living a life that itself is fulfilling and meaningful. This speaks directly to notions from Maslow’s hierarchy of needs, where work largely addresses physiological and safety needs such as shelter, food, and financial well-being. More people should have a chance to grow beyond the most basic needs and engage in self-actualization and transcendence.

The question is largely around what would provide people with a sense of value, and the answers would differ as much as people do; self-mastery, building relationships and contributing to community growth, fostering creativity, and even engaging in the enjoyable aspects of existing jobs could all come into play.

Universal education

With a move towards a society that promotes the values of living a good life, the education system would have to evolve as well. Researchers have long argued for a more nimble education system, but universities and even most online courses currently exist for the dominant purpose of ensuring people are adequately skilled to contribute to the economy. These “job factories” only exacerbate the Work Crisis. In fact, the response often given by educational institutions to the challenge posed by automation is to find new ways of upskilling students, such as ensuring they are all able to code. As alluded to earlier, this is a limited and unimaginative solution to the problem we are facing.

Instead, education should be centered on helping people acknowledge the current crisis of work and automation, teach them how to derive value that is decoupled from work, and enable people to embrace progress as we transition to the new economy.

Disrupting the Status Quo

While we seldom stop to think about it, much of the suffering faced by humanity is brought about by the systemic foe that is the Work Crisis. The way we think about work has brought society far and enabled tremendous developments, but at the same time it has failed many people. Now the status quo is threatened by those very developments as we progress to an era where machines are likely to take over many job functions.

This impending paradigm shift could be a threat to the stability of our fragile system, but only if it is not fully anticipated. If we prepare for it appropriately, it could instead be the key not just to our survival, but to a better future for all.

Image Credit: mostafa meraji from Pixabay

Kategorie: Transhumanismus

These Scientists Just Completed a 3D ‘Google Earth’ for the Brain

5 Srpen, 2020 - 16:00

Human brain maps are a dime a dozen these days. Maps that detail neurons in a certain region. Maps that draw out functional connections between those cells. Maps that dive deeper into gene expression. Or even meta-maps that combine all of the above.

But have you ever wondered: how well do those maps represent my brain? After all, no two brains are alike. And if we’re ever going to reverse-engineer the brain as a computer simulation—as Europe’s Human Brain Project is trying to do—shouldn’t we ask whose brain they’re hoping to simulate?

Enter a new kind of map: the Julich-Brain, a probabilistic map of human brains that accounts for individual differences using a computational framework. Rather than generating a static PDF of a brain map, the Julich-Brain atlas is also dynamic, in that it continuously changes to incorporate more recent brain mapping results. So far, the map has data from over 24,000 thinly sliced sections from 23 postmortem brains covering most years of adulthood at the cellular level. But the atlas can also continuously adapt to progress in mapping technologies to aid brain modeling and simulation, and link to other atlases and alternatives.
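
The core idea of a probabilistic atlas can be illustrated with a toy model (a hypothetical sketch, not the Julich-Brain pipeline itself): once each donor brain has been registered to a common template, the probability that a given location belongs to a region is simply the fraction of donors in which it does.

```python
# Hypothetical sketch: per-voxel probability = fraction of donor brains
# whose (pre-aligned) region mask contains that voxel.
donor_masks = [
    [1, 1, 0, 0],  # donor 1: which of 4 example voxels fall in the region
    [1, 0, 0, 1],  # donor 2
    [1, 1, 0, 0],  # donor 3
]
n = len(donor_masks)

# Average each voxel column across donors: 0.0 = never in the region,
# 1.0 = always in the region, intermediate values = individual variability.
prob_map = [sum(col) / n for col in zip(*donor_masks)]
print(prob_map)  # voxel 0 lies in the region for every donor, voxel 2 for none
```

Voxels where donors disagree get intermediate probabilities, which is exactly the individual variability a single-brain atlas cannot express.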

In other words, rather than “just another” human brain map, the Julich-Brain atlas is its own neuromapping API—one that could unite previous brain-mapping efforts with more modern methods.

“It is exciting to see how far the combination of brain research and digital technologies has progressed,” said Dr. Katrin Amunts of the Institute of Neuroscience and Medicine at Research Centre Jülich in Germany, who spearheaded the study.

The Old Dogma

The Julich-Brain atlas embraces traditional brain-mapping while also yanking the field into the 21st century.

First, the new atlas includes the brain’s cytoarchitecture, or how brain cells are organized. As brain maps go, these kinds of maps are the oldest and most fundamental. Rather than exploring how neurons talk to each other functionally—which is all the rage these days with connectome maps—cytoarchitecture maps draw out the physical arrangement of neurons.

Like a census, these maps literally capture how neurons are distributed in the brain, what they look like, and how they layer within and between different brain regions.

Because neurons aren’t packed together the same way in different brain regions, cytoarchitecture provides a way to parse the brain into areas that can be further studied. When we speak of the brain’s “memory center,” the hippocampus, or its “emotion center,” the amygdala, those distinctions are based on cytoarchitectural maps.

Some may call this type of mapping “boring.” But cytoarchitecture maps form the very basis of any sort of neuroscience understanding. Like hand-drawn maps from early explorers sailing to the western hemisphere, they chart the brain’s geography, from which we try to decipher functional connections. If brain regions are cities, cytoarchitecture maps draw the cities themselves, while functional maps trace the trade flowing along the highways that link them.

You might’ve heard of the most common cytoarchitecture map used today: the Brodmann map from 1909 (yup, that old), which divided the brain into classical regions based on the cells’ morphology and location. The map, while impactful, wasn’t able to account for brain differences between people. More recent brain-mapping technologies have allowed us to dig deeper into neuronal differences and divide the brain into more regions—180 areas in the cortex alone, compared with 43 in the original Brodmann map.

The new study took inspiration from that age-old map and transformed it into a digital ecosystem.

A Living Atlas

Work began on the Julich-Brain atlas in the mid-1990s, with a little help from the crowd.

The preparation of human tissue and its microstructural mapping, analysis, and data processing is incredibly labor-intensive, the authors lamented, making it impossible to do for the whole brain at high resolution in just one lab. To build their “Google Earth” for the brain, the team hooked up with EBRAINS, a shared computing platform set up by the Human Brain Project to promote collaboration between neuroscience labs in the EU.

First, the team acquired MRI scans of 23 postmortem brains, sliced the brains into wafer-thin sections, and scanned and digitized them. They corrected distortions from the chopping using data from the MRI scans and then lined up neurons in consecutive sections—picture putting together a 3D puzzle—to reconstruct the whole brain. Overall, the team had to analyze 24,000 brain sections, which prompted them to build a computational management system for individual brain sections—a win, because they could now track individual donor brains too.

Their method was quite clever. They first mapped their results to a brain template from a single person, called the MNI-Colin27 template. Because the reference brain was extremely detailed, this allowed the team to better figure out the location of brain cells and regions in a particular anatomical space.

However, MNI-Colin27’s brain isn’t your or my brain—or any of the brains the team analyzed. To dilute any of Colin’s potential brain quirks, the team also mapped their dataset onto an “average brain,” dubbed the ICBM2009c (catchy, I know).

This step allowed the team to “standardize” their results with everything else from the Human Connectome Project and the UK Biobank, kind of like adding their Google Maps layer to the existing map. To highlight individual brain differences, the team overlaid their dataset on existing ones, and looked for differences in the cytoarchitecture.

The microscopic architecture of neurons changes between two areas (dotted line), forming the basis of different identifiable brain regions. To account for individual differences, the team also calculated a probability map (right hemisphere). Image credit: Forschungszentrum Juelich / Katrin Amunts

Based on structure alone, the brains were both remarkably different and shockingly similar at the same time. For example, the cortices—the outermost layer of the brain—were physically different across donor brains of different ages and sexes. The region that diverged most between people was Broca’s region, which is traditionally linked to speech production. In contrast, parts of the visual cortex were almost identical between brains.

The Brain-Mapping Future

Rather than relying on the brain’s visible “landmarks,” which can still differ between people, the probabilistic map is far more precise, the authors said.

What’s more, the map could also pool yet unmapped regions in the cortex—about 30 percent or so—into “gap maps,” providing neuroscientists with a better idea of what still needs to be understood.

“New maps are continuously replacing gap maps with progress in mapping while the process is captured and documented … Consequently, the atlas is not static but rather represents a ‘living map,’” the authors said.

Thanks to its structurally sound architecture down to individual cells, the atlas can contribute to brain modeling and simulation down the line—especially personalized brain models for neurological disorders such as epilepsy. Researchers can also use the framework for other species, and they can even incorporate new data-crunching methods into the workflow, such as mapping brain regions using artificial intelligence.

Fundamentally, the goal is to build shared resources to better understand the brain. “[These atlases] help us—and more and more researchers worldwide—to better understand the complex organization of the brain and to jointly uncover how things are connected,” the authors said.

Image credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University

Kategorie: Transhumanismus

Airships Are No Longer a Relic of the Past; You Could Ride in One by 2023

4 Srpen, 2020 - 16:00

As concern over climate change and rising temperatures grows, the airline industry is taking heat (pun intended). Flying accounts for 2.5 percent of global carbon dioxide emissions; that’s lower than car travel and maritime shipping, but still a chunk worth acknowledging.

In some parts of the world people have started “flight-shaming,” that is, giving up air travel themselves and encouraging others to find alternative means of transport that are more climate-friendly. Sweden’s flygskam (“flight shame”) movement, which started in 2017, even led to a nine percent decrease in domestic air travel.

It’s possible to cut back on air travel, but given the globalized nature of business, the economy, and even families and friendships, we’re not going to stop needing a fast, relatively pain-free way to get across countries or around the globe; some things simply can’t be done over Zoom.

An unexpected potential solution is being floated (again, pun intended) by companies that believe people will be willing to trade a lot of time and money for a more planet-friendly way to travel: by airship.

What’s an Airship?

The term “airship” encompasses motorized craft that float due to being filled with a gas that’s lighter than air, like helium or hydrogen; blimps and zeppelins are the most common. Airships were used for bombings during World War I, and started carrying passengers in the late 1920s. In 1929 Germany’s Graf Zeppelin fully circled the globe, breaking the trip up into four legs and starting and ending in New Jersey; it took 22 days in total and carried 61 people. By the mid-1930s there were regular trans-Atlantic passenger flights.

Airships don’t need fuel to lift them off the ground; they only need it to propel them forward. Hydrogen was initially the lifting gas of choice, as it was cheap and abundant (and is lighter than helium). But the explosion of the Hindenburg in 1937 not only made the use of hydrogen all but defunct, it dismantled the passenger airship industry virtually overnight. (Interestingly, the Hindenburg wasn’t the deadliest airship disaster; it killed 36 people, while a crash four years earlier killed 73.)

Since then, airships have been relegated to use for large ads-in-the-sky, and before drones became commonplace they were used to take aerial photos at sporting events.

Comeback Kid

But passenger airships may soon be making a comeback, and more than one company is already banking on it. OceanSky Cruises—based, perhaps unsurprisingly, in Sweden—is currently taking reservations for expeditions to the North Pole in the 2023-2024 season. According to Digital Trends, a cabin for two is going for $65,000.

Carl-Oscar Lawaczeck, OceanSky Cruises’ CEO, points out several advantages airships have over planes; their environmental sustainability is just the beginning. “The possibilities are amazing when you compare airships with planes,” he said. “Everything is lighter and cheaper and easier and that gives a lot of possibilities.”

Airships have fewer moving parts, and they don’t need a runway to land on or take off from. They’re far more spacious and can carry larger and heavier loads.

If you cringe at the thought of 12 hours of stiff-backed, knee-crunched, parched-air flights, imagine something closer to a flying cruise ship: your own room, a bed, a restaurant and bar, maybe even a glass-floored observation room where you could see the landscape below drifting past in glorious detail.

Would all this make it worth the fact that 12 hours of travel would turn into 60? Airships travel at about one-fifth the speed of planes. And nowadays the lifting gas of choice is helium, despite its being expensive and scarce.
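
The time penalty is simple to check: over a fixed distance, travel time scales inversely with speed, so one-fifth the speed means five times the hours.

```python
# Over a fixed distance, travel time scales inversely with speed.
plane_hours = 12
speed_ratio = 5  # planes are roughly five times faster than airships
airship_hours = plane_hours * speed_ratio
print(airship_hours)  # 60
```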

Join the Club

OceanSky is far from the only company pouring money into resurrecting the airship.

Google co-founder Sergey Brin has also started an airship company, LTA Research and Exploration (LTA stands for “lighter than air”). Its primary stated purpose is to build ultra-cheap craft for humanitarian missions. The aforementioned lack of need for runways makes airships a promising and practical option for delivering supplies to remote, hard-to-reach locations.

To that end, Barry Prentice, who leads the Canadian company Buoyant Aircraft Systems International, hopes to use airships to transport pre-built structures for schools and housing to remote parts of Canada that lack good roads.

And earlier this year, French airship company Flying Whales (I mean, how can you not adore that name?) received $23 million in funding from the government of Quebec to build cargo-carrying Zeppelins.

Given our current pandemic-dominated reality, it’s hard to imagine a future of seamless global travel of any kind, much less on an airship. But that future will, thankfully, arrive (though when is anyone’s guess). As calls for climate action get louder and the costs associated with airships drop—as the cost of any new technology tends to do with time—we may find ourselves going retro and being ferried across the globe by giant helium-filled balloons.

Image Credit: Courtesy of Hybrid Air Vehicles Ltd

Kategorie: Transhumanismus

Construction of the World’s Biggest Nuclear Fusion Plant Just Started in France

3 Srpen, 2020 - 16:00

Fusion power promises to provide limitless green energy using cheap and abundant fuel, but it’s a long-running joke that it’s always 20 years away. Last week, though, construction started on the ITER fusion plant in France, which hopes to prove the commercial viability of fusion power.

While conventional nuclear power plants generate energy by splitting atoms, nuclear fusion involves smashing two atoms together. This produces dramatically more energy than the fission process we’ve already mastered and doesn’t produce long-lived radioactive waste. It also doesn’t rely on elements like uranium and plutonium for fuel, instead using isotopes of hydrogen: deuterium, which is abundant in seawater, and tritium, which can be bred from lithium.

The only catch is that trying to contain a nuclear fusion reaction is like trying to keep the sun in a box. It’s the same reaction that powers all stars, and trying to corral that kind of raw power and turn it into something we can use effectively is a challenge scientists have been struggling with for decades.

To get the fuel to fuse, it first has to be heated to 10 times the temperature of the sun’s core, which creates a superhot plasma. To maintain the fusion reactions, this plasma needs to be strictly confined and isolated from other components. Fortunately, plasmas can be manipulated using magnetic fields, and so gigantic electromagnets are used to keep the plasma spinning around a donut-shaped reactor called a tokamak.

The problem is that all this heating and magnetic confinement requires colossal amounts of energy. While we’ve managed to get fusion reactions running on Earth they’ve always used considerably more energy than they’ve produced. The International Thermonuclear Experimental Reactor (ITER) in France is designed to change that.

The project has been a long time in the making. The idea was formulated at the tail end of the Cold War as a multinational collaboration, but design work didn’t properly start until the turn of the millennium, and its parent organization wasn’t launched until 2007. Last week French president Emmanuel Macron hosted a ceremony to celebrate the beginning of the assembly of the reactor.

Over the past five years factories, universities, and national laboratories all over the world have been working to build the components for the plant, some of which weigh several hundred tons, including a magnet powerful enough to lift an aircraft carrier. It will take another five years to piece all the parts together and get the reactor ready for its first test run.

“Constructing the machine piece by piece will be like assembling a three-dimensional puzzle on an intricate timeline,” director-general of ITER Bernard Bigot said in a press release. “Every aspect of project management, systems engineering, risk management, and logistics of the machine assembly must perform together with the precision of a Swiss watch.”

The hope is that by 2025 the plant will be able to produce “first plasma,” a test designed to make sure the reactor works. The full-power experiments that follow are designed to produce roughly 500 megawatts of thermal power, about ten times the heating power put in. It will be another decade until the plant is expected to produce enough energy to be commercially viable, though. That will involve building an even larger plasma chamber to provide 10 to 15 times more electrical power.
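
Fusion performance is usually quoted as the gain factor Q, the ratio of fusion power out to external heating power in; ITER’s stated design goal is Q of at least 10, about 500 megawatts of fusion power from roughly 50 megawatts of heating. A quick sketch of the arithmetic:

```python
# Fusion gain Q = fusion power produced / external heating power supplied.
# Q < 1 means net energy loss, Q = 1 is breakeven; ITER targets Q >= 10.
def fusion_gain(p_fusion_mw, p_heating_mw):
    return p_fusion_mw / p_heating_mw

iter_q = fusion_gain(500, 50)  # ITER's published design-goal figures
print(iter_q)  # 10.0
```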

While 15 years away might not seem like much of an improvement over 20, those behind the project are confident that these are the first steps towards fusion power fulfilling its promise of revolutionizing our energy systems.

It faces some competition, though. Both the UK government and a variety of startups have announced plans to pursue nuclear fusion, often aiming for much smaller and easier-to-build reactors than ITER. And despite its heavy backing from multiple nation-states, the project’s long history of cost overruns and delays means it’s certainly not a sure-fire winner.

But the project will soon be one of the world’s largest science experiments, and winner or not, there’s little doubt it will significantly push forward our understanding of fusion power. Harnessing the power of the sun on Earth may not sound like a crazy idea for much longer.

Image Credit: ITER

Kategorie: Transhumanismus

This AI Could Bring Us Computers That Can Write Their Own Software

2 Srpen, 2020 - 16:00

When OpenAI first published a paper on their new language generation AI, GPT-3, the hype was slow to build. The paper indicated GPT-3, the biggest natural language AI model yet, was advanced, but it only had a few written examples of its output. Then OpenAI gave select access to a beta version of GPT-3 to see what developers would do with it, and minds were blown.

Developers playing with GPT-3 have taken to Twitter with examples of its capabilities: short stories, press releases, articles about itself, even a search engine. Perhaps most surprising was the discovery that GPT-3 can write simple computer code. When web developer Sharif Shameem modified it to spit out HTML instead of natural language, the program generated code for webpage layouts from prompts like “a button that looks like a watermelon.”
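
The trick behind demos like this is few-shot prompting: prime the model with a handful of description-to-HTML pairs, then append a new description and let the model continue the pattern. Here is a minimal, hypothetical sketch of building such a prompt; the example pairs and the prompt format are invented for illustration, and the actual model call is omitted:

```python
# Invented example pairs; a real setup would use whatever pairs and
# formatting the model was actually primed with.
EXAMPLES = [
    ("a red button that says stop",
     '<button style="background:red;color:white">Stop</button>'),
    ("a large blue heading that says hello",
     '<h1 style="color:blue">Hello</h1>'),
]

def build_prompt(description):
    """Assemble a few-shot prompt: worked examples, then the new request."""
    parts = [f"description: {d}\nhtml: {h}" for d, h in EXAMPLES]
    parts.append(f"description: {description}\nhtml:")
    return "\n\n".join(parts)

prompt = build_prompt("a button that looks like a watermelon")
# The model sees the repeated pattern and (ideally) completes the final
# "html:" line with matching markup.
```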

“I used to say that AI research seemed to have an odd blind spot towards automation of programming work, and I suspected a subconscious self-preservation bias,” tweeted John Carmack, legendary computer game developer and consulting CTO at Oculus VR. “The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver.”

While the discovery of GPT-3’s coding skills may have been somewhat serendipitous, there is, in fact, a whole field dedicated to the development of machine learning algorithms that can code. The research has been making progress, and a new algorithm just recently took another step.

The algorithm, called machine inferred code similarity (MISIM), is the brainchild of researchers from Intel, Georgia Institute of Technology, the University of Pennsylvania, and MIT. Trained on the huge amount of code already publicly available on the web, MISIM can figure out what a program is supposed to do. Then, after finding similar programs and comparing the new program to them, MISIM can offer ways to make it faster or more efficient.

It isn’t the first machine learning algorithm to make recommendations or compare similarity, but according to a new preprint paper from the researchers, MISIM was up to 40 times more accurate at the task than several of its most advanced competitors.

Near term, the AI could be a useful sidekick for today’s programmers. Further out, the field could open programming to anyone who can describe what they want to create in everyday language, or bring about machines that write and maintain their own code.

The Machine Programming Dream

The pursuit of computers that can code is almost as old as modern computer science itself. While there have been advances in programming automation, the recent explosion in machine learning is accelerating progress in a field called machine programming.

In a 2018 paper on the field, a group of Intel and MIT researchers wrote, “The general goal of machine programming is to remove the burden of writing correct and efficient code from a human programmer and to instead place it on a machine.”

Researchers are pursuing systems that can automate the steps required to transform a person’s intent—that is, what they want a piece of software to do—into a working program. They’re also aiming to automate the maintenance of software over time, like, for instance, finding and fixing bugs, keeping programs compatible, or updating code to keep up with hardware upgrades.

That’s easier said than done, of course. Writing software is as much art as it is science. It takes a lot of experience and creativity to translate human intent into the language of machines.

But as GPT-3 shows, language is actually a skill machine learning is rapidly mastering, and programming languages are not so different from English, Chinese, or Swahili. Which is why GPT-3 picking up a few coding skills as a byproduct of its natural language training is notable.

While algorithmic advances in machine learning, like GPT-3, are key to machine programming’s success, they’d be useless without good training data. Luckily, there’s a huge amount of publicly available code on sites like GitHub—replete with revision histories and notes—and code snippets and comment threads on sites like Stack Overflow. Even the internet at large, with accessible webpages and code, is an abundant source of learning material for AI to gobble up.

In theory, just as GPT-3 ingests millions of example articles to learn how to write, machine programming AIs could consume millions of programs and learn to code. But how to make this work in practice is an open question. Which is where MISIM comes in.

A Robot Sidekick to Write Kickass Code

MISIM advances machine programming a step by being able to accurately identify what a snippet of code is supposed to do. Once it’s classified the code, it compares it to millions of other snippets in its database, surfaces those that are most similar, and suggests improvements to the code snippet based on those other examples.

Because MISIM classifies the code’s purpose at a high level, it can find code snippets that do the same thing but are written differently—there’s more than one way to solve the same problem—and even snippets in other programming languages. Simplistically, this is a bit like someone reading a New Yorker article, identifying its topic, and then finding all the other articles on that topic—whether they’re in Der Spiegel or Xinhua.

Another benefit of working at that higher level of classification is the program doesn’t need the code to be compiled. That is, it doesn’t have to translate it into the machine code that’s executed by the computer. Since MISIM doesn’t require a compiler, it can analyze code snippets as they’re being written and offer similar bits of code that could be faster or more efficient. It’s a little like an email autocomplete feature finishing your sentences.

Intel plans to offer MISIM to internal developers for just this purpose. The hope is it’ll prove a useful sidekick, making the code-writing process faster, easier, and more effective. But there’s potentially more it can do. Translation between computer languages, for example, could also be a valuable application. It could perhaps help coders update government software written in archaic languages to something more modern.

But Justin Gottschlich, director of machine programming at Intel, has an even grander vision: the full democratization of coding.

Combine MISIM (or something like it) with natural language AI, and future programmers could simply write down what they want a piece of software to do, and the computer whips up the code. That would open programming to anyone with a decent command of their native language and a desire to make something cool.

As Gottschlich told MIT Technology Review, “I would like to see 8 billion people create software in whatever way is most natural for them.”

Image credit: Markus Spiske / Unsplash

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through August 1)

1 Srpen, 2020 - 16:00

OpenAI’s Latest Breakthrough Is Astonishingly Powerful, But Still Fighting Its Flaws
James Vincent | The Verge
“What makes GPT-3 amazing, they say, is not that it can tell you that the capital of Paraguay is Asunción (it is) or that 466 times 23.5 is 10,987 (it’s not), but that it’s capable of answering both questions and many more beside simply because it was trained on more data for longer than other programs. If there’s one thing we know that the world is creating more and more of, it’s data and computing power, which means GPT-3’s descendants are only going to get more clever.”


I Tried to Live Without the Tech Giants. It Was Impossible.
Kashmir Hill | The New York Times
“Critics of the big tech companies are often told, ‘If you don’t like the company, don’t use its products.’ My takeaway from the experiment was that it’s not possible to do that. It’s not just the products and services branded with the big tech giant’s name. It’s that these companies control a thicket of more obscure products and services that are hard to untangle from tools we rely on for everything we do, from work to getting from point A to point B.”


Meet the Engineer Who Let a Robot Barber Shave Him With a Straight Razor
Luke Dormehl | Digital Trends
“No, it’s not some kind of lockdown-induced barber startup or a Jackass-style stunt. Instead, Whitney, assistant professor of mechanical and industrial engineering at Northeastern University School of Engineering, was interested in straight-razor shaving as a microcosm for some of the big challenges that robots have faced in the past (such as their jerky, robotic movement) and how they can now be solved.”


Can Trees Live Forever? New Kindling in an Immortal Debate
Cara Giaimo | The New York Times
“Even if a scientist dedicated her whole career to very old trees, she would be able to follow her research subjects for only a small percentage of their lives. And a long enough multigenerational study might see its own methods go obsolete. For these reasons, Dr. Munné-Bosch thinks we will ‘never prove’ whether long-lived trees experience senescence…”


There’s No Such Thing as Family Secrets in the Age of 23andMe
Caitlin Harrington | Wired
“…technology has a way of creating new consequences for old decisions. Today, some 30 million people have taken consumer DNA tests, a threshold experts have called a tipping point. People conceived through donor insemination are matching with half-siblings, tracking down their donors, forming networks and advocacy organizations.”


The Problems AI Has Today Go Back Centuries
Karen Hao | MIT Technology Review
“In 2018, just as the AI field was beginning to reckon with problems like algorithmic discrimination, [Shakir Mohamed, a South African AI researcher at DeepMind], penned a blog post with his initial thoughts. In it he called on researchers to ‘decolonise artificial intelligence’—to reorient the field’s work away from Western hubs like Silicon Valley and engage new voices, cultures, and ideas for guiding the technology’s development.”


AI-Generated Text Is the Scariest Deepfake of All
Renee DiResta | Wired
“In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister.”

Image credit: Adrien Olichon / Unsplash

Kategorie: Transhumanismus

Dark Energy: Map Gives Clue About What It Is—but Deepens Dispute About the Cosmic Expansion Rate

31 Červenec, 2020 - 16:00

Dark energy is one of the greatest mysteries in science today. We know very little about it, other than it is invisible, it fills the whole universe, and it pushes galaxies away from each other. This is making our cosmos expand at an accelerated rate. But what is it? One of the simplest explanations is that it is a “cosmological constant”—a result of the energy of empty space itself—an idea introduced by Albert Einstein.
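
Einstein’s cosmological constant, usually written Λ, enters the standard expansion equations as an extra term. In the textbook Friedmann form (standard cosmology, not specific to this survey):

```latex
% Friedmann equations with a cosmological constant \Lambda.
% a(t): cosmic scale factor, H: Hubble parameter, \rho: energy density,
% p: pressure, k: spatial curvature.
H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2
  = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},
\qquad
\frac{\ddot{a}}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}.
```

A positive Λ contributes a constant positive term to the acceleration equation, so once matter dilutes enough, expansion speeds up; that is the behavior surveys like this one test against more exotic dark-energy models.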

Many physicists aren’t satisfied with this explanation, though. They want a more fundamental description of its nature. Is it some new type of energy field or exotic fluid? Or is it a sign that Einstein’s equations of gravity are somehow incomplete? What’s more, we don’t really understand the universe’s current rate of expansion.

Now our project, the extended Baryon Oscillation Spectroscopic Survey (eBOSS), has come up with some answers. Our work has been released as a series of 23 publications, some of which are still being peer reviewed, describing the largest three-dimensional cosmological map ever created.

Currently, the only way we can feel the presence of dark energy is with observations of the distant universe. The farther away galaxies are, the younger they appear to us. That’s because the light they emit took millions or even billions of years to reach our telescopes. Thanks to this sort of time machine, we can measure distances in space at different cosmic times, helping us work out how quickly the universe is expanding.

Using the Sloan Digital Sky Survey telescope, we measured more than two million galaxies and quasars—extremely bright and distant objects that are powered by black holes—over the last two decades. This new map covers around 11 billion years of cosmic history that was essentially unexplored, teaching us about dark energy like never before.

SDSS telescope. Image credit: Sloan Digital Sky Survey / Wikipedia, CC BY-SA

Our results show that about 69 percent of our universe’s energy is dark energy. They also demonstrate, once again, that Einstein’s simplest form of dark energy—the cosmological constant—agrees the most with our observations.

When combining the information from our map with other cosmological probes, such as the cosmic microwave background—the light left over from the big bang—they all seem to prefer the cosmological constant over more exotic explanations of dark energy.

Cosmic Expansion in Dispute

The results also provide a better insight into some recent controversies about the expansion rate of the universe today and about the geometry of space.

Combining our observations with studies of the universe in its infancy reveals cracks in our description of its evolution. In particular, our measurement of the current rate of expansion of the universe is about 10 percent lower than the value found using direct methods of measuring distances to nearby galaxies. Both these methods claim their result is correct and very precise, so their difference cannot simply be a statistical fluke.
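
The size of this disagreement is easy to reproduce with representative published values (assumed here for illustration, not taken from eBOSS: roughly 67.4 km/s/Mpc from early-universe fits and about 74 km/s/Mpc from nearby distance-ladder measurements):

```python
# Hubble constant in km/s/Mpc; values are representative published
# figures, not results from this survey.
h0_early = 67.4   # inferred from the early universe (CMB-anchored fits)
h0_local = 74.0   # measured directly from nearby galaxies

# Fractional difference: how much lower the early-universe value sits
# relative to the local measurement.
gap = (h0_local - h0_early) / h0_local
print(f"{gap:.0%}")  # about 9 percent, roughly the "10 percent" in the text
```

Each method quotes uncertainties of only a percent or two, which is why a gap this size can’t be waved away as statistical noise.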

The precision of eBOSS enhances this crisis. There is no broadly accepted explanation for the discrepancy. It may be that someone made a subtle mistake in one of these studies. Or it may be a sign that we need new physics. One exciting possibility is that a previously unknown form of energy, present when the universe was young, might have left a trace on its history. Known as “early dark energy,” it could have modified the cosmic expansion rate.

Recent studies of the cosmic microwave background suggested that the geometry of space may be curved rather than flat, as the most widely accepted theory of the big bang predicts. Our study, however, concluded that space is indeed flat.

Even after these important advances, cosmologists around the world will remain puzzled by the apparent simplicity of dark energy, the flatness of space, and the contested value of the expansion rate today. There is only one way forward in the quest for answers—making larger and more detailed maps of the universe. Several projects are aiming to measure at least ten times more galaxies than we did.

If the maps from eBOSS were the first to explore a previously missing gap of 11 billion years of our history, the new generation of telescopes will produce a high-resolution version of the same period of time. It is exciting to think that future surveys may resolve the remaining mysteries about the universe’s expansion in the next decade or so. But it would be equally exciting if they revealed more surprises.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA

Category: Transhumanism

A Year After Gene Therapy, Boys With Muscular Dystrophy Are Healthier and Stronger

30 July 2020 - 16:00

Two and a half years ago, a study published in Science Advances detailed how the gene editing tool CRISPR-Cas9 repaired genetic mutations related to Duchenne muscular dystrophy (DMD). The study was a proof of concept and used induced pluripotent stem cells (iPSCs).

But now a similar treatment has not only been administered to real people, it has worked, making a difference in their quality of life and the progression of their disorder. Nine boys aged 6 to 12 who have been living with DMD since birth received a gene therapy treatment from pharmaceutical giant Pfizer, and a year later, seven of the boys show significant improvement in muscle strength and function.

Though the treatment’s positive results are limited to a small group, they’re an important breakthrough for gene therapy, and encouraging not just for muscular dystrophy but for many other genetic diseases that could soon see similar treatments developed.

About DMD

DMD is a genetic disorder that causes muscles to progressively degenerate and weaken. It’s caused by mutations in the gene that makes dystrophin, a protein that serves to rebuild and strengthen muscle fibers in skeletal and cardiac muscles. As the gene is carried on the X chromosome, the disorder primarily affects boys. Many people with DMD end up in wheelchairs, on respirators, or both, and while advances in cardiac and respiratory care have increased life expectancy into the early 30s, there’s no cure for the condition.

The Treatment

The gene therapy given to the nine boys by Pfizer was actually developed by a research team at the UNC Chapel Hill School of Medicine—and its development took over 30 years.

The team was led by Jude Samulski, a longtime gene therapy researcher and professor of pharmacology at UNC. As a grad student in 1984, Samulski was part of the first team to clone an adeno-associated virus, which ended up becoming a leading method of gene delivery and thus crucial to gene therapy.

Adeno-associated viruses (AAVs) are small viruses whose genome is made up of single-stranded DNA. Like other viruses, AAVs can break through cells’ outer membranes—especially those of eye and muscle cells—get inside, and “infect” them (and their human hosts). But AAVs are non-pathogenic, meaning they don’t cause disease or harm; most people treated with AAVs don’t launch a strong immune response, because their systems detect that the virus is harmless.

Samulski’s gene therapy treatment for DMD used an adeno-associated virus to carry a healthy copy of the dystrophin gene; the virus was injected into boys with DMD, broke into their muscle cells, and delivered the working gene to stand in for the faulty one.

Samulski said of the adeno-associated virus, “It’s a molecular FedEx truck. It carries a genetic payload and it’s delivering it to its target.” The company Samulski founded sold the DMD treatment to Pfizer in 2016 so as to scale it and make it accessible to more boys suffering from the condition.

It’s Working

A year after receiving the gene therapy, seven of nine boys are showing positive results. As reported by NPR, the first boy to be treated, a nine-year-old from Connecticut, saw results that were not only dramatic, but fast. Before treatment he couldn’t walk up more than four stairs without needing to stop, but within three weeks of treatment he was able to run up the full flight of stairs. “I can run faster. I stand better. And I can walk […] more than two miles and I couldn’t do that before,” he said.

The muscle cells already lost to DMD won’t “grow back,” but the treatment appears to have restored normal function of the protein that fixes muscle fibers and helps them grow, meaning no further degeneration should take place.

Gene therapy trials are underway for several different genetic diseases, including sickle cell anemia, at least two different forms of inherited blindness, and Alzheimer’s, among others. It’s even been used as part of cancer treatment.

It’s only been a year, so we don’t yet know whether these treatments may have some sort of detrimental effect in the longer term, and the treatment itself can still be improved. But all of that considered, signs point to the DMD treatment being a big win for gene therapy.

Before it can be hailed as a resounding success, though, scientists feel that a more extensive trial of the therapy is needed, and are working to launch such a trial later this year.

Image Credit: pixelRaw from Pixabay

Category: Transhumanism

Cars Will Soon Be Able to Sense and React to Your Emotions

29 July 2020 - 16:00

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours; you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings

Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data: “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
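As a toy illustration of this labeled-data approach, here is a minimal sketch of a classifier that learns one prototype per emotion and labels new inputs by nearest prototype. The two numeric features (mouth curvature, brow height) and all the numbers are invented for illustration; real systems train deep networks on millions of labeled face videos.

```python
import numpy as np

# Hypothetical labeled examples: each row is (mouth curvature, brow height).
labeled = {
    "happy": np.array([[0.9, 0.6], [0.8, 0.7], [0.95, 0.5]]),
    "angry": np.array([[-0.7, -0.8], [-0.9, -0.6], [-0.6, -0.9]]),
}

# "Training": compute one prototype (centroid) per labeled emotion.
centroids = {label: feats.mean(axis=0) for label, feats in labeled.items()}

def classify(features):
    """Return the emotion whose prototype is nearest to the input features."""
    return min(centroids, key=lambda lbl: np.linalg.norm(features - centroids[lbl]))

print(classify(np.array([0.85, 0.55])))  # "happy"
```

A nearest-centroid rule is about the simplest possible stand-in here; the point is only that “learning” amounts to summarizing labeled examples so new inputs can be matched against them.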

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 9.5 million face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 5 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?

As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it; private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by distraction or drowsiness, then, seems a bit like a bait and switch.

A Highway to Privacy Invasion?

European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay

Category: Transhumanism

Towards ‘Eternal Sunshine’? New Links Found Between Memory and Emotion

28 July 2020 - 16:00

Nearly a decade ago, I almost drowned.

As an amateur scuba diver, I recklessly joined a group of experts for a night dive much deeper than I was qualified for. Already exhausted from swimming my gear out from shore, within minutes of descending I lost my light, a flipper, and all sense of space. I didn’t know which way was up. Oxygen ran low. Then very low.

Of course, it ended well. A partner found me and escorted me up to the surface, then to shore. I starkly remember lying in the sand, watching the ocean waves roll in, trying to come to terms with the fact that I had almost died.

Emotionally charged memories haunt us all. When infused with fear, wonder, or joy, these memories always seem sharp enough to transport us back into those exact life events. Memory may be the closest thing we have to a time machine, yet most of us struggle to remember where we parked our cars in the grocery store lot, or what we had for dinner a week ago.

“It makes sense we don’t remember everything,” said Dr. René Hen, a memory expert at Columbia University. “We have limited brain power. We only need to remember what’s important for our future wellbeing.”

In this way, emotion serves as a way to enhance crucial memories—the foundation that builds your psyche and sense of self. Yet why and how this happens in the brain remains a mystery.

As it happens, the answer might be waves. Like ocean waves, a special group of cells in the brain’s memory center, the hippocampus, synchronize their activity to speak to the “emotion” center every time you recall trauma.

In a recent issue of Nature Communications, Hen and colleagues published results from mice exposed to frightening situations. In real time, they looked at how neurons in the hippocampus activated in response to fear, and found that these cells tend to transmit the information to the amygdala—the emotion center—more than average. The more their activity synchronized in waves with their neighbors, the stronger the memory.

We all have traumatic memories we’d rather forget. For now, an Eternal Sunshine of the Spotless Mind-esque memory wipe isn’t possible. But if synchronized neural waves are the target, then disrupting those waves may be a much easier and more specific path toward emotional relief.

The Memory-Emotion Tag Team

Tucked away deep inside the brain, the seahorse-shaped hippocampus is a crazy effective multi-tasker: it’s both a cognitive powerhouse and an ally for pure emotions.

The hippocampus is most famous for its ability to encode episodic memories—the memory of whats, whens, wheres, and whos. The exact neural code underlying this process remains perplexing, though there have been attempts to hijack the code and artificially amp up memory.

Part of the complexity in enhancing memory is that the hippocampus isn’t just a single uniform structure. As with most brainy things, it gets more complex. The dorsal, or backside, harbors neurons specially equipped to encode “facts.” The ventral, or frontal, side has neurons that are more attuned to emotions. Previous studies have found that by snipping off the ventral connections from the hippocampus to different brain areas, it’s possible to reduce the impact of memories that normally trigger anxiety.

Back in 2018, the same team found that the emotional hippocampal cells, dubbed vCA1 (v for ventral, not for vendetta, though it should be), send out bundles of neural fibers toward the amygdala—also a multi-structural region. Where these neural highways went seemed to make a difference in what they did. Connections to one part, for example, amped up the mice’s anxiety. Fibers to another—the basal amygdala—seemed to enhance the mice’s ability to associate fear with a particular place and memory.

The latter connection piqued the team’s interest. “Place” is an aspect mostly associated with the hippocampus. Is this connection how the brain supercharges emotional memories?

Wave After Wave

In the new study, the team first injected viral “tracers” into the brains of mice. Thanks to the viruses’ ability to jump from neuron to neuron, the tracers spread across neural fibers connecting the hippocampus to the amygdala.

The team had another trick up its sleeve: the tracers only activated—that is, glowed under fluorescent light—if the neurons burst with activity. This technique allowed them to track which neural pathways were active in real time.

Then came the shock treatment. The team put the mice into boxes and gave their paws a slight electrical shock. This immediately etches the memory of the box into the mice’s minds—so when placed back into the box, they’ll freeze in fear. All the while, the team peered into the activated neural pathways through their glow-in-the-dark tracers.

Under the microscope, two specific pathways dominated: vCA1 to basal amygdala, and vCA1 to another amygdala region. The latter was actually more prominent, the team said, but the first was surprising. For one, it lasted days—fairly uncommon in memory encoding. It also seemed to contain more neurons specialized in representing the context around the shocks.

Further digging into the pathway found an even stranger signal. vCA1 “emotional” hippocampal neurons harmonized their activity into a symphony when the mice first encoded the fearful shock memory. The same neurons also synchronized their activity when the mice re-experienced the frightful box. Surprisingly, though, they didn’t just synchronize among themselves: their activity patterns also matched up with those of their neighbors, even cells that didn’t originally encode the “shock” memory.

The neighbor cells seem to be the crux, in addition to the encoding neurons themselves, the team explained. These cells are likely “highly interconnected nodes that form a distinct network community” that tunes the strength of emotional memories. If the vCA1 cells are the tiny ripples after throwing a rock into water, then this surrounding circuit is the larger set of waves rippling outward.

Further experiments found that silencing the vCA1 cells while the mice received a shock broke down the entire network: the waves collapsed and the mice forgot their fear.

“We saw that it’s the synchrony that is critical to establish the fear memory, and the greater the synchrony, the stronger the memory,” said study author Jessica Jimenez. “These are the types of mechanisms that explain why you remember salient events.”
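To get a feel for what “synchrony” means quantitatively, here is a simplified sketch (not the study’s actual analysis) that scores a group of activity traces by their mean pairwise correlation: traces that rise and fall together score high, while unrelated traces score near zero. The signals below are synthetic stand-ins for recorded neural activity.

```python
import numpy as np

def synchrony(traces):
    """Mean pairwise correlation between rows of a
    (n_neurons, n_timepoints) activity array."""
    corr = np.corrcoef(traces)                  # pairwise correlation matrix
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]     # drop each cell's self-correlation
    return off_diag.mean()

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)

# Five cells riding the same wave, plus a little noise...
in_sync = np.array([np.sin(t) + 0.1 * rng.normal(size=t.size) for _ in range(5)])
# ...versus five cells firing independently.
out_of_sync = rng.normal(size=(5, t.size))

print(synchrony(in_sync) > synchrony(out_of_sync))  # True
```

Mean pairwise correlation is only one of many possible synchrony measures, but it captures the intuition in the quote: the more the cells’ activity lines up, the higher the score.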

Playing With Memories

The takeaway is that there’s a neural pathway that connects vCA1 emotional hippocampal neurons with the basal amygdala, which in turn etches emotional impact onto memories and makes them stronger.

The pathway itself is a bit of an odd duck, because not all neurons that form it respond to the initial fear. Rather, like waves, the experience of fear flows to further recruit neighboring cells and pathways to amp up the strength of the memory—that is, to generate a fuller picture of when and where. It could be how I so clearly remember the night I almost drowned.

“Patterns of synchronous activity have recently been proposed to underlie the persistence of memories over long periods of time,” wrote the team. Well, if they’re right, then we now have a target to either erase those traumatic memories, or—potentially—enhance the memories of happy times so they last longer.

Image Credit: Free-Photos from Pixabay

Category: Transhumanism

A New Brain-Inspired Learning Method for AI Saves Memory and Energy

27 July 2020 - 16:00

Despite the frequent analogies, today’s AI operates on very different principles from the human brain. Now researchers have proposed a new learning method more closely tied to biology, which they think could help us approach the brain’s unrivaled efficiency.

Modern deep learning is at the very least biologically-inspired, encoding information in the strength of connections between large networks of individual computing units known as neurons. Probably the biggest difference, though, is the way these neurons communicate with each other.

Artificial neural networks are organized into layers, with each neuron typically connected to every neuron in the next layer. Information passes between layers in a highly synchronized fashion as continuous numbers, and each connection between a pair of neurons carries a numerical weight that determines its strength.
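That synchronized, layer-by-layer flow can be sketched in a few lines of NumPy; this is a minimal illustration, not any particular framework’s API, and the layer sizes and ReLU nonlinearity are arbitrary choices.

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate an input through fully connected layers: at each layer,
    every neuron sums its weighted inputs and applies a nonlinearity,
    with all neurons in a layer updating together in lockstep."""
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, W @ x + b)  # ReLU activation
    return x

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]  # 3 -> 4 -> 2 units
biases = [np.zeros(4), np.zeros(2)]

out = forward(np.array([1.0, 0.5, -0.2]), weights, biases)
print(out.shape)  # (2,)
```

Every entry of each weight matrix is one connection strength; contrast this dense, clocked computation with the sparse, asynchronous spikes of biological neurons described next.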

Biological neurons, on the other hand, communicate by firing off electrical impulses known as spikes, and each neuron does so on its own schedule. Connections are not neatly divided into layers and feature many feedback loops, which means the output of a neuron often ends up impacting its input somewhere down the line.

This spike-based approach is vastly more energy efficient, which is why training the most powerful AI requires kilowatts of electricity while the brain uses just 20 watts. That’s led to growing interest in the development of artificial spiking neural networks as well as so-called neuromorphic hardware—computer chips that mimic the physical organization and principles of the brain—that could run them more efficiently.

But our understanding of these spike-based approaches is still underdeveloped, and they struggle to reach the performance of more conventional artificial neural nets. Now, though, researchers from the Graz University of Technology in Austria think they may have found a way to approach the power of deep learning using a biologically plausible learning approach that works with spiking neural networks.

In deep learning, the network is trained by getting it to make predictions on the data and then assessing how far off they are. This error is then fed backwards through the network to guide adjustments in the strength of connections between neurons. This process is called backpropagation, and over many iterations it will tune the network until it makes accurate predictions.
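The loop described above (predict, measure the error, feed it back to adjust connection strengths) can be sketched in miniature with a single weight; this is a toy illustration of gradient-based training, not the full multi-layer backpropagation algorithm.

```python
import numpy as np

# A single linear "neuron" learns y = 2x from examples: predict,
# measure the error, and nudge the weight against the error gradient.
rng = np.random.default_rng(1)
x = rng.normal(size=100)   # inputs
y = 2.0 * x                # targets the neuron should reproduce

w, lr = 0.0, 0.1           # initial weight and learning rate
for _ in range(200):
    pred = w * x
    error = pred - y               # how far off the predictions are
    grad = (error * x).mean()      # error signal for the weight
    w -= lr * grad                 # adjust the connection strength

print(round(w, 3))  # converges to 2.0
```

In a deep network the same idea applies to millions of weights at once, with the chain rule routing each weight’s share of the error backwards through the layers.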

A similar approach can be applied to spiking neural networks, but it requires huge amounts of memory. It’s also clearly not how the brain solves the learning problem, because it would require error signals to be sent backwards in both time and space across the synapses between neurons, something biology can’t do.

That prompted the researchers, who are part of the Human Brain Project, to look at two features that have become clear in experimental neuroscience data: each neuron retains a memory of previous activity in the form of molecular markers that slowly fade with time; and the brain provides top-down learning signals using things like the neurotransmitter dopamine, which modulates the behavior of groups of neurons.

In a paper in Nature Communications, the Austrian team describes how they built artificial analogues of these two features to create a new learning paradigm they call e-prop. While the approach learns more slowly than backpropagation-based methods, it achieves comparable performance.

More importantly, it allows online learning. That means that rather than processing big batches of data at once, which requires constant shuttling to and from memory and contributes significantly to machine learning’s energy bills, the approach simply learns from data as it becomes available. That dramatically cuts the memory and energy required, making it far more practical for on-chip learning in smaller mobile devices.
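A heavily simplified sketch of the general idea follows; the task, network, and parameters are invented, and this is not the paper’s actual e-prop equations. The two ingredients are the ones described above: each synapse keeps a slowly fading eligibility trace of its recent activity (the molecular-marker analogue), and a top-down learning signal (the dopamine analogue) converts those traces into weight updates one sample at a time, with no stored batches and no backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 5, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))   # synaptic weights
trace = np.zeros_like(W)                        # per-synapse eligibility traces
decay, lr = 0.5, 0.02

for _ in range(2000):
    x = rng.normal(size=n_in)                   # one sample arrives online
    target = np.array([x[:3].sum(), x[3:].sum()])
    out = W @ x
    # Local rule: each synapse's trace is a fading memory of its input.
    trace = decay * trace + x[None, :]
    # Top-down learning signal broadcast per output neuron.
    learning_signal = target - out
    # Update = learning signal times eligibility trace: purely online,
    # no batch of stored activations, no backpropagation through time.
    W += lr * learning_signal[:, None] * trace

probe = rng.normal(size=n_in)
probe_target = np.array([probe[:3].sum(), probe[3:].sum()])
print(np.abs(W @ probe - probe_target).max())  # small once trained
```

The memory saving is visible in the loop itself: the only persistent state besides the weights is one trace per synapse, regardless of how long the data stream runs.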

The team is now working with researchers from Intel to integrate the approach with the next version of the company’s neuromorphic chip Loihi, which is optimized for spiking networks. They’re also teaming up with fellow Human Brain Project researchers at the University of Manchester to apply e-prop to the neuromorphic supercomputer SpiNNaker.

There’s still a long way to go before the technique can match the power of today’s leading AI. But if it helps us start to approach the efficiencies we see in biological brains, it might not be long before AI is everywhere.

Image Credit: Gerd Altmann from Pixabay

Category: Transhumanism