Singularity HUB

News and Insights on Technology, Science, and the Future from Singularity University

Watch London’s Cool, Quirky Augmented Reality Art Exhibit at Home

22 January, 2021 - 16:00

It hasn’t been a great few months for museums, what with the pandemic shutting many of them down and forcing the rest to greatly limit visitors. But a new, well-timed art exhibit went on display last month in London, and no reservations or masks were required.

Unreal City was an augmented reality art exhibit presented by Acute Art and Dazed Media. It took place along the South Bank of the River Thames, featuring 36 different “sculptures” that visitors could only see through a smartphone app.

Here’s how it worked: red buoys placed along the river walk indicated the locations of the digital artworks. Visitors had to install an app on their phones called Acute Art. Pointing their phones at the area around the buoys, they’d see the digital sculptures appear.

The artwork didn’t follow any particular theme, but rather consisted of everything from a giant, furry spider to a wriggling octopus to a levitating spiritual leader. Artists included Norwegian Bjarne Melgaard, Chinese Cao Fei, Argentine Tomas Saraceno, German Alicja Kwade, American KAWS, and several others.

“I want to use augmented reality to shape emotional connections with humans,” Fei told AnOther. “Augmented reality can re-enact what has happened in the past and provide an alternative to reality that is open-ended.”

AR is a medium full of potential for artistic expression. One of its key features is that it can be “on display” almost anywhere, to anyone with a smartphone. This seems especially relevant during a pandemic, when we’re trying not to be around a lot of people in enclosed spaces—but take away the pandemic, and AR is still an exciting way to democratize access to art.

As exhibit curator Daniel Birnbaum put it, “This is a glimpse of a totally new way of communicating art, making art available to very large audiences.”

The potential audience for Unreal City is even larger now, as the exhibit was extended and made available to viewers from home; through February 9, you can download the app and place the digital sculptures in your own living room, or on your front lawn, or really wherever you’d like.

“We’re trying to … realize ideas that artists have and that they couldn’t realize in more traditional mediums,” Birnbaum said. “In a century there are often one or two new mediums being introduced. AR and VR represent the first new artistic mediums of this century, so of course artists are interested.”

Image Credit: Acute Art/KAWS

Category: Transhumanism

Earth Has Stayed Habitable for Billions of Years. Exactly How Lucky Did We Get?

21 January, 2021 - 16:00

It took evolution three or four billion years to produce Homo sapiens. If the climate had completely failed just once in that time, then evolution would have come to a crashing halt and we would not be here now. So to understand how we came to exist on planet Earth, we’ll need to know how Earth managed to stay fit for life for billions of years.

This is not a trivial problem. Current global warming shows us that the climate can change considerably over the course of even a few centuries. Over geological timescales, it is even easier to change climate. Calculations show that there is the potential for Earth’s climate to deteriorate to temperatures below freezing or above boiling in just a few million years.

We also know that the sun has become 30 percent more luminous since life first evolved. In theory, this should have caused the oceans to boil away by now, given that they were not generally frozen on the early Earth. This is known as the “faint young sun paradox.” Yet, somehow, this habitability puzzle was solved.

Scientists have come up with two main theories. The first is that the Earth could possess something like a thermostat—a feedback mechanism (or mechanisms) that prevents the climate ever wandering to fatal temperatures.

The second is that, out of a large number of planets, perhaps some just make it through by luck, and Earth is one of those. This second scenario is made more plausible by the discoveries in recent decades of many planets outside our solar system—so-called exoplanets. Astronomical observations of distant stars tell us that many have planets orbiting them, and that some are of a size and density and orbital distance such that temperatures suitable for life are theoretically possible. It has been estimated that there are at least two billion such candidate planets in our galaxy alone.

Scientists would love to travel to these exoplanets to investigate whether any of them have matched Earth’s billion years of climate stability. But even the nearest exoplanets, those orbiting the star Proxima Centauri, are more than four light-years away. Observational or experimental evidence is hard to come by.

Instead, I explored the same question through modeling. Using a computer program designed to simulate climate evolution on planets in general (not just Earth), I first generated 100,000 planets, each with a randomly different set of climate feedbacks. Climate feedbacks are processes that can amplify or diminish climate change—think for instance of sea-ice melting in the Arctic, which replaces sunlight-reflecting ice with sunlight-absorbing open sea, which in turn causes more warming and more melting.

In order to investigate how likely each of these diverse planets was to stay habitable over enormous (geological) timescales, I simulated each 100 times. Each time the planet started from a different initial temperature and was exposed to a randomly different set of climate events. These events represent climate-altering factors such as supervolcano eruptions (like Mount Pinatubo but much larger) and asteroid impacts (like the one that killed the dinosaurs). On each of the 100 runs, the planet’s temperature was tracked until it became too hot or too cold or else had survived for three billion years, at which point it was deemed to have been a possible crucible for intelligent life.
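The procedure described above can be sketched as a toy Monte Carlo simulation. Everything quantitative here (the habitable temperature band, the linear form of the feedbacks, the size of the random climate shocks) is invented for illustration; it follows the structure of the experiment, not the actual model used in the study.

```python
import random

HABITABLE = (-10.0, 60.0)  # survivable temperature band, degrees C (illustrative)
STEPS = 3000               # 3 billion years in million-year steps

def make_planet(n_feedbacks=3):
    """A planet is a random set of linear feedbacks, each pulling the
    temperature toward a set point with some strength (a negative
    strength is a destabilizing feedback)."""
    return [(random.uniform(-20.0, 80.0),   # feedback set point
             random.uniform(-0.05, 0.10))   # feedback strength
            for _ in range(n_feedbacks)]

def run_once(planet):
    """One 3-billion-year history: random starting temperature, random
    climate shocks (supervolcanoes, impacts) every step. The planet
    counts as habitable only if the temperature never leaves the band."""
    temp = random.uniform(*HABITABLE)
    for _ in range(STEPS):
        drift = sum(k * (set_point - temp) for set_point, k in planet)
        shock = random.gauss(0.0, 2.0)
        temp += drift + shock
        if not HABITABLE[0] < temp < HABITABLE[1]:
            return False   # froze or boiled: evolution halted
    return True

def survival_count(planet, reruns=100):
    """How many of `reruns` re-rolled histories stay habitable."""
    return sum(run_once(planet) for _ in range(reruns))
```

Generating many such planets and counting those for which `survival_count(p) == 100` mirrors the shape of the experiment: planets that ace all 100 runs are vanishingly rare, while a planet with no feedbacks at all (an empty list) essentially never survives.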

The simulation results give a definite answer to this habitability problem, at least in terms of the importance of feedbacks and luck. It was very rare (in fact, just one time out of 100,000) for a planet to have such strong stabilizing feedbacks that it stayed habitable all 100 times, irrespective of the random climate events. In fact, most planets that stayed habitable at least once did so fewer than 10 times out of 100. On nearly every occasion in the simulation when a planet remained habitable for three billion years, it was partly down to luck. At the same time, luck by itself was shown to be insufficient. Planets that were specially designed to have no feedbacks at all never stayed habitable; their temperatures, following random walks buffeted by climate events, never lasted the course.

Figure: Repeat runs in the simulation were not identical. 1,000 different planets were generated randomly, and each was run twice: (a) shows results from the first run, (b) from the second. Green circles show success (stayed habitable for 3 billion years), black circles failure. (Toby Tyrrell, author provided)

This overall result, that outcomes depend partly on feedbacks and partly on luck, is robust. All sorts of changes to the modeling did not affect it. By implication, Earth must therefore possess some climate-stabilizing feedbacks, but at the same time, good fortune must also have been involved in it staying habitable. If, for instance, an asteroid or solar flare had been slightly larger than it was, or had occurred at a slightly different (more critical) time, we would probably not be here on Earth today. It gives a different perspective on why we are able to look back on Earth’s remarkable, enormously extended, history of life evolving and diversifying and becoming ever more complex to the point that it gave rise to us.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: PIRO4D from Pixabay

Category: Transhumanism

This Artificial Heart Will Soon Be on the Market in Europe

20 January, 2021 - 16:00

Heart disease is one of the leading causes of death in the world, particularly in the US and Western Europe. Medical science has come up with some ingenious solutions to common heart problems, like pacemakers (which correct abnormal heart rhythms), stents (to hold clogged arteries open so blood can flow through), and bypass surgery (which implants a healthy blood vessel from another part of the body to redirect blood around a blocked artery in the heart). These procedures have saved and extended the lives of millions of people.

Now there’s another solution for cardiac patients, and this one goes beyond fixing just an arrhythmia or single artery: a total artificial heart.

If you’re brimming with questions, like—How does it work? Wouldn’t the body reject such a large foreign object, and in such a crucial place? What keeps it running?—you’re not the only one.

The artificial heart is made by a French company called Carmat, and is designed for people with end-stage biventricular heart failure. That’s when both of the heart’s ventricles—chambers near the bottom of the heart that pull in and push out blood between the lungs and the rest of the body—are too weak to carry out their function.

Like a real heart, the artificial heart has two ventricles. One is for hydraulic fluid and the other for blood, and a membrane separates the two. The blood-facing side of the membrane is made of tissue from a cow’s heart. A motorized pump moves hydraulic fluid in and out of the ventricles, and that fluid moves the membrane to let blood flow through. There are four “biological” valves, thus called because they’re also made from cow heart tissue.

Embedded electronics, microprocessors, and sensors automatically regulate responses to the patient’s activity; if, for example, they’re exercising, blood flow will increase, just as it would with a real heart. This is what differentiates Carmat’s product from the artificial heart made by American company SynCardia; theirs is a fixed rate device, meaning once a beat rate is set for the heart, the beats per minute will stay the same regardless of patient activity.
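The difference between the two designs can be shown with a toy rate controller. The numbers and the mapping below are invented for illustration; this is not Carmat's actual control logic, just the idea of sensor-driven regulation versus a fixed rate.

```python
def target_beat_rate(activity_level, resting_bpm=70, max_bpm=130):
    """Toy adaptive controller: map a normalized activity signal
    (0 = at rest, 1 = peak exertion) to a pump rate. A fixed-rate
    device would return the same bpm regardless of activity_level."""
    activity_level = min(max(activity_level, 0.0), 1.0)  # clamp sensor noise
    return resting_bpm + activity_level * (max_bpm - resting_bpm)
```

In the real device, the activity signal would come from the embedded sensors; here it is just a number handed to the function.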

Carmat’s device weighs 900 grams, or just under 2 pounds. This is about three times the weight of the average human heart. Externally, patients carry a small bag of actuator fluid, a controller, and a lithium-ion battery.

“The idea behind this heart, which was born nearly 30 years ago, was to create a device which would replace heart transplants, a device that works physiologically like a human heart, one that’s pulsating, self-regulated and compatible with blood,” Carmat CEO Stéphane Piat reportedly told Reuters.

At present, however, Carmat’s product isn’t a permanent solution; it’s been approved as a temporary replacement while patients wait for donor hearts, and is estimated to last about five years. In November 2020, Carmat reported that one patient had been living with the implanted heart for a record two years.

Scientists have long been working on re-creating working versions of human organs using synthetic materials. One of the organs most sorely needed is the kidney, and it’s also one of the hardest to re-create. In comparison, perhaps surprisingly, the heart is one of the less complicated organs; it doesn’t have the hundreds of thousands of intricately-structured nephrons of kidneys, nor the complex insulin-monitoring function of the pancreas; it’s really just a little pump to push blood through our bodies.

After receiving the CE marking (a sort of stamp for products sold in Europe to indicate conformity with health and safety standards) in late 2020, Carmat’s artificial heart will launch commercially in Germany and France in the second quarter of this year. The company also got approval from the US Food & Drug Administration to start an Early Feasibility Study in the US this year.

Image Credit: Carmat

Category: Transhumanism

A Language AI Is Accurately Predicting Covid-19 ‘Escape’ Mutations

19 January, 2021 - 16:00

For all their simplicity, viruses are sneaky little life forms.

Take SARS-CoV-2, the virus behind Covid-19. Challenged with the human immune system, the virus has gradually reshuffled parts of its genetic material, making it easier to spread among a human population. The new strain has already terrorized South Africa and shut down the UK, and recently popped up in the United States.

The silver lining is that our existing vaccines and antibody therapies are still likely to be effective against the new strain. But that’s not always the case. “Viral escape” is a nightmare scenario, in which the virus mutates just enough so that existing antibodies no longer recognize it. The consequences are dire: it means that even if you’ve already had the infection, or produced antibodies from a vaccine, those protections are now kneecapped or useless.

From an evolutionary perspective, viral mutations and our immune system are constantly engaged in a cat-and-mouse game. Last week, thanks to an utterly unexpected resource, we may now have a leg up. In a mind-bending paper published in Science, one team developed a tool to predict viral escape—and it came from natural language processing (NLP), the AI field of mimicking human speech.

Weird, right?

The team’s critical insight was to construct a “viral language” of sorts, based purely on its genetic sequences. This language, if given sufficient examples, can then be analyzed using NLP techniques to predict how changes to its genome alter its interaction with our immune system. That is, using artificial language techniques, it may be possible to hunt down key areas in a viral genome that, when mutated, allow it to escape roaming antibodies.

It’s a seriously kooky idea. Yet when tested on some of our greatest viral foes, like influenza (the seasonal flu), HIV, and SARS-CoV-2, the algorithm was able to discern critical mutations that “transform” each virus just enough to escape the grasp of our immune surveillance system.

“The language of viral evolution and escape … provides a powerful framework for predicting mutations that lead to viral escape,” said Drs. Yoo-Ah Kim and Teresa Przytycka at the National Institutes of Health, who were not involved in the study but provided perspectives on it.

“This is a phenomenal way of narrowing down the entire universe of potential mutant viruses,” added Dr. Benhur Lee at Mount Sinai. And if further validated, the algorithm could bolster attempts at an effective HIV vaccine, or a universal flu vaccine—rather than the piecemeal prediction approach we have now. It could also provide insight into how the new coronavirus could further mutate and put our immune system in “check,” and in turn, give us time to battle its escape plans and end the pandemic once and for all.

A Useful Analogy

The idea of using NLP to examine viruses started with an analogy. Last winter, study author Brian Hie was cruising around the snowy grounds of MIT when an idea popped into his head: what if it’s possible to explain the interaction between virus and the immune system in the same way we analyze language?

It’s an uber-nerdy realization that takes a few leaps of faith. But the more Hie thought about it, the more it made sense. Language contains both grammar and semantics. The first is rather immutable, because it sets up the structure of a sentence. The second, semantics, is the meaning of the sentence. Changing a single word can immediately alter the meaning to the point that a listener no longer comprehends it, all the while keeping the grammar intact. In other words, it’s totally possible to say grammatically correct gibberish—Mad Libs comes to mind—while “escaping” the understanding of a listener.

Here’s the analogy leap. Viruses also run on two main traits to survive. Both involve their interaction with our immune system. The first is their ability to enter a cell to replicate more of themselves. This trait, dubbed “virulence,” needs to stay semi-consistent so that the virus can maintain itself inside a host.

Take SARS-CoV-2. Like most viruses, it’s a bubble-like being with spikes dotted on its surface. Encapsulated within is its genomic sequence. The spike proteins are necessary for the virus to “talk” to our cells, allowing the virus to enter. But it’s the viral genes that dictate the shape of the spike proteins. In other words, if changes to the viral genes also alter spike proteins, these mutations would change the virus’s interaction with our cells and immune system.

In order to survive, any given virus needs to follow its own “grammar.” These fundamental sequences, captured in its genome, allow its survival. Break the grammar with too many mutations, or mutations in critical spots, and the virus will no longer be able to enter a cell and replicate, and will reach an evolutionary dead end. Bottom line: a virus needs to keep its “grammar” intact.

Yet grammar is just half of comprehension. The other is semantics, the meaning of words. This, thought Hie, is where viruses have more leeway. Imagine the virus as a speaker, and our immune system as a listener. Mutations to a viral genome that swap out “words”—but leave the grammar intact—could fool the immune “listener” just enough so that it no longer understands the virus’s language, and halts an attack. Yet because the virus’s grammar remains, it’s free to replicate and cause havoc, hidden away from the immune system’s defenses. In other words, if a mutation allows a virus to keep its grammar but changes its semantics, it also allows viral escape.

The question is, how do we predict those nightmare mutations?

Enter Algorithms

Hie’s second leap in thought was to tap into a completely different field: AI language.

In recent years, AI has gotten extremely good at modeling both grammar and semantics in human language, without any prior knowledge or understanding of the content. Take GPT-3 by OpenAI, which produces startlingly human-like prose that’s both grammatically correct and stays mostly on topic. Rather than studying linguistics, these NLP algorithms learn from a vast corpus of text, arranged in words, short phrases, sentences, and paragraphs. Even without explicit rules to follow, an NLP algorithm is capable of grasping patterns in human language. Forget rules—it’s pattern recognition all the way through.

Now imagine example text being the virus’s “normal” genome and mutations being alternative novel phrases; it’s then possible to analyze the language of the virus using NLP techniques. Take “grammar,” for example, or sequences in a viral genome that enable its entry into a cell. If considered a language, the NLP could begin grasping sequences related to a virus’s infectiousness, without needing any previous knowledge of microbiology.

A similar idea works for viral semantics. It’s possible to systematically change viral genetic letters one at a time. Using NLP, we can then analyze how far the mutant strays in its “meaning”—for example, its behavior. Using the language example, swapping “cat” for “feline” is a tiny change. Swapping “cat” for “bulldozer,” however, yields a much larger difference. The degree of these alterations is captured by a number, rather than intuition, and allows the algorithm to judge how far a virus has strayed from its original form.

Using influenza, HIV, and SARS-CoV-2, the team set out to find genetic mutations that allow viral escape: ones that preserve the virus’s “grammar,” but alter its “semantics.” Scoring each region with their algorithm, the team uncovered several targeted protein spots—and their genetic blueprint—that massively raised the chance of viral escape. Remember: the algorithm had never previously encountered any data remotely related to the biology of a virus. But based solely on the “language” of the virus, it replicated previous lab results of sequences that led to influenza escape.
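In outline, a search of this kind looks something like the following sketch. It is a simplified reconstruction, not the paper's method: a smoothed bigram model stands in for the trained neural language model, the embedding function is left to the caller, and all sequences and scores are purely illustrative.

```python
import math
from collections import defaultdict

def train_bigram(sequences):
    """Stand-in 'language model': bigram counts over residue pairs."""
    counts, totals = defaultdict(int), defaultdict(int)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return counts, totals

def grammaticality(seq, model, alpha=1.0, vocab=20):
    """Log-probability under the (add-alpha smoothed) bigram model.
    High = the mutant still 'reads' like a viable viral sequence,
    i.e. its grammar is intact."""
    counts, totals = model
    return sum(math.log((counts[(a, b)] + alpha) / (totals[a] + alpha * vocab))
               for a, b in zip(seq, seq[1:]))

def semantic_change(wild_type, mutant, embed):
    """Distance between sequence embeddings. In the study this is a
    distance between neural hidden states; here embed() is any
    sequence-to-vector function supplied by the caller."""
    w, m = embed(wild_type), embed(mutant)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(w, m)))

def rank_escape_candidates(wild_type, mutants, model, embed):
    """Escape candidates keep grammaticality high AND semantic change
    high: rank each mutant on both axes and sort by the combined rank."""
    gram = {m: grammaticality(m, model) for m in mutants}
    sem = {m: semantic_change(wild_type, m, embed) for m in mutants}
    g_rank = {m: r for r, m in enumerate(sorted(mutants, key=lambda m: gram[m]))}
    s_rank = {m: r for r, m in enumerate(sorted(mutants, key=lambda m: sem[m]))}
    return sorted(mutants, key=lambda m: -(g_rank[m] + s_rank[m]))
```

Candidates that score high on both axes (still grammatical, but semantically far from the wild type) are the predicted escape mutations.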

It’s not often that unrelated branches of science give each other a push. And Hie’s not about to stop. Further tapping into the language analogy, it’s possible that some people comprehend the same sentence differently based on their history, culture, and experience. Similarly, our immune systems aren’t all the same—each has its own plethora of molecules, antibodies, and immune cells, and overall “strength.”

“It will be interesting to see whether the proposed approach can be adapted to provide a ‘personalized’ view of the language of virus evolution,” said Kim and Przytycka.

Image Credit: Vektor Kunst from Pixabay

Category: Transhumanism

How Mirroring the Architecture of the Human Brain Is Speeding Up AI Learning

18 January, 2021 - 16:00

While AI can carry out some impressive feats when trained on millions of data points, the human brain can often learn from a tiny number of examples. New research shows that borrowing architectural principles from the brain can help AI get closer to our visual prowess.

The prevailing wisdom in deep learning research is that the more data you throw at an algorithm, the better it will learn. And in the era of Big Data, that’s easier than ever, particularly for the large data-centric tech companies carrying out a lot of the cutting-edge AI research.

Today’s largest deep learning models, like OpenAI’s GPT-3 and Google’s BERT, are trained on billions of data points, and even more modest models require large amounts of data. Collecting these datasets and investing the computational resources to crunch through them is a major bottleneck, particularly for less well-resourced academic labs.

It also means today’s AI is far less flexible than natural intelligence. While a human only needs to see a handful of examples of an animal, a tool, or some other category of object to be able to pick it out again, most AI systems need to be trained on many examples of an object in order to recognize it.

There is an active sub-discipline of AI research aimed at what is known as “one-shot” or “few-shot” learning, where algorithms are designed to be able to learn from very few examples. But these approaches are still largely experimental, and they can’t come close to matching the fastest learner we know—the human brain.

This prompted a pair of neuroscientists to see if they could design an AI that could learn from few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explained that the approach significantly boosts AI’s ability to learn new visual concepts from few examples.

“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” Maximilian Riesenhuber, from Georgetown University Medical Center, said in a press release. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”

Several decades of neuroscience research suggest that the brain’s ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on little data. When it comes to visual understanding, this can rely on similarities of shape, structure, or color, but the brain can also leverage abstract visual concepts thought to be encoded in a brain region called the anterior temporal lobe (ATL).

“It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter,” said paper co-author Joshua Rule, from the University of California Berkeley.

The researchers decided to try and recreate this capability by using similar high-level concepts learned by an AI to help it quickly learn previously unseen categories of images.

Deep learning algorithms work by getting layers of artificial neurons to learn increasingly complex features of an image or other data type, which are then used to categorize new data. For instance, early layers will look for simple features like edges, while later ones might look for more complex ones like noses, faces, or even more high-level characteristics.

First they trained the AI on 2.5 million images across 2,000 different categories from the popular ImageNet dataset. They then extracted features from various layers of the network, including the very last layer before the output layer. They refer to these as “conceptual features” because they are the highest-level features learned, and most similar to the abstract concepts that might be encoded in the ATL.

They then used these different sets of features to train the AI to learn new concepts based on 2, 4, 8, 16, 32, 64, and 128 examples. They found that the AI using the conceptual features yielded much better performance than ones trained using lower-level features when given small numbers of examples, but the gap shrank as they were fed more training examples.
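The recipe reduces to a very small sketch: freeze a pretrained network, take its high-level ("conceptual") features for the few labeled examples of each new class, and classify queries by proximity. The nearest-centroid rule below is a stand-in, not the paper's exact classifier, and the feature vectors are assumed to have been precomputed by the frozen network.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

class FewShotClassifier:
    """Learn new categories from a handful of examples: average the
    high-level features of the k support examples per class, then
    label queries by the nearest class centroid."""
    def __init__(self):
        self.centroids = {}

    def fit(self, support):
        # support maps label -> list of feature vectors, e.g. the
        # penultimate-layer activations of a frozen pretrained network
        self.centroids = {lbl: centroid(vs) for lbl, vs in support.items()}

    def predict(self, query):
        return min(self.centroids,
                   key=lambda lbl: math.dist(query, self.centroids[lbl]))
```

With only two support vectors per class, `fit` already yields a usable classifier for new query vectors, which is the point of the few-shot setup.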

While the researchers admit the challenge they set their AI was relatively simple and only covers one aspect of the complex process of visual reasoning, they said that using a biologically plausible approach to solving the few-shot problem opens up promising new avenues in both neuroscience and AI.

“Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber said.

As the researchers note, the human visual system is still the gold standard when it comes to understanding the world around us. Borrowing from its design principles might turn out to be a profitable direction for future research.

Image Credit: Gerd Altmann from Pixabay

Category: Transhumanism

China Wants to Be the World’s AI Superpower. Does It Have What It Takes?

17 January, 2021 - 16:00

China’s star has been steadily rising for decades. Besides slashing extreme poverty rates from 88 percent to under 2 percent in just 30 years, the country has become a global powerhouse in manufacturing and technology. Its pace of growth may slow due to an aging population, but China is nonetheless one of the world’s biggest players in multiple cutting-edge tech fields.

One of these fields, and perhaps the most significant, is artificial intelligence. The Chinese government announced a plan in 2017 to become the world leader in AI by 2030, and has since poured billions of dollars into AI projects and research across academia, government, and private industry. The government’s venture capital fund is investing over $30 billion in AI; the northeastern city of Tianjin budgeted $16 billion for advancing AI; and a $2 billion AI research park is being built in Beijing.

On top of these huge investments, the government and private companies in China have access to an unprecedented quantity of data, on everything from citizens’ health to their smartphone use. WeChat, a multi-functional app where people can chat, date, send payments, hail rides, read news, and more, gives the CCP full access to user data upon request; as one BBC journalist put it, WeChat “was ahead of the game on the global stage and it has found its way into all corners of people’s existence. It could deliver to the Communist Party a life map of pretty much everybody in this country, citizens and foreigners alike.” And that’s just one (albeit big) source of data.

Many believe these factors are giving China a serious leg up in AI development, even providing enough of a boost that its progress will surpass that of the US.

But there’s more to AI than data, and there’s more to progress than investing billions of dollars. Analyzing China’s potential to become a world leader in AI—or in any technology that requires consistent innovation—from multiple angles provides a more nuanced picture of its strengths and limitations. In a June 2020 article in Foreign Affairs, Oxford fellows Carl Benedikt Frey and Michael Osborne argued that China’s big advantages may not actually be that advantageous in the long run—and its limitations may be very limiting.

Moving the AI Needle

To get an idea of who’s likely to take the lead in AI, it could help to first consider how the technology will advance beyond its current state.

To put it plainly, AI is somewhat stuck at the moment. Algorithms and neural networks continue to achieve new and impressive feats—like DeepMind’s AlphaFold accurately predicting protein structures or OpenAI’s GPT-3 writing convincing articles based on short prompts—but for the most part these systems’ capabilities are still defined as narrow intelligence: completing a specific task for which the system was painstakingly trained on loads of data.

(It’s worth noting here that some have speculated OpenAI’s GPT-3 may be an exception, the first example of machine intelligence that, while not “general,” has surpassed the definition of “narrow”; the algorithm was trained to write text, but ended up being able to translate between languages, write code, autocomplete images, do math, and perform other language-related tasks it wasn’t specifically trained for. However, all of GPT-3’s capabilities are limited to skills it learned in the language domain, whether spoken, written, or programming language).

Both AlphaFold’s and GPT-3’s success was due largely to the massive datasets they were trained on; no revolutionary new training methods or architectures were involved. If all it was going to take to advance AI was a continuation or scaling-up of this paradigm—more input data yields increased capability—China could well have an advantage.

But one of the biggest hurdles AI needs to clear to advance in leaps and bounds rather than baby steps is precisely this reliance on extensive, task-specific data. Other significant challenges include the technology’s fast approach to the limits of current computing power and its immense energy consumption.

Thus, while China’s trove of data may give it an advantage now, it may not be much of a long-term foothold on the climb to AI dominance. It’s useful for building products that incorporate or rely on today’s AI, but not for pushing the needle on how artificially intelligent systems learn. WeChat data on users’ spending habits, for example, would be valuable in building an AI that helps people save money or suggests items they might want to purchase. It will enable (and already has enabled) highly tailored products that will earn their creators and the companies that use them a lot of money.

But data quantity isn’t what’s going to advance AI. As Frey and Osborne put it, “Data efficiency is the holy grail of further progress in artificial intelligence.”

To that end, research teams in academia and private industry are working on ways to make AI less data-hungry. New training methods like one-shot learning and less-than-one-shot learning have begun to emerge, along with myriad efforts to make AI that learns more like the human brain.

While not insignificant, these advancements still fall into the “baby steps” category. No one knows how AI is going to progress beyond these small steps—and that uncertainty, in Frey and Osborne’s opinion, is a major speed bump on China’s fast-track to AI dominance.

How Innovation Happens

A lot of great inventions have happened by accident, and some of the world’s most successful companies started in garages, dorm rooms, or similarly low-budget, nondescript circumstances (including Google, Facebook, Amazon, and Apple, to name a few). Innovation, the authors point out, often happens “through serendipity and recombination, as inventors and entrepreneurs interact and exchange ideas.”

Frey and Osborne argue that although China has great reserves of talent and a history of building on technologies conceived elsewhere, it doesn’t yet have a glowing track record in terms of innovation. They note that of the 100 most-cited patents from 2003 to present, none came from China. Giants Tencent, Alibaba, and Baidu are all wildly successful in the Chinese market, but they’re rooted in technologies or business models that came out of the US and were tweaked for the Chinese population.

“The most innovative societies have always been those that allowed people to pursue controversial ideas,” Frey and Osborne write. China’s heavy censorship of the internet and surveillance of citizens don’t quite encourage the pursuit of controversial ideas. The country’s social credit system rewards people who follow the rules and punishes those who step out of line. Frey adds that top-down execution of problem-solving is effective when the problem at hand is clearly defined—and the next big leaps in AI are not.

It’s debatable how strongly a culture of social conformism can impact technological innovation, and of course there can be exceptions. But a relevant historical example is the Soviet Union, which, despite heavy investment in science and technology that briefly rivaled the US in fields like nuclear energy and space exploration, ended up lagging far behind primarily due to political and cultural factors.

Similarly, China’s focus on computer science in its education system could give it an edge—but, as Frey told me in an email, “The best students are not necessarily the best researchers. Being a good researcher also requires coming up with new ideas.”

Winner Take All?

Beyond the question of whether China will achieve AI dominance is the issue of how it will use the powerful technology. Several of the ways China has already implemented AI could be considered morally questionable, from facial recognition systems used aggressively against ethnic minorities to smart glasses for policemen that can pull up information about whoever the wearer looks at.

This isn’t to say the US would use AI for purely ethical purposes. The military’s Project Maven, for example, used artificially intelligent algorithms to identify insurgent targets in Iraq and Syria, and American law enforcement agencies are also using (mostly unregulated) facial recognition systems.

It’s conceivable that “dominance” in AI won’t go to one country; each nation could meet milestones in different ways, or meet different milestones. Researchers from both countries, at least in the academic sphere, could (and likely will) continue to collaborate and share their work, as they’ve done on many projects to date.

If one country does take the lead, it will certainly see some major advantages as a result. Brookings Institution fellow Indermit Gill goes so far as to say that whoever leads in AI in 2030 will “rule the world” until 2100. But Gill points out that in addition to considering each country’s strengths, we should consider how willing each is to improve upon its weaknesses.

While China leads in investment and the US in innovation, both nations are grappling with huge economic inequalities that could negatively impact technological uptake. “Attitudes toward the social change that accompanies new technologies matter as much as the technologies, pointing to the need for complementary policies that shape the economy and society,” Gill writes.

Will China’s leadership be willing to relax its grip to foster innovation? Will the US business environment be enough to compete with China’s data, investment, and education advantages? And can both countries find a way to distribute technology’s economic benefits more equitably?

Time will tell, but it seems we’ve got our work cut out for us—and China does too.

Image Credit: Adam Birkett on Unsplash

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through January 16)

January 16, 2021 - 16:00

Lost Passwords Lock Millionaires Out of Their Bitcoin Fortunes
Nathaniel Popper | The New York Times
“Stefan Thomas, a German-born programmer living in San Francisco, has two guesses left to figure out a password that is worth, as of this week, about $220 million. The password will let him unlock a small hard drive, known as an IronKey, which contains the private keys to a digital wallet that holds 7,002 Bitcoin.”


Scientists Have Sequenced Dire Wolf DNA. Thanks, Science!
Angela Watercutter | Wired
“Dire wolves: First of their name, last of their kind. Yes, you read that correctly. According to new research published today in Nature, scientists have finally been able to sequence the DNA of dire wolves—and, to borrow a phrase from the 11 o’clock news, what they found might surprise you.”


These Scientists Have a Wildly Futuristic Plan to Harvest Energy From Black Holes
Luke Dormehl | Digital Trends
“The idea, in essence, is to extract energy from black holes by gathering charged plasma particles as they try to escape from the event horizon, the threshold surrounding a black hole at which escape velocity is greater than the speed of light. To put it in even broader terms: The researchers believe that it would be possible to obtain energy directly from the curvature of spacetime. (And you thought that your new solar panels were exciting!).”


US Grid Will See 80 Percent of Its New Capacity Go Emission-Free
John Timmer | Ars Technica
“Earlier this week, the US Energy Information Agency (EIA) released figures on the new generating capacity that’s expected to start operating over the course of 2021. While plans can obviously change, the hope is that, with its new additions, the grid will look radically different than it did just five years ago.”


Worried About Your Firm’s AI Ethics? These Startups Are Here To Help
Karen Hao | MIT Technology Review
“Parity is among a growing crop of startups promising organizations ways to develop, monitor, and fix their AI models. They offer a range of products and services from bias-mitigation tools to explainability platforms. Initially most of their clients came from heavily regulated industries like finance and health care. But increased research and media attention on issues of bias, privacy, and transparency have shifted the focus of the conversation.”


Who Should Make the Online Rules
Shira Ovide | The New York Times
“There has been lots of screaming about what these [big tech] companies did, but I want us all to recognize that there are few easy choices here. Because at the root of these disputes are big and thorny questions: Is more speech better? And who gets to decide? …The oddity is not that we’re struggling with age-old questions about the trade-offs of free expression. The weird thing is that companies like Facebook and Apple have become such essential judges in this debate.”


He Created the Web. Now He’s Out to Remake the Digital World.
Steve Lohr | The New York Times
“The big tech companies are facing tougher privacy rules in Europe and some American states, led by California. Google and Facebook have been hit with antitrust suits. But Mr. Berners-Lee is taking a different approach: His answer to the problem is technology that gives individuals more power. …The idea is that each person could control his or her own data—websites visited, credit card purchases, workout routines, music streamed—in an individual data safe, typically a sliver of server space.”


Evolution’s Engineers
Kevin Laland | Aeon
“Evolving populations are less like zombie mountaineers mindlessly climbing adaptive peaks, and more like industrious landscape designers, equipped with digging and building apparatuses, remodeling the topography to their own ends. At a time when human niche construction and ecological inheritance are ravaging the planet’s ecology and driving a human population explosion, understanding how organisms retool ecology for their own purposes has never been more pressing.”

Image Credit: Dewang Gupta / Unsplash

Category: Transhumanism

How Many Galaxies Are in the Universe? A New Answer From the Darkest Sky Ever Observed

January 15, 2021 - 19:30

Ordinarily, we point telescopes at some object we want to see in greater detail. In the 1990s astronomers did the opposite. They pointed the most powerful telescope in history, the Hubble Space Telescope, at a dark patch of sky devoid of known stars, gas, or galaxies. But in that sliver of nothingness, Hubble revealed a breathtaking sight: The void was brimming with galaxies.

Astronomers have long wondered how many galaxies there are in the universe, but until Hubble, the galaxies we could observe were far outnumbered by fainter galaxies hidden by distance and time. The Hubble Deep Field series (scientists made two more such observations) offered a kind of core sample of the universe going back nearly to the Big Bang. This allowed astronomers to finally estimate the galactic population to be at least around 200 billion.

Why “at least”? Because even Hubble has its limits.

The farther out (and back in time) you look, the harder galaxies are to see. One reason is the sheer distance the light must travel. A second is the expansion of the universe: the wavelength of light from very distant objects is stretched (redshifted), so these objects can no longer be seen in the primarily ultraviolet and visible portions of the spectrum Hubble was designed to detect. Finally, theory suggests early galaxies were smaller and fainter to begin with and only later merged to form the colossal structures we see today. Scientists are confident these galaxies exist. We just don’t know how many there are.
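The stretching effect follows a simple rule: observed wavelength equals rest wavelength times (1 + z), where z is the redshift. A quick sketch with illustrative numbers (not figures from the study) shows why early-universe light slips out of Hubble’s primary range:

```python
# Redshift stretches light: lambda_observed = lambda_rest * (1 + z).
# Illustrative example: Lyman-alpha emission (121.6 nm, ultraviolet)
# from galaxies at increasing redshift.

LYMAN_ALPHA_NM = 121.6  # rest-frame wavelength in nanometers

def observed_wavelength(rest_nm, z):
    """Wavelength we detect for light emitted at rest_nm by a source at redshift z."""
    return rest_nm * (1 + z)

for z in (2, 6, 10):
    nm = observed_wavelength(LYMAN_ALPHA_NM, z)
    print(f"z = {z:2d}: {nm:7.1f} nm")
# Ultraviolet light from the earliest galaxies is stretched deep into the
# infrared, beyond the range Hubble was primarily designed to detect.
```

At z = 10, for example, 121.6 nm ultraviolet light arrives at roughly 1,338 nm, well into the infrared.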

In 2016, a study published in The Astrophysical Journal by a team led by the University of Nottingham’s Christopher Conselice used a mathematical model of the early universe to estimate how many of those as-yet-unseen galaxies are lurking just beyond Hubble’s sight. Added to existing Hubble observations, their results suggested such galaxies make up 90 percent of the total, leading to a new estimate—that there may be up to two trillion galaxies in the universe.

Such estimates, however, are a moving target. As more observations roll in, scientists can get a better handle on the variables at play and increase the accuracy of their estimates.

Which brings us to the most recent addition to the story.

After buzzing by Pluto and the bizarre Kuiper Belt object Arrokoth, NASA’s New Horizons spacecraft is at the edge of the solar system, cruising toward interstellar space—and recently, it pulled a Hubble. In a study presented this week at the American Astronomical Society meeting and soon to be published in The Astrophysical Journal, a team led by astronomers Marc Postman and Tod Lauer described what they found after training the New Horizons telescope on seven slivers of empty space to try to measure the level of ambient light in the universe.

Their findings, they say, allowed them to establish an upper limit on the number of galaxies in existence and indicate space may be a little less crowded than previously thought. According to their data, the total number of galaxies is more likely in the hundreds of billions, not trillions. “We simply don’t see the light from two trillion galaxies,” Postman said in a release published earlier in the week.

How did they arrive at their conclusion?

The Search for Perfect Darkness

There is one more constraint on Hubble’s observations. Not only can’t it directly resolve early galaxies, it can’t even detect their light due to the diffuse glow of “zodiacal light.” Caused by a halo of dust scattering light within the solar system, zodiacal light is extremely faint, but just like light pollution on Earth, it can obscure even fainter objects in the early universe.

The New Horizons spacecraft has now escaped the domain of zodiacal light and is gazing at the darkest sky yet imaged. This offers the opportunity to measure the background light from beyond our galaxy and compare it to known and expected sources.

Postman told The New York Times that going an order of magnitude further wouldn’t have offered a darker view.

“When you have a telescope on New Horizons way out at the edge of the solar system, you can ask, how dark does space get anyway,” Lauer wrote. “Use your camera just to measure the glow from the sky.”

Still, the measurement was not straightforward. In an article, astrophysicist and writer Ethan Siegel, who was not part of the study, explains how the team meticulously identified, modeled, and removed contributions from “camera noise, scattered sunlight, excess off-axis starlight, crystals from the spacecraft’s thrust, and other instrumental effects.” They also removed any images too close to the Milky Way. After all this, they were left with the faint glow of the universe, and that’s the exciting bit.

The 2016 study predicted that a universe with two trillion galaxies would produce about ten times more light than the galaxies we’ve so far observed indicate. But the New Horizons team only found about twice as much light. This led them to their conclusion there are likely fewer total galaxies lurking out there than previously thought—a number closer to the original Hubble estimate.
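The back-of-the-envelope logic goes roughly like this (an illustrative scaling argument, not the team’s actual analysis):

```python
# Rough scaling argument behind the revised estimate (illustrative numbers).
# The 2016 model: ~2 trillion galaxies would imply ~10x the light of the
# galaxies already observed. New Horizons measured only ~2x that light.

model_galaxies = 2e12        # 2016 estimate of total galaxies
model_light_ratio = 10       # light the model predicts, vs. observed galaxies
measured_light_ratio = 2     # what New Horizons actually saw

# If the galaxy count scales roughly with total light, the estimate shrinks:
revised = model_galaxies * measured_light_ratio / model_light_ratio
print(f"revised estimate ~ {revised:.0e} galaxies")
```

That proportional shrink lands around 4 × 10¹¹ galaxies—hundreds of billions rather than trillions, consistent with the team’s conclusion.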

“Take all the galaxies Hubble can see, double that number, and that’s what we see—but nothing more,” said Lauer.

Star Gazing: The Next Generation

These observations from New Horizons aren’t the end of the story. Our ability to view the earliest universe should get a leg up this year when (fingers crossed) Hubble’s successor, the James Webb Space Telescope, launches and begins operations.

The JWST is set to observe in longer wavelengths than Hubble and is much bigger. These attributes should allow it to see even further back and image those smaller, fainter first galaxies. If all is in working order, adding those galaxies to the census should, like the Hubble Deep Field before it, give us an even clearer picture of the whole.

Whatever number scientists finally land on, it’s unlikely to be anything but mind-bogglingly huge. Even a few hundred billion galaxies means there’s an entire galaxy out there for every star in the Milky Way. Such research will undoubtedly cast even more light on cosmological questions about how the universe formed. But it will also raise the question: amid the vast sea of galaxies, stars, and planets, are we really the only species to ever look out and wonder if we’re alone?

Image Credit: eXtreme Deep Field / NASA

Category: Transhumanism

NASA Will Soon Choose One of These 3 Landers to Go Back to the Moon

January 14, 2021 - 19:33

America’s going back to the moon. It’s been over 50 years since the Apollo missions, when Neil Armstrong and Buzz Aldrin became the first people to walk on the moon in 1969. Both NASA and the current administration have decided it’s high time people walked on the moon again—this time, importantly, those people won’t just be men.

The timeline has shifted a few times—NASA initially set a target of 2028, which Vice President Pence asked the agency to push up to 2024. 2024 now seems unlikely, despite Pence urging NASA to meet the deadline “by any means necessary.”

Though it’s uncertain when Americans will walk on the moon again, there will soon be some certainty around how they’ll do so, as NASA will choose a new moon lander design in February. The other components for a moon mission have already been chosen: the Space Launch System will be the most powerful rocket the agency has ever built, and the Orion spacecraft has been in development since the Constellation program began in 2005. But NASA wants an updated lunar lander, the vehicle astronauts will use to leave the spacecraft and actually, as its name implies, land on the moon.

In April 2020, the agency awarded a total of $967 million in contracts to three different private companies, giving them less than a year to come up with a lander design. Now the time has almost come to pick one of those three. Here are the contenders.

Blue Origin

Best known for its founder Jeff Bezos, Blue Origin is working on a three-stage lander called Blue Moon. And it’s not working alone—the company has partnered with Draper, Lockheed Martin, and Northrop Grumman for various components of the lander. Its modular design resembles the lander used in the Apollo missions; it has a descent stage to bring the lander to the moon’s surface, an ascent stage to carry astronauts back up to the spacecraft, and a transfer stage to move the ascent and descent stages from high lunar orbit to low lunar orbit.

Blue Origin’s moon lander, artist rendering. Image Credit: Blue Origin

The vertical crew cabin would require astronauts to descend to the moon’s surface on a long ladder, a design that could be seen as an advantage, since the crew sits safely high above the surface.


Dynetics

Probably the least well-known of the three contenders, Dynetics is an IT company based in Alabama, and has long been a contractor with both NASA and the Department of Defense. While all of the landers can be refueled on the moon, Dynetics’ actually relies on in-space refueling using cryogenic propellants. The lander would launch with empty propellant tanks, and once it’s in lunar orbit, two more rockets would launch to carry propellant to the lander. Dynetics would mitigate the issue of “boiloff,” where warming temperatures cause some of the propellant to be lost, by doing the two fuel launches two to three weeks apart.

Dynetics’ lunar lander, artist rendering. Image Credit: Dynetics

Unlike Blue Origin’s three-piece lander, Dynetics’ is a single module with thrusters and propellant tanks on either side. It’s specifically designed to be reusable for repeated exploration of the moon, and it’s the only one of the three contenders with a horizontal crew cabin. The barrel-shaped cabin would give astronauts faster and easier access to the moon’s surface, and more space within the cabin itself.


SpaceX

Now a household name, Elon Musk’s SpaceX is designing, perhaps unsurprisingly, the biggest and flashiest lunar lander. It’s so tall, in fact, that astronauts would use an elevator to get from the crew cabin down to the moon’s surface. It goes by the same name as the company’s famous spacecraft, Starship, but has some moon-specific modifications.

SpaceX lunar lander, artist rendering. Image Credit: SpaceX

For starters, the Raptor engines usually used on the Starship are far too powerful for landing on the moon. The lunar Starship will be equipped with lighter thrusters to ease it on and off the moon’s surface, and it won’t have the flaps and heat shield needed for reentering Earth’s atmosphere.

Like the Dynetics lander, Starship will need to be refueled while in orbit, except it will do so in Earth orbit rather than lunar orbit. The lander’s comparatively huge size could be advantageous because it could carry not just the astronauts, but useful cargo like rovers onboard.

Just this week, Intuitive Machines announced it selected SpaceX to launch its two commercial payload missions to the moon on a Falcon 9 rocket in 2022 or later.

A New Mission

Between China landing on the far side of the moon in 2019 and the US paying Russia $90 million to transport American astronauts to the International Space Station (until SpaceX recently took over), it seems the US needs to shake a leg and catch up in the ongoing space race.

NASA’s Artemis program will be the core of its spaceflight and exploration endeavors for the next decade, covering low-Earth orbit, the moon, and Mars. In Greek mythology, Artemis was the twin sister of Apollo, for whom the first moon missions were named; NASA chose the name Artemis as a gesture of inclusion, intending to land the first woman on the moon.

Incoming President Joe Biden has a lot on his plate between the pandemic, a decimated economy, and the other issues that made 2020 such a soul-crushing year. The space program may end up being low on his priority list, especially in the near term. But the wheels have already been set in motion for another American journey to the moon—and we’ll soon have a way to land on it.

Image Credit: NASA

Category: Transhumanism

How Explainable Artificial Intelligence Can Help Humans Innovate

January 13, 2021 - 18:57

The field of artificial intelligence has created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at a superhuman level.

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.

Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.

Learning From Experience

One field of AI, called reinforcement learning, studies how computers can learn from their own experiences. In reinforcement learning, an AI explores the world, receiving positive or negative feedback based on its actions.

This approach has led to algorithms that have independently learned to play chess at a superhuman level and prove mathematical theorems without any human guidance. In my work as an AI researcher, I use reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik’s Cube.
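As a minimal, generic sketch of the feedback loop described above (a toy example, not the author’s Rubik’s Cube system), here is tabular Q-learning in a tiny corridor world, where the agent earns a reward for reaching the goal and a small penalty for every other step:

```python
import random

random.seed(0)

# Toy environment: states 0..4 on a line; action 0 = left, 1 = right.
# Reaching state 4 ends the episode with reward +1; every other step costs 0.01.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    if nxt == GOAL:
        return nxt, 1.0, True
    return nxt, -0.01, False

# Q[s][a] estimates the long-term reward of taking action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise act greedily on current estimates.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        nxt, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[nxt])
        Q[s][a] += alpha * (target - Q[s][a])  # nudge the estimate toward the target
        s = nxt

# The learned greedy policy should now walk straight to the goal.
s, path = 0, [0]
for _ in range(10):  # cap steps in case the policy is still bad
    if s == GOAL:
        break
    s, _, _ = step(s, Q[s].index(max(Q[s])))
    path.append(s)
print(path)
```

The same positive/negative feedback principle scales up, with far larger state spaces and neural networks in place of the table, to systems that learn chess or the Rubik’s Cube.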

Through reinforcement learning, AIs are independently learning to solve problems that even humans struggle to figure out. This has got me and many other researchers thinking less about what AI can learn and more about what humans can learn from AI. A computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.

Peering Into the Black Box

Unfortunately, the minds of superhuman AIs are currently out of reach to us humans. AIs make terrible teachers and are what we in the computer science world call “black boxes.”

AI simply spits out solutions without giving reasons for them. Computer scientists have been trying for decades to open this black box, and recent research has shown that many AI algorithms actually do think in ways that are similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify the animal.

The effort to open up the black box is called explainable AI. My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube.

The Rubik’s Cube is basically a pathfinding problem: Find a path from point A—a scrambled Rubik’s Cube—to point B—a solved Rubik’s Cube. Other pathfinding problems include navigation, theorem proving and chemical synthesis.

My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions.

Solutions to the Rubik’s Cube can be broken down into a few generalized steps—the first step, for example, could be to form a cross while the second step could be to put the corner pieces in place. While the Rubik’s Cube itself has over 10 to the 19th power possible combinations, a generalized step-by-step guide is very easy to remember and is applicable in many different scenarios.
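That “10 to the 19th power” figure can be checked with the standard counting formula for the cube’s reachable states: 8 corners with 3 orientations each, 12 edges with 2 orientations each, divided by 12 for the parity constraints that make some arrangements unreachable.

```python
from math import factorial

# Reachable Rubik's Cube configurations:
# 8 corner placements and orientations, 12 edge placements and orientations,
# divided by 12 for the orientation/permutation parity constraints.
total = (factorial(8) * 3**8 * factorial(12) * 2**12) // 12

print(total)           # 43,252,003,274,489,856,000
print(total > 10**19)  # True — indeed "over 10 to the 19th power"
```

Yet a human-friendly solution guide compresses all of that into a handful of memorable steps, which is exactly the property that makes the cube a good testbed for explainability.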

Approaching a problem by breaking it down into steps is often the default manner in which people explain things to one another. The Rubik’s Cube naturally fits into this step-by-step framework, which gives us the opportunity to open the black box of our algorithm more easily. Creating AI algorithms that have this ability could allow people to collaborate with AI and break down a wide variety of complex problems into easy-to-understand steps.

A step-by-step refinement approach can make it easier for humans to understand why AIs do the things they do. Image Credit: Forest Agostinelli, CC BY-ND

Collaboration Leads to Innovation

Our process starts with using one’s own intuition to define a step-by-step plan thought to potentially solve a complex problem. The algorithm then looks at each individual step and gives feedback about which steps are possible, which are impossible and ways the plan could be improved. The human then refines the initial plan using the advice from the AI, and the process repeats until the problem is solved. The hope is that the person and the AI will eventually converge to a kind of mutual understanding.
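The refine-and-repeat loop described above can be sketched on a toy pathfinding problem (a hypothetical stand-in, not the lab’s actual algorithm): the “human” proposes waypoints, and the “AI” flags any leg that is too big a jump and suggests an intermediate step.

```python
# Toy version of the human/AI plan-refinement loop (purely illustrative).
# Problem: get from 0 to 10 on a number line, moving at most 3 per step.
MAX_STEP = 3

def feasible(a, b):
    """Can a single 'step' cover the leg from a to b?"""
    return abs(b - a) <= MAX_STEP

def refine(plan):
    """AI feedback: find the first infeasible leg and propose a midpoint."""
    for i in range(len(plan) - 1):
        if not feasible(plan[i], plan[i + 1]):
            return i + 1, (plan[i] + plan[i + 1]) // 2  # insert position, suggestion
    return None  # every leg works — the plan is complete

plan = [0, 5, 10]  # initial human plan: the steps are too coarse
while (suggestion := refine(plan)) is not None:
    pos, waypoint = suggestion
    plan.insert(pos, waypoint)  # the human accepts the AI's suggested sub-step

print(plan)  # [0, 2, 5, 7, 10] — every leg is now achievable
```

The real system reasons over Rubik’s Cube macro-steps rather than waypoints, but the shape of the interaction is the same: propose, check, refine, repeat until human and machine converge on a workable plan.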

Currently, our algorithm is able to consider a human plan for solving the Rubik’s Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik’s Cube that a person can understand. Our team’s next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik’s Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.

People are intuitive in a way unmatched by any AI, but machines are far better in their computational power and algorithmic rigor. This back and forth between man and machine utilizes the strengths from both. I believe this type of collaboration will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions and innovations that may have, otherwise, been out of reach.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Serg Antonov / Unsplash

Category: Transhumanism

Meet Assembloids, Mini Human Brains With Muscles Attached

January 12, 2021 - 16:00

It’s not often that a twitching, snowman-shaped blob of 3D human tissue makes someone’s day.

But when Dr. Sergiu Pasca at Stanford University witnessed the tiny movement, he knew his lab had achieved something special. You see, the blob was assembled from three lab-grown chunks of human tissue: a mini-brain, a mini-spinal cord, and a mini-muscle. Each individual component, churned to eerie humanoid perfection inside bubbling incubators, is already a work of scientific genius. But Pasca took the extra step, marinating the three components together inside a soup of nutrients.

The result was a bizarre, Lego-like human tissue that replicates the basic circuits behind how we decide to move. Without external prompting, when churned together like ice cream, the three ingredients physically linked up into a fully functional circuit. The 3D mini-brain, through the information highway formed by the artificial spinal cord, was able to make the lab-grown muscle twitch on demand.

In other words, if you think isolated mini-brains—known formally as brain organoids—floating in a jar is creepy, upgrade your nightmares. The next big thing in probing the brain is assembloids—free-floating brain circuits—that now combine brain tissue with an external output.

The end goal isn’t to freak people out. Rather, it’s to recapitulate our nervous system, from input to output, inside the controlled environment of a Petri dish. An autonomous, living brain-spinal cord-muscle entity is an invaluable model for figuring out how our own brains direct the intricate muscle movements that allow us to stay upright, walk, or type on a keyboard.

It’s the nexus toward more dexterous brain-machine interfaces, and a model to understand when brain-muscle connections fail—as in devastating conditions like Lou Gehrig’s disease or Parkinson’s, where people slowly lose muscle control due to the gradual death of neurons that control muscle function. Assembloids are a sort of “mini-me,” a workaround for testing potential treatments on a simple “replica” of a person rather than directly on a human.

From Organoids to Assembloids

The miniature snippet of the human nervous system has been a long time in the making.

It all started in 2013, when Dr. Madeleine Lancaster, then a postdoc at the Institute of Molecular Biotechnology in Vienna, grew a shockingly intricate 3D replica of human brain tissue inside a whirling incubator. Radically different from standard cell cultures, which grind up brain tissue and reconstruct it as a flat network of cells, Lancaster’s 3D brain organoids were incredibly sophisticated in their recapitulation of the human brain during development. Subsequent studies further solidified their similarity to the developing brain of a fetus—not just in terms of neuron types, but also their connections and structure.

With the finding that these mini-brains sparked with electrical activity, bioethicists increasingly raised red flags that the blobs of human brain tissue—no larger than the size of a pea at most—could harbor the potential to develop a sense of awareness if further matured and with external input and output.

Despite these concerns, brain organoids became an instant hit. Because they’re made of human tissue—often taken from actual human patients and converted into stem-cell-like states—organoids harbor the same genetic makeup as their donors. This makes it possible to study perplexing conditions such as autism, schizophrenia, or other brain disorders in a dish. What’s more, because they’re grown in the lab, it’s possible to genetically edit the mini-brains to test potential genetic culprits in the search for a cure.

Yet mini-brains had an Achilles’ heel: not all were made the same. Rather, depending on the region of the brain that was reverse engineered, the cells had to be persuaded by different cocktails of chemical soups and maintained in isolation. It was a stark contrast to our own developing brains, where regions are connected through highways of neural networks and work in tandem.

Pasca faced the problem head-on. Betting on the brain’s self-assembling capacity, his team hypothesized that it might be possible to grow different mini-brains, each reflecting a different brain region, and have them fuse together into a synchronized band of neuron circuits to process information. Last year, his idea paid off.

In one mind-blowing study, his team grew two separate portions of the brain into blobs, one representing the cortex, the other a deeper part of the brain known to control reward and movement, called the striatum. Shockingly, when put together, the two blobs of human brain tissue fused into a functional couple, automatically establishing neural highways that resulted in one of the most sophisticated recapitulations of a human brain. Pasca crowned this tissue engineering crème-de-la-crème “assembloids,” a portmanteau between “assemble” and “organoids.”

“We have demonstrated that regionalized brain spheroids can be put together to form fused structures called brain assembloids,” said Pasca at the time. “[They] can then be used to investigate developmental processes that were previously inaccessible.”

And if that’s possible for wiring up a lab-grown brain, why wouldn’t it work for larger neural circuits?

Assembloids, Assemble

The new study is the fruition of that idea.

The team started with human skin cells, scraped off of eight healthy people, and transformed them into a stem-cell-like state, called iPSCs. These cells have long been touted as a breakthrough for personalized medical treatment, because each reflects the genetic makeup of its original host.

Using two separate cocktails, the team then generated mini-brains and mini-spinal cords using these iPSCs. The two components were placed together “in close proximity” for three days inside a lab incubator, gently floating around each other in an intricate dance. To the team’s surprise, under the microscope using tracers that glow in the dark, they saw highways of branches extending from one organoid to the other like arms in a tight embrace. When stimulated with electricity, the links fired up, suggesting that the connections weren’t just for show—they’re capable of transmitting information.

“We made the parts,” said Pasca, “but they knew how to put themselves together.”

Then came the ménage à trois. Once the mini-brain and spinal cord formed their double-decker ice cream scoop, the team overlaid them onto a layer of muscle cells—cultured separately into a human-like muscular structure. The end result was a somewhat bizarre and silly-looking snowman, made of three oddly-shaped spherical balls.

Yet against all odds, the brain-spinal cord assembly reached out to the lab-grown muscle. Using a variety of tools, including measuring muscle contraction, the team found that this utterly Frankenstein-like snowman was able to make the muscle component contract—in a way similar to how our muscles twitch when needed.

“Skeletal muscle doesn’t usually contract on its own,” said Pasca. “Seeing that first twitch in a lab dish immediately after cortical stimulation is something that’s not soon forgotten.”

When tested for longevity, the contraption lasted for up to 10 weeks without any sort of breakdown. Far from a one-shot wonder, the isolated circuit worked even better the longer each component was connected.

Pasca isn’t the first to give mini-brains an output channel. Last year, the queen of brain organoids, Lancaster, chopped up mature mini-brains into slices, which were then linked to muscle tissue through a cultured spinal cord. Assembloids are a step up, showing that it’s possible to automatically sew multiple nerve-linked structures together, such as brain and muscle, sans slicing.

The question is what happens when these assembloids become more sophisticated, edging ever closer to the inherent wiring that powers our movements. Pasca’s study targets outputs, but what about inputs? Can we wire input channels, such as retinal cells, to mini-brains that have a rudimentary visual cortex to process those inputs? Learning, after all, depends on examples of our world, which are processed inside computational circuits and delivered as outputs—potentially, muscle contractions.

To be clear, few would argue that today’s mini-brains are capable of any sort of consciousness or awareness. But as mini-brains get increasingly more sophisticated, at what point can we consider them a sort of AI, capable of computation or even something that mimics thought? We don’t yet have an answer—but the debates are on.


Category: Transhumanism

New Research Could Enable Direct Data Transfer From Computers to Living Cells

11 January, 2021 - 17:01

As the modern world produces ever more data, researchers are scrambling to find new ways to store it all. DNA holds promise as an extremely compact and stable storage medium, and now a new approach could let us write digital data directly into the genomes of living cells.

Efforts to repurpose nature’s built-in memory technology aren’t new, but in the last decade the approach has gained renewed interest and seen some major progress. That’s been driven by an explosion of data that shows no signs of slowing down. By 2025, it’s estimated that 463 exabytes will be created each day globally.

Storing all this data could quickly become impractical using conventional silicon technology, but DNA could hold the answer. For a start, its information density is millions of times better than conventional hard drives, with a single gram of DNA able to store up to 215 million gigabytes.

It’s also highly stable if properly stored. In 2017, researchers were able to extract the full genome of an extinct horse species from 700,000 years ago. Learning to store and manipulate data using the same language as nature could also open the door to a host of new capabilities in biotechnology.

The main complication lies in finding a way to interface the digital world of computers and data with the biochemical world of genetics. At present this relies on synthesizing DNA in the lab, and while costs are falling rapidly, this is still a complicated and expensive business. Once synthesized, the sequences then have to be carefully stored in vitro until they’re ready to be accessed again, or they can be spliced into living cells using CRISPR gene editing technology.

Now though, researchers from Columbia University have demonstrated a new approach that can directly convert digital electronic signals into genetic data stored in the genomes of living cells. That could lead to a host of applications both for data storage and beyond, says Harris Wang, who led the research published in Nature Chemical Biology.

“Imagine having cellular hard-drives that can compute and physically reconfigure in real time,” he wrote in an email to Singularity Hub. “We feel that the first step is to be able to directly encode binary data into cells, without having to do in vitro DNA synthesis.

“This is perhaps the hardest part of all DNA storage approaches. If you can get the cells to directly talk to a computer, and interface its DNA-based memory system with a silicon-based memory system, then there are lots of possibilities in the future.”

The work builds on a CRISPR-based cellular recorder Wang had previously designed for E. coli bacteria, which detects the presence of certain DNA sequences inside the cell and records this signal into the organism’s genome.

The system includes a DNA-based “sensing module” that produces elevated levels of a “trigger sequence” in response to specific biological signals. These sequences are incorporated into the recorder’s “DNA ticker tape” to document the signal.

In this new work, Wang and colleagues adapted the sensing module to work with a biosensor developed by another team that reacts to electrical signals. Large populations of the bacteria were then placed in a device made up of a series of chambers that enabled the team to expose them to electrical signals.

When they applied a voltage, levels of the trigger sequence were elevated and recorded into the DNA ticker tape. Stretches with high proportions of trigger sequence were used to represent a binary “1” and their absence a “0,” allowing the researchers to directly encode digital information into the bacteria’s genome.
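The bit-calling step can be sketched as a simple threshold rule. This is an illustrative sketch, not the paper’s actual pipeline; the `call_bits` function and the 0.5 cutoff are assumptions made for this example.

```python
# Illustrative sketch (not the paper's actual pipeline): each population's
# sequenced "ticker tape" yields a fraction of reads containing the trigger
# sequence; call a binary 1 above a threshold and 0 below. The 0.5 cutoff
# is an assumption made for this example.
def call_bits(trigger_fractions, threshold=0.5):
    return "".join("1" if f >= threshold else "0" for f in trigger_fractions)

bits = call_bits([0.92, 0.05, 0.71, 0.10])  # "1010"
```

In the study itself, the boundary between “high” and “low” trigger levels was handled by a trained classifier rather than a fixed cutoff.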

The amount of data that a single cell can hold is pretty small, just three bits. So the researchers devised a way to encode 24 separate populations of bacteria with different 3-bit chunks of data simultaneously for a total of 72 bits. They used this to encode the message “hello world!” into the bacteria, and showed that by sequencing the combined population and using a specially-designed classifier, they could retrieve the message with 98 percent accuracy.
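The arithmetic works out because “hello world!” is 12 characters, and 12 characters at 6 bits each is exactly 72 bits, or 24 three-bit chunks. Here’s a hypothetical sketch of such a packing; the 6-bit alphabet below is an invention for illustration, not the paper’s actual encoding scheme.

```python
# Hypothetical 6-bit packing of "hello world!" into 24 three-bit chunks,
# one per bacterial population. The alphabet is an illustrative assumption.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 !.,?"
assert len(ALPHABET) <= 64  # indices must fit in 6 bits

def encode(message):
    bits = "".join(format(ALPHABET.index(c), "06b") for c in message)
    return [bits[i:i + 3] for i in range(0, len(bits), 3)]

def decode(chunks):
    bits = "".join(chunks)
    return "".join(ALPHABET[int(bits[i:i + 6], 2)]
                   for i in range(0, len(bits), 6))

chunks = encode("hello world!")  # 24 chunks of 3 bits each
```

Sequencing the mixed population and classifying each chunk back to bits then recovers the message, as the team demonstrated with 98 percent accuracy.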

Obviously 72 bits is a long way off the storage capacity of modern hard drives, and even cell-free DNA storage techniques now deal in gigabytes. But Wang says this is just a proof of concept, and there is plenty of scope for boosting the efficiency of the CRISPR machinery that powers the recorder, the length of the ticker tape that can be reliably read, and even the electronics used to encode the data.

“All of these things are going to improve over the next few years and I definitely think it is possible to massively scale up the capacity of the system by several orders of magnitude even in the short term,” he said.

And storing data in cells rather than in vitro has a number of significant benefits, he added. For a start, it’s much cheaper to amplify or duplicate the data because you can simply grow more cells rather than having to carry out complex artificial DNA synthesis. In the paper the team showed that the recorded information remained stable for between 60 and 80 generations of cells.

Cells also already have a native capacity to keep DNA safe from environmental disturbances. They demonstrated this by adding the E. coli cells to unsterilized potting soil and then reliably retrieving a 52-bit message by sequencing the combined soil microbial community.

Perhaps most exciting, though, is the possibility of coupling this data recording ability to emerging research on biocomputers. Researchers have already started to engineer cells’ DNA to allow them to carry out logic and memory operations, but creating a direct interface between silicon and genomes could significantly accelerate our ability to reprogram cells for our own devices.

Image Credit: Ricarda Mölck from Pixabay

Category: Transhumanism

The World’s Oldest Story? Astronomers Say Global Myths About ‘Seven Sisters’ Stars May Reach Back 100,000 Years

10 January, 2021 - 18:30

In the northern sky in December is a beautiful cluster of stars known as the Pleiades, or the “seven sisters.” Look carefully and you will probably count six stars. So why do we say there are seven of them?

Many cultures around the world refer to the Pleiades as “seven sisters,” and also tell quite similar stories about them. After studying the motion of the stars very closely, we believe these stories may date back 100,000 years to a time when the constellation looked quite different.

The Sisters and the Hunter

In Greek mythology, the Pleiades were the seven daughters of the Titan Atlas. He was forced to hold up the sky for eternity, and was therefore unable to protect his daughters. To save the sisters from being raped by the hunter Orion, Zeus transformed them into stars. But the story says one sister fell in love with a mortal and went into hiding, which is why we only see six stars.

An Australian Aboriginal interpretation of the constellation of Orion from the Yolngu people of Northern Australia. The three stars of Orion’s belt are three young men who went fishing in a canoe, and caught a forbidden king-fish, represented by the Orion Nebula. Drawing by Ray Norris based on Yolngu oral and written accounts.

A similar story is found among Aboriginal groups across Australia. In many Australian Aboriginal cultures, the Pleiades are a group of young girls, and are often associated with sacred women’s ceremonies and stories. The Pleiades are also important as an element of Aboriginal calendars and astronomy, and for several groups their first rising at dawn marks the start of winter.

Close to the Seven Sisters in the sky is the constellation of Orion, which is often called “the saucepan” in Australia. In Greek mythology Orion is a hunter. This constellation is also often a hunter in Aboriginal cultures, or a group of lusty young men. The writer and anthropologist Daisy Bates reported people in central Australia regarded Orion as a “hunter of women,” and specifically of the women in the Pleiades. Many Aboriginal stories say the boys, or man, in Orion are chasing the seven sisters—and one of the sisters has died, or is hiding, or is too young, or has been abducted, so again only six are visible.

The Lost Sister

Similar “lost Pleiad” stories are found in European, African, Asian, Indonesian, Native American, and Aboriginal Australian cultures. Many cultures regard the cluster as having seven stars, but acknowledge only six are normally visible, and then have a story to explain why the seventh is invisible.

Why are the Australian Aboriginal stories so similar to the Greek ones? Anthropologists used to think Europeans might have brought the Greek story to Australia, where it was adapted by Aboriginal people for their own purposes. But the Aboriginal stories seem to be much, much older than European contact. And there was little contact between most Australian Aboriginal cultures and the rest of the world for at least 50,000 years. So why do they share the same stories?

Barnaby Norris and I suggest an answer in a paper to be published by Springer early next year in a book titled Advancing Cultural Astronomy, a preprint for which is available here.

All modern humans are descended from people who lived in Africa before they began their long migrations to the far corners of the globe about 100,000 years ago. Could these stories of the seven sisters be so old? Did all humans carry these stories with them as they traveled to Australia, Europe, and Asia?

Moving Stars

The positions of the stars in the Pleiades today and 100,000 years ago. The star Pleione, on the left, was a bit further away from Atlas in 100,000 BC, making it much easier to see. Credit: Ray Norris

Careful measurements with the Gaia space telescope and others show the stars of the Pleiades are slowly moving in the sky. One star, Pleione, is now so close to the star Atlas they look like a single star to the naked eye.

But if we take what we know about the movement of the stars and rewind 100,000 years, Pleione was further from Atlas and would have been easily visible to the naked eye. So 100,000 years ago, most people really would have seen seven stars in the cluster.
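The rewind itself is just linear extrapolation of each star’s proper motion. The sketch below uses placeholder numbers, not Gaia’s measured values for Pleione or Atlas, purely to show the scale of the effect.

```python
# Back-of-the-envelope "rewind" of a star's sky position assuming constant
# proper motion. All numbers here are placeholders for illustration,
# not Gaia's actual measurements.
MAS_PER_DEG = 3.6e6  # milliarcseconds per degree

def position_years_ago(ra_deg, dec_deg, pm_ra_mas_yr, pm_dec_mas_yr, years):
    """Extrapolate a sky position backward in time linearly."""
    return (ra_deg - pm_ra_mas_yr * years / MAS_PER_DEG,
            dec_deg - pm_dec_mas_yr * years / MAS_PER_DEG)

# A relative drift of just 2 mas/yr between two stars accumulates to
# 200,000 mas over 100,000 years -- that is, 200 arcseconds, far more
# than the eye's resolution limit of roughly 60 arcseconds.
drift_deg = 2 * 100_000 / MAS_PER_DEG
```

Even a small relative drift, sustained over 100,000 years, is enough to pull a blended pair of stars far enough apart to see separately.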

We believe this movement of the stars can help to explain two puzzles: the similarity of Greek and Aboriginal stories about these stars, and the fact so many cultures call the cluster “seven sisters” even though we only see six stars today.

A simulation showing how the stars Atlas and Pleione would have appeared to a normal human eye today and in 100,000 BC. Image Credit: Ray Norris

Is it possible the stories of the Seven Sisters and Orion are so old our ancestors were telling these stories to each other around campfires in Africa, 100,000 years ago? Could this be the oldest story in the world?


We acknowledge and pay our respects to the traditional owners and elders, both past and present, of all the Indigenous groups mentioned in this paper. All Indigenous material has been found in the public domain.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through January 9)

9 January, 2021 - 16:00

This Avocado Armchair Could Be the Future of AI
Will Douglas Heaven | MIT Technology Review
“OpenAI has extended GPT-3 with two new models that combine [natural language processing] with image recognition to give its AI a better understanding of everyday concepts. … ‘We live in a visual world,’ says Ilya Sutskever, chief scientist at OpenAI. ‘In the long run, you’re going to have models which understand both text and images. AI will be able to understand language better because it can see what words and sentences mean.’”


Elon Musk Has Become the World’s Richest Person, as Tesla Stock Rallies
Gillian Friedman | The New York Times
“Elon Musk, the chief executive of Tesla and SpaceX, is now the richest person in the world. An increase in Tesla’s share price on Thursday pushed Mr. Musk past Jeff Bezos, the founder of Amazon, on the Bloomberg Billionaires Index, a ranking of the world’s 500 wealthiest people. ‘How strange,’ Mr. Musk said on Twitter. ‘Well, back to work,’ he added.”


The World’s Cryptocurrency Market Is Now Worth More Than $1 Trillion
Timothy B. Lee | Ars Technica
“The price of [Bitcoin] the oldest virtual currency has risen to almost $40,000, pushing the value of all bitcoins in circulation up to more than $700 billion. Ether, the cryptocurrency of the Ethereum network, is now worth more than $140 billion. Then there’s a long list of less valuable cryptocurrencies, including Tether at $22 billion, Litecoin at $11 billion, and Bitcoin Cash at $8 billion.”


A 25-Year-Old Bet Comes Due: Has Tech Destroyed Society?
Steven Levy | Wired
“Much more than a thousand bucks was at stake: The bet was a showdown between two fiercely opposed views on the nature of progress. In a time of climate crisis, a pandemic, and predatory capitalism, is optimism about humanity’s future still justified? [Kevin] Kelly and [Kirkpatrick] Sale each represent an extreme side of the divide. For the men involved, the bet’s outcome would be a personal validation—or repudiation—of their lifelong quests.”


New Quantum Algorithms Finally Crack Nonlinear Equations
Max G. Levy | Quanta
“‘[Chaos in nonlinear systems] is part of why it’s difficult to predict the weather or understand complicated fluid flow,’ said Andrew Childs, a quantum information researcher at the University of Maryland. ‘There are hard computational problems that you could solve, if you could [figure out] these nonlinear dynamics.’ That may soon be possible.”


Hundreds of Google Employees Unionize, Culminating Years of Activism
Kate Conger | The New York Times
“The union’s creation is highly unusual for the tech industry, which has long resisted efforts to organize its largely white-collar work force. It follows increasing demands by employees at Google for policy overhauls on pay, harassment and ethics, and is likely to escalate tensions with top leadership.”


Robots Made of Ice Could Build and Repair Themselves on Other Planets
Evan Ackerman | IEEE Spectrum
“…even if (say) the Mars rovers did have the ability to swap their own wheels when they got worn out, where are you going to get new robot wheels on Mars, anyway? And this is the bigger problem—finding the necessary resources to keep robots running in extreme environments. …You can’t make wheels out of solar power, but you can make wheels, and other structural components, out of another material that can be found just lying around all over the place: ice.”


Galaxy-Size Bubbles Discovered Towering Over the Milky Way
Charlie Wood | Quanta
“When Peter Predehl, an astrophysicist at the Max Planck Institute for Extraterrestrial Physics in Germany, first laid eyes on the new map of the universe’s hottest objects, he immediately recognized the aftermath of a galactic catastrophe. A bright yellow cloud billowed tens of thousands of light-years upward from the Milky Way’s flat disk, with a fainter twin reflected below.”


VR Is Not a Hit. That’s OK.
Shira Ovide | The New York Times
“Not every technology needs to be in the hands of billions of people to make a difference. Finding a comfy niche can be good enough. …[VR and AR] have remained far outside the mainstream. As Kevin [Roose’s] column showed, that doesn’t mean that these technologies are destined for the dustbin of failure. It highlights the vast middle ground between a flop and a technology used by billions.”

Image Credit: Daniel Lincoln / Unsplash

Category: Transhumanism

These Futuristic Flying Ambulances May Soon Be Zooming Around New York

8 January, 2021 - 16:00

Ambulance use surged in 2020 because of the Covid-19 pandemic, even as emergency medical service providers struggled due to the revenue hit they took from delayed and canceled elective procedures. While we’re fervently hoping that far fewer people will need ambulances this year, there may soon be a whole new means of emergency transportation, at least in New York: flying ambulances.

Israeli aerospace company Urban Aeronautics announced this week that it sold its first four vertical takeoff and landing (VTOL) aircraft to Hatzolah Air, a nonprofit emergency medical air transport provider based in New York. The organization already operates fixed-wing aircraft (meaning propeller-driven or powered by a jet engine, with wings that don’t move) as part of its emergency missions.

To be clear, the “flying ambulance” isn’t a new concept; flying ambulances have existed for a long time in the form of helicopters and planes. In fact, the Association of Air Medical Services estimates that around 550,000 people get medevaced in the US each year.

But Urban Aeronautics’ Cormorant CityHawk, as the aircraft is called, will bring some functional new features to the skies. Though it’s lightweight and has a compact footprint, its interior cabin is 20 to 30 percent larger than that of a helicopter, meaning it will be able to fit two EMTs, the patient plus a companion, and medical equipment (plus the pilot) without things getting too cramped.

The CityHawk is jet-propelled, so the absence of a spinning rotor with a wide diameter will make it more nimble, allowing it to land in places that aren’t helipads. “The combination of a relatively small external footprint, high payload, and a large and spacy cabin allows it to truly operate safely from anywhere within the city, near obstacles, and in the vicinity of people, with the peace of mind and safety of a car,” Nimrod Golan-Yanay, CEO of Urban Aeronautics, told Digital Trends. It’s also reportedly “much quieter” than comparable helicopters.

Though the contract between Urban Aeronautics and Hatzolah Air has been signed, getting the CityHawks into the skies will be a multi-year process. Engineers from both organizations will work together on the aircraft’s operational requirements, and the CityHawk will need to be granted regulatory permission before beginning flights.

Regulations for VTOL aircraft are different from those for drones. VTOLs can be autonomous or piloted; what makes them unique is that they use the same engine for vertical and horizontal flight by altering the path of the thrust. The CityHawk’s thrust is generated by two ducted fans, one at the front of the aircraft and one at the rear. Urban Aeronautics also says it’s working on a hydrogen-powered model.

Yet to be discussed is how much a flight in a CityHawk will cost. Hatzolah won’t be out to make a profit from its services, but given the cost of current medical airlifting and even of ground ambulances, it likely won’t be cheap. A May 2020 study at the University of Michigan noted that people who use medical helicopters or planes in an emergency can later be billed up to $20,000—and insurance often covers only a fraction, if any, of this sum.

Interestingly, a 2017 study done at the University of Kansas found that the advent of Uber caused ambulance use to drop by seven percent—apparently people in less dire straits have the presence of mind to avoid huge bills, even in their moment of crisis. Uber is actively working on getting flying taxis into the air, and while that’s probably still a few years away—due both to the regulatory environment and limitations of the technology—how might the cost of a flying Uber (which, of course, won’t have EMTs or life support equipment on board) compare to that of a Hatzolah Air ambulance?

And if Mohamed won’t come to the mountain, there’s also the option of a paramedic wearing a jet pack flying to the injured or ill person’s side—though this likely comes with a hefty price tag, too.

Let’s hope that between all these emergency transportation options of the future, we’ll see fewer lives lost—that is, after all, the end goal of all this technology.

Image Credit: Urban Aeronautics

Category: Transhumanism

SETI: New Signal Excites Alien Hunters—Here’s How We Could Find Out if It’s Real

7 January, 2021 - 16:00

The $100 million Breakthrough Listen Initiative, founded by the billionaire technology and science investor Yuri Milner and his wife Julia, has identified a mysterious radio signal that seems to come from the nearest star to the sun, Proxima Centauri. This has generated a flood of excitement in the press and among scientists themselves. The discovery, which was reported by the Guardian but has yet to be published in a scientific journal, may be the search for extraterrestrial intelligence’s (SETI) first bona fide candidate signal. It has been dubbed Breakthrough Listen Candidate 1, or simply BLC-1.

Although the Breakthrough Listen team are still working on the data, we know that the radio signal was detected by the Parkes telescope in Australia while it was pointing at Proxima Centauri, which is thought to be orbited by at least one habitable planet. The signal was present for the full observation, lasting several hours. It also was absent when the telescope pointed in a different direction.

Artist’s impression of a planet orbiting Proxima Centauri. ESO/M. Kornmesser/wikipedia, CC BY-SA

The signal was “narrow-band,” meaning it only occupied a slim range of radio frequencies. And it drifted in frequency in a way that you would expect if it came from a moving planet. These characteristics are exactly the kind of attributes the SETI scientists have been looking for since the astronomer Frank Drake first began the pioneering initiative some 60 years ago.

While this represents remarkable progress in our pursuit of the ultimate question of whether we are alone in the universe, the BLC-1 signal also presents some food for thought on how we conduct these searches. In particular, BLC-1 highlights a problem that has dogged SETI research right from the beginning: disappearing signals. BLC-1 hasn’t been seen since it was first detected in the spring of 2019.

If BLC-1 finally emerges as a true SETI signal candidate, it will be the first since the “Wow! signal” recorded back in 1977. This is perhaps the most famous example of an inconclusive SETI candidate—it was never observed again. That doesn’t mean it cannot be extraterrestrial in nature. The perfect celestial alignment of moving and potentially rotating transmitters and receivers, separated by interstellar distances, is always likely to be a fortuitous and sometimes temporary circumstance.

Nevertheless, this represents a challenge for the Breakthrough Listen team. If BLC-1 is never seen to repeat, it will be very difficult to conduct the kind of detailed follow-up that will fully convince scientists that it was a message from aliens. Skeptics will rightly argue that this is more likely to be either a new form of human-generated radio interference or a rare feature of the complex observing instrumentation itself.

Indeed, it may never be possible to provide really compelling evidence of the extraterrestrial nature of a SETI event based on a telescope with a single dish, such as Parkes. This is especially the case for one-off events.

Ways Forward

One way forward would be to abandon the traditional approach of using large single dishes for SETI. While a parabolic dish has the useful property of being sensitive to a fairly large area of sky, if a candidate signal is detected, there is no way of knowing exactly where it came from. So, while the Parkes telescope was nominally pointing at Proxima Centauri, literally hundreds of thousands of other galactic stars were also present in the field of view. Ultimately, any one of them could potentially be the source of BLC-1.

We can overcome this problem by observing with several large dishes simultaneously, preferably separated by hundreds and even thousands of kilometers. By combining their signals using a powerful technique known as Very Long Baseline Interferometry, we can pinpoint the position of a signal with exquisite accuracy, such as to a single star.

For nearby systems such as Proxima Centauri, we can achieve a precision of approximately one thousandth of an astronomical unit (the distance between the sun and Earth). This should allow us to identify not just the stellar system but the associated planet that transmitted the signal.
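That figure can be sanity-checked from the parallax definition, under which 1 AU at a distance of 1 parsec subtends 1 arcsecond. Taking Proxima Centauri’s distance as roughly 1.3 parsecs (the function name below is ours, for illustration):

```python
import math

AU_M = 1.495978707e11   # astronomical unit, in meters
PC_M = 3.0856775814e16  # parsec, in meters

def subtended_angle_arcsec(size_au, distance_pc):
    # Small-angle approximation: angle (radians) = size / distance.
    radians = size_au * AU_M / (distance_pc * PC_M)
    return math.degrees(radians) * 3600.0

theta = subtended_angle_arcsec(0.001, 1.30)  # ~0.0008 arcsec
```

So a precision of one thousandth of an astronomical unit at Proxima corresponds to sub-milliarcsecond astrometry, which is within reach of Very Long Baseline Interferometry.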

With such an approach, the motion on the sky of most signals could be measured in a year or even less. There are other advantages to observing with an interferometric array of telescopes, such as having many completely independent telescopes detecting the same signal.

In addition, radio interference from Earth wouldn’t be registered by telescope sites separated by hundreds of kilometers. So the human-made interference that has contributed to so many false positives for SETI, and has included orbiting satellites and even microwave ovens, would completely disappear.

This kind of interferometry is a well-established technique that has been around since the late 1960s. So why are we not doing SETI with it systematically? One reason is that combining data together from an array of telescopes requires more effort in almost all regards, including greater computing resources. An observation of a few minutes would generate many terabytes of data (1 terabyte is 1,024 gigabytes).

Artist’s impression of the Square Kilometer Array. SPDO/TDP/DRAO/Swinburne Astronomy Productions – SKA Project Development Office and Swinburne Astronomy Productions, CC BY-SA

But none of these issues are show stoppers, especially as technology continues to advance at unprecedented rates. Perhaps a more important factor is human inertia. Until recently, the SETI community has been quite conservative in its approach, with staff traditionally drawn from single-dish telescopes. These scientists aren’t necessarily familiar with the quirks and foibles of interferometric arrays.

Luckily, that’s finally changing. Breakthrough Listen now looks towards incorporating arrays such as MeerKAT, the Jansky Very Large Array (JVLA), and eventually the Square Kilometer Array (SKA) in their future survey programs. In the meantime, prepare for a rising tide of ambiguous radio events—and hopefully the reappearance of BLC-1. Determining the precise location and motion of these signals may be the only way of reaching unequivocal conclusions.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: CSIRO/wikipedia

Category: Transhumanism

Start the New Year Right: By Watching These Robots’ Awesome Dance Moves

6 January, 2021 - 16:00

2020 was a tough year. There was almost nothing good about it, and we saw it off with a “good riddance” and hopes for a better 2021. But robotics company Boston Dynamics took a different approach to closing out the year: when all else fails, why not dance?

The company released a video last week that I dare you to watch without laughing—or at the very least, cracking a pretty big smile. Because dancing robots are funny. And it’s not just one dancing robot, it’s four of them: two humanoid Atlas bots, one four-legged Spot, and one Handle, a bot-on-wheels built for materials handling.

The robots’ killer moves look almost too smooth and coordinated to be real, leading many to speculate that the video was computer-generated. But if you can trust Elon Musk, there’s no CGI here.

This is not CGI

— Elon Musk (@elonmusk) December 29, 2020

Boston Dynamics has gone through a lot of changes in the last ten years; it was acquired by Google in 2013, then sold to Japanese conglomerate SoftBank in 2017 before being acquired again by Hyundai just a few weeks ago for $1.1 billion. But this isn’t the first time the company has taught a robot to dance and made a video for all the world to enjoy; Spot tore up the floor to “Uptown Funk” back in 2018.

Four-legged Spot went commercial in June, with a hefty price tag of $74,500, and was put to some innovative pandemic-related uses, including remotely measuring patients’ vital signs and reminding people to social distance.

Hyundai plans to implement its newly-acquired robotics prowess for everything from service and logistics robots to autonomous driving and smart factories.

They’ll have their work cut out for them. Besides being hilarious, kind of heartwarming, and kind of creepy all at once, the robots’ new routine is pretty impressive from an engineering standpoint. Compare it to a 2016 video of Atlas trying to pick up a box (I know it’s a machine with no feelings, but it’s hard not to feel a little bit bad for it, isn’t it?), and it’s clear Boston Dynamics’ technology has made huge strides. It wouldn’t be surprising if, in two years’ time, we see a video of a flash mob of robots whose routine includes partner dancing and back flips (which, admittedly, Atlas can already do).

In the meantime, though, this one is pretty entertaining—and not a bad note on which to start the new year.

Image Credit: Boston Dynamics

Category: Transhumanism

2021 Could Be a Banner Year for AI—If We Solve These 4 Problems

5 January, 2021 - 16:00

If AI has anything to say to 2020, it’s “you can’t touch this.”

Last year may have severed our connections with the physical world, but in the digital realm, AI thrived. Take NeurIPS, the crown jewel of AI conferences. While lacking the usual backdrop of the dazzling mountains of British Columbia or the beaches of Barcelona, the annual AI extravaganza highlighted a slew of “big picture” problems—bias, robustness, generalization—that will occupy the field for years to come.

On the nerdier side, scientists further explored the intersection between AI and our own bodies. Core concepts in deep learning, such as backpropagation, were considered a plausible means by which our brains “assign fault” in biological networks—allowing the brain to learn. Others argued it’s high time to double-team intelligence, combining the reigning AI “golden child” method—deep learning—with other methods, such as those that guide efficient search.
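For readers unfamiliar with the term, backpropagation’s “fault assignment” is just the gradient of the error with respect to each weight. A minimal, self-contained sketch on synthetic data (a single linear neuron, not a biological model):

```python
import numpy as np

# Gradient descent on one linear neuron: the gradient "assigns fault" to
# each weight in proportion to its contribution to the prediction error.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))        # synthetic inputs
true_w = np.array([1.0, -2.0, 0.5])  # weights we hope to recover
y = x @ true_w                       # targets

w = np.zeros(3)
for _ in range(200):
    error = x @ w - y                # prediction error per example
    grad = x.T @ error / len(x)      # blame assigned to each weight
    w -= 0.1 * grad                  # nudge weights to reduce the error

# After training, w ends up very close to true_w.
```

The debate in neuroscience is whether real neural circuits can compute anything like that gradient signal, given that biological synapses lack the exact error-routing machinery deep networks use.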

Here are four areas we’re keeping our eyes on in 2021. They touch upon outstanding AI problems, such as reducing energy consumption, nixing the need for exorbitant numbers of training examples, and teaching AI some good ol’ common sense.

Greed: Less Than One-Shot Learning

You’ve heard this a billion times: deep learning is extremely greedy, in that the algorithms need thousands of examples (if not more) to show basic signs of learning, such as identifying a dog or a cat, or making Netflix or Amazon recommendations.

It’s extremely time-consuming, wasteful in energy, and a head-scratcher in that it doesn’t match our human experience of learning. Toddlers need to see just a few examples of something before they remember it for life. Take the concept of “dog”—regardless of the breed, a kid who’s seen a few dogs can recognize a slew of different breeds without ever having laid eyes on them. Now take something completely alien: a unicorn. A kid who understands the concept of a horse and a narwhal can infer what a unicorn looks like by combining the two.

In AI speak, this is “less than one-shot” learning, a sort of holy-grail-like ability that allows an algorithm to learn more objects than the number of examples it was trained on. If realized, the implications would be huge. Currently-bulky algorithms could potentially run smoothly on mobile devices with lower processing capabilities. Any sort of “inference,” even if it doesn’t come with true understanding, could make self-driving cars far more efficient at navigating our object-filled world.

Last year, one team from Canada suggested the goal isn’t a pipe dream. Building on work from MIT analyzing hand-written digits—a common “toy problem” in computer vision—they distilled 60,000 images into 5 using a concept called “soft labels.” Rather than specifying what each number should look like, they labeled each digit—say, a “3”—as a percentage of “3,” or “8,” or “0.” Shockingly, the team found that with carefully-constructed labels, just two examples could in theory encode thousands of different objects. Karen Hao at MIT Technology Review gets into more detail here.
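To get a feel for the idea, here’s a toy sketch of soft labels (made-up numbers and a made-up nearest-prototype classifier, not the study’s actual setup). The point is that two carefully soft-labeled prototypes can, in principle, separate three classes:

```python
# A hard label commits to one class; a soft label spreads probability
# mass across classes, letting one example carry information about several.
hard_label_3 = [0, 0, 0, 1, 0]               # "this is definitely a 3"
soft_label_3 = [0.05, 0.0, 0.1, 0.7, 0.15]   # "mostly 3, a bit like 2 and 4"

def predict(soft_labels, weights):
    """Blend the soft labels of the prototypes, weighted by similarity."""
    num_classes = len(soft_labels[0])
    scores = [0.0] * num_classes
    for label, w in zip(soft_labels, weights):
        for c in range(num_classes):
            scores[c] += w * label[c]
    return scores.index(max(scores))

# Two prototypes, three recoverable classes ("less than one-shot"):
prototypes = [[0.6, 0.4, 0.0], [0.0, 0.4, 0.6]]
print(predict(prototypes, [0.9, 0.1]))  # near prototype 0 -> class 0
print(predict(prototypes, [0.1, 0.9]))  # near prototype 1 -> class 2
print(predict(prototypes, [0.5, 0.5]))  # between the two  -> class 1
```

An example halfway between the two prototypes lands in a third class neither prototype is “mostly” labeled as, which is the trick that lets a handful of soft-labeled examples encode many objects.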

Brittleness: A Method to Keep AI Hacker-Proof

For everything AI can do, it’s poor at defending against insidious attacks on its input data. Slight or seemingly random perturbations to a dataset—often undetectable by the human eye—can enormously alter the final output, a weakness that earns an algorithm the label “brittle.” Too abstract? An AI trained to recognize cancer from a slew of medical scans, annotated in yellow marker by a human doctor, could learn to associate “yellow” with “cancer.” A more malicious example is nefarious tampering. Stickers placed on a roadway can trick Tesla’s Autopilot system into mistaking lanes and careening into oncoming traffic.

Overcoming brittleness requires AI to learn a certain level of flexibility, and deliberate sabotage—“adversarial attacks”—is an increasingly recognized problem. Here, hackers can change the AI’s decision-making process with carefully-crafted inputs. When it comes to network security, medical diagnoses, or other high-stakes uses, building defense systems against these attacks is critical.

This year, a team from the University of Illinois proposed a powerful way to make deep learning systems more resilient. They used an iterative approach, having two neural nets battle it out—one for image recognition, and the other for generating adversarial attacks. Like a cat-and-mouse game, the “enemy” neural net tries to fool the computer vision network into recognizing things that are fictitious; the latter network fights back. While far from perfect, the study highlights one increasingly popular approach to make AI more resilient and trustworthy.
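The flavor of such an attack is easy to demonstrate. Below is a toy, FGSM-style perturbation against a simple linear scorer (a stand-in for illustration, not the Illinois team’s method): each input feature is nudged slightly in the direction that hurts the classifier most, and the decision flips.

```python
# Toy linear "classifier": score > 0 means class A, score < 0 means class B.
def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, eps):
    # For a linear model the gradient of the score w.r.t. x is just w;
    # push each feature of x against the sign of that gradient.
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8]
x = [1.0, 1.0, 1.0]                 # score roughly 1.0 -> confidently class A
x_adv = fgsm_perturb(w, x, eps=0.7)

print(score(w, x))      # positive -> class A
print(score(w, x_adv))  # negative -> decision flipped to class B
```

Each feature moved by only 0.7, yet the classification reverses; real attacks on deep networks use far smaller, visually imperceptible nudges computed from the network’s own gradients.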

AI Savant Syndrome: Learning Common Sense

One of the most impressive algorithms this year is GPT-3, a marvel by OpenAI that spits out eerily human-like language. Dubbed “one of the most interesting and important AI systems ever produced,” GPT-3 is the third generation of an algorithm that produces writing so “natural” that at a glance it’s hard to tell machine from human.

Yet GPT-3’s language proficiency is, upon deeper inspection, just a thin veil of “intelligence.” Because it’s trained on human language, it’s also locked into the intricacies and limitations of our everyday phrases—without any understanding of what they mean in the real world. It’s akin to learning slang from Urban Dictionary instead of living it. An AI may learn to associate “rain” with “cats and dogs” in all situations, gaining its inference from the common vernacular describing massive downpours.

One way to make GPT-3 or any natural language-producing AI smarter is to combine it with computer vision. Teaching language models to “see” is an increasingly popular area in AI research. The technique combines the strength of language with images. AI language models, including GPT-3, learn through a process called “unsupervised training,” which means they can parse patterns in data without explicit labels. In other words, they don’t need a human to tell them grammatical rules or how words relate to one another, which makes it easier to scale any learning by bombarding the AI with tons of example text. Image models, on the other hand, better reflect our actual reality. However, these require manual labeling, which makes the process slower and more tedious.

Combining the two yields the best of both worlds. A robot that can “see” the world captures a sort of physicality—or common sense—that’s missing from analyzing language alone. One study in 2020 smartly combined both approaches. They started with language, using a scalable approach to write captions for images based on the inner workings of GPT-3 (details here). The takeaway is that the team was able to connect the physical world—represented through images—by linking it with language on how we describe the world.

Translation? A blind, deaf, and utterly quarantined AI learns a sort of common sense. For example, “cats and dogs” can just mean pets, rather than rain.

The trick is still mostly experimental, but it’s an example of thinking outside the artificial confines of a particular AI domain. Combining the two areas, natural language processing and computer vision, works better than either alone. Imagine an Alexa with common sense.

Deep Learning Fatigue

Speaking of thinking outside the box, DeepMind is among those experimenting with combining different approaches to AI into something more powerful. Take MuZero, an Atari-smashing algorithm they released just before Christmas.

Unlike DeepMind’s original Go-, chess-, and shogi-slaying AI wizards, MuZero has another trick up its sleeve. It listens to no one, in that the AI doesn’t start with prior knowledge of the game’s rules or decision-making processes. Rather, it learns without a rulebook, instead observing the game’s environment—akin to a novice human observing a new game. In this way, after millions of games, it doesn’t just learn the rules, but also a more general grasp of policies that help it get ahead and evaluate its own mistakes in hindsight.

Sounds pretty human, eh? In AI vernacular, the engineers combined two different approaches, tree-based search and a learned model, to make an AI great at planning winning moves. For now, it’s only been shown to master games at a level similar to its predecessor AlphaZero. But we can’t wait to see what this sort of cross-fertilization of ideas in AI can lead to in 2021.
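The planning half of that recipe can be caricatured in a few lines. Below, a hypothetical learned model maps (state, action) pairs to imagined outcomes, and a tiny exhaustive tree search picks the action with the best lookahead value (a vastly simplified cousin of what MuZero does, with made-up states and rewards):

```python
# Hypothetical learned model: (state, action) -> (next_state, reward).
# The agent was never told the "rules"; it imagines outcomes with this model.
learned_model = {
    ("start", "left"):  ("cliff", -10.0),
    ("start", "right"): ("path",   +1.0),
    ("path",  "left"):  ("goal",  +10.0),
    ("path",  "right"): ("cliff", -10.0),
}

def plan(state, depth):
    """Exhaustive lookahead through the learned model (a tiny tree search)."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for action in ("left", "right"):
        if (state, action) not in learned_model:
            continue
        next_state, reward = learned_model[(state, action)]
        future_value, _ = plan(next_state, depth - 1)
        if reward + future_value > best_value:
            best_value, best_action = reward + future_value, action
    if best_action is None:  # terminal state: nothing left to imagine
        return 0.0, None
    return best_value, best_action

value, action = plan("start", depth=2)
print(action, value)  # chooses "right", avoiding the cliff by looking ahead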

Image Credit: Oleg Gamulinskiy from Pixabay

Category: Transhumanism

‘The Secret History of the Moon’: The Epic Story of How the Moon Was Made

January 4, 2021 - 16:00

Where did the moon come from?

The leading theory suggests the moon formed after a massive collision between a Mars-sized planet called Theia and Earth in the early days of the solar system. Theia was smashed apart and reformed in Earth’s orbit as the moon. Called the giant impact theory, the general idea is solid, but the exact details remain a work in progress. In recent years, scientists have proposed new ideas to further sharpen science’s best lunar creation story.

In this epic video, filmmaker John D. Boswell explores the secret history of the moon—what we think we know, what still puzzles us, and how new theories may help reconcile the two. As Boswell notes, much mystery remains. To assemble the full history, we need to go back to the moon and dig deeper. Whatever we find, there’s no question the fates of life on Earth and our nearest neighbor have been entwined since nearly the beginning.

Image Credit: melodysheep/John D. Boswell via YouTube

Category: Transhumanism

Scientists Just Created a Catalyst That Turns CO2 Into Jet Fuel

January 3, 2021 - 17:38

Air travel is one of the worst contributors to global warming, burping out nearly a billion tons of CO2 a year. But what if we could close that circle by converting those greenhouse gases back into jet fuel?

In the face of phenomena like climate change, plastic pollution, deforestation, and land degradation, people are increasingly questioning the short-term thinking that underpins our societies. Some have dubbed our current approach a “linear economy,” in which we extract raw materials, process them into products, and then dispose of them once they’ve outlived their usefulness.

As the global population grows, this strategy is becoming increasingly unsustainable. That’s prompting growing interest in a different model known as the “circular economy.” Rather than simply discarding our waste, we find ways to reuse it or recycle it into something more useful.

For years now, chemists have been trying to apply this idea to one of the most environmentally damaging sectors of our economy: the aviation industry. Not only do planes emit huge amounts of CO2, they also pump other greenhouse gases like nitrogen oxide directly into the upper atmosphere, where their warming effect is greatly increased.

The fossil fuels they burn to create all these emissions are hydrocarbons, which means they are made up of a combination of carbon and hydrogen. That’s led some to suggest it might be possible to create synthetic versions of these fuels by capturing the CO2 planes produce and combining it with hydrogen extracted from water.

If the energy used to power these reactions came from renewable sources, their production wouldn’t lead to any increase in emissions. And when these fuels were burned they would simply be returning CO2 captured from the atmosphere, making the fuel effectively carbon neutral.
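A quick back-of-the-envelope mass balance shows the scale of inputs involved. Treating jet fuel as repeating CH2 units (a simplification of the real hydrocarbon mix, not figures from the Oxford paper), the overall reaction is n CO2 + 3n H2 → (CH2)n + 2n H2O:

```python
# Approximate molar masses in g/mol.
M_CO2, M_H2, M_CH2, M_H2O = 44.0, 2.0, 14.0, 18.0

fuel_kg = 1.0
co2_needed = fuel_kg * M_CO2 / M_CH2     # ~3.14 kg CO2 per kg of fuel
h2_needed = fuel_kg * 3 * M_H2 / M_CH2   # ~0.43 kg H2 per kg of fuel
water_out = fuel_kg * 2 * M_H2O / M_CH2  # ~2.57 kg H2O produced per kg of fuel

# Mass in equals mass out: CO2 + H2 balances fuel + water.
assert abs((co2_needed + h2_needed) - (fuel_kg + water_out)) < 1e-9
print(round(co2_needed, 2), round(h2_needed, 2))
```

So every kilogram of synthetic fuel would lock up roughly three kilograms of captured CO2, which is exactly why the capture and hydrogen-production steps dominate the overall cost and energy budget.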

It’s a nice idea, but the process of turning CO2 into useful fuels is more complex than it might sound. Most efforts so far have required expensive catalysts—substances that boost the speed of a chemical reaction—or multiple energy-intensive processing steps, which means the resulting fuel is far pricier than fossil fuels.

Now though, researchers from the University of Oxford have developed a new low-cost catalyst that can directly convert CO2 into jet fuel, which they say could eventually lay the foundation for a circular economy for aviation fuel.

“Instead of consuming fossil crude oil, jet aviation fuels and petrochemical starting compounds are produced from a valuable and renewable raw material, namely, carbon dioxide,” they write in a paper in Nature Communications.

Within a jet fuel CO2 circular economy, the “goods” (here the jet fuel) are continually reprocessed in a closed environment, they add. This would not only save the natural fossil resources and preserve the environment, but would also create new jobs, economies, and markets.

Creating jet fuel is particularly challenging because most routes for synthesizing hydrocarbons from CO2 tend to produce smaller molecules with only a few carbon atoms, like methane and methanol. Jet fuels are made up of molecules with many long chains of carbon atoms, and there have been few successful attempts to produce them directly from CO2 without extra processing.

But by combining findings from previous research, the group was able to create a low-cost iron-based catalyst that could produce substantial yields of jet fuel from CO2 and hydrogen. Iron is already commonly used in these kinds of reactions, but they combined it with manganese, which has been shown to boost the activity of iron catalysts, and potassium, which is known to encourage the formation of longer-chain hydrocarbons.

They prepared the catalysts using an approach known as the Organic Combustion Method (OCM), in which the raw ingredients are combined with citric acid to make a slurry that is then ignited at 662°F (350°C) and burned for four hours to create a fine powder. This is a much simpler processing technique than previous approaches, which means it holds promise for industrial applications.

Scaling up this process to meet the demands of the aviation industry won’t be easy. Boosting the efficiency of the synthesis step is only one part of the puzzle. Collecting large amounts of CO2 from the air is very tricky, and splitting water to make hydrogen also uses a lot of power.

Plans are already afoot to build a pilot plant that will convert CO2 into jet fuel at Rotterdam Airport in the Netherlands, but as Friends of the Earth campaigner Jorien de Lege told the BBC, scaling up the technology will be a herculean task.

“If you think about it, this demonstration plant can produce a thousand liters a day based on renewable energy. That’s about five minutes of flying in a Boeing 747,” she said.

Nonetheless, developing a cheap, high-yield catalyst is a major step towards making the idea more feasible. Getting our planes to fly on thin air may sound like a wildly ambitious idea, but that goal has just come a little bit closer.

Image Credit: Free-Photos from Pixabay

Category: Transhumanism