Transhumanism
Scientists Just Revealed the Most Detailed Geological Model of Earth’s Past 100 Million Years
Earth’s surface is the “living skin” of our planet—it connects the physical, chemical, and biological systems. Over geological time, landscapes change as this surface evolves, regulating the carbon cycle and nutrient circulation as rivers carry sediment into the oceans.
All these interactions have far-reaching effects on ecosystems and biodiversity—the many living things inhabiting our planet.
As such, reconstructing how Earth’s landscapes have evolved over millions of years is a fundamental step towards understanding the changing shape of our planet, and the interaction of things like the climate and tectonics. It can also give us clues on the evolution of biodiversity.
Working with scientists in France (the French National Center for Scientific Research, ENS Paris, the University of Grenoble, and the University of Lyon), our team at the University of Sydney has now published a detailed geological model of Earth’s surface changes in the prestigious journal Science.
Ours is the first dynamic model—a computer simulation—of the past 100 million years at a high resolution down to ten kilometers. In unprecedented detail, it reveals how Earth’s surface has changed over time, and how that has affected the way sediment moves around and settles.
Broken into frames of a million years, our model is based on a framework that incorporates plate tectonic and climatic forces with surface processes such as earthquakes, weathering, changing rivers, and more.
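The full framework is far more than a few lines of code, but landscape evolution models of this kind typically rest on a handful of ingredients: tectonic uplift, river incision (often represented by a stream-power law), and hillslope diffusion. The snippet below is a minimal one-dimensional sketch of those rules with made-up parameter values; it illustrates the general approach, not the authors’ actual model.

```python
import numpy as np

# Minimal 1D sketch of the surface-process rules used in landscape evolution
# models: tectonic uplift, stream-power river incision, hillslope diffusion.
# All parameter values here are illustrative only.

nx, dx = 200, 500.0              # 200 nodes spaced 500 m apart (a 100 km profile)
dt, n_steps = 1000.0, 5000       # 1,000-year time steps, 5 million years in total
uplift = 1e-4                    # rock uplift rate (m/yr)
K, m_exp, n_exp = 2e-6, 0.5, 1.0 # stream-power erodibility and exponents
D = 0.01                         # hillslope diffusivity (m^2/yr)

x = np.arange(nx) * dx
z = np.zeros(nx)                 # start from a flat surface; node 0 is fixed base level
area = (nx - np.arange(nx)) * dx # crude proxy: drainage area grows toward the outlet

for _ in range(n_steps):
    # local channel slope, measured toward the outlet at node 0
    slope = np.zeros(nx)
    slope[1:] = np.maximum((z[1:] - z[:-1]) / dx, 0.0)

    incision = K * area**m_exp * slope**n_exp          # stream-power erosion (m/yr)
    curvature = np.zeros(nx)
    curvature[1:-1] = (z[2:] - 2 * z[1:-1] + z[:-2]) / dx**2
    diffusion = D * curvature                          # hillslope creep (m/yr)

    z += dt * (uplift - incision + diffusion)
    z[0] = 0.0                                         # hold the outlet at sea level

print(f"max elevation after 5 Myr: {z.max():.0f} m")
```

Global models like the one in the paper solve equivalent equations on a spherical mesh at kilometer-scale resolution, coupled to reconstructed plate motions and paleo-climate fields, which is what makes them so computationally demanding.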
Three Years in the Making
The project started about three years ago when we began the development of a new global-scale landscape evolution model, capable of simulating millions of years of change. We also found ways to automatically add other information into our framework, such as paleogeography—the history of Earth’s landscapes.
For this new study, our framework used state-of-the-art plate tectonic reconstructions and simulations of past climates on a global scale.
Our advanced computer simulations used Australia’s National Computational Infrastructure, running on hundreds of computer processors. Each simulation took several days, building a complete picture to reconstruct the past 100 million years of Earth’s surface evolution.
All this computing power has resulted in global high-resolution maps that show the highs and lows of Earth’s landscapes (elevation), as well as the flows of water and sediment.
All of these fit well with existing geological observations. For instance, we combined data from present-day river sediment and water flows, drainage basin areas, seismic surveys, and long-term local and global erosion trends.
Our main outputs are available as time-based global maps at five-million-year intervals from the Open Science Framework.
Water and Sediment Flux Through Space and Time
One of Earth’s fundamental surface processes is erosion, a slow process in which materials like soil and rock are worn and carried away by wind or water. This results in sediment flows.
Erosion plays an important role in Earth’s carbon cycle—the never-ending global circulation of one of life’s essential building blocks, carbon. Investigating the way sediment flows have changed through space and time is crucial for our understanding of how Earth’s climates have varied in the past.
We found that our model reproduces the key elements of Earth’s sediment transport, from catchment dynamics depicting river networks over time to the slow changes of large-scale sedimentary basins.
From our results, we also found several inconsistencies between existing observations of rock layers (strata), and predictions of such layers. This shows our model could be useful for testing and refining reconstructions of past landscapes.
Our simulated past landscapes are fully integrated with the various processes at play, especially the hydrological system—the movement of water—providing a more robust and detailed view of Earth’s surface.
Our study reveals more detail on the role that Earth’s constantly evolving surface has played in the movement of sediments from mountaintops to ocean basins, ultimately regulating the carbon cycle and Earth’s climate fluctuations through deep time.
As we explore these results in tandem with the geological record, we will be able to answer long-standing questions about various crucial features of the Earth system—including the way our planet cycles nutrients, and has given rise to life as we know it.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Sander Lenaerts on Unsplash
This New Material Absorbs Three Times More CO2 Than Current Carbon Capture Tech
According to the IEA, there are currently 18 direct air capture plants in operation around the world. They’re located in Europe, Canada, and the US, and most of them use the CO2 for commercial purposes, with a couple storing it away for all eternity. Direct air capture (DAC) is a controversial technology, with opponents citing its high cost and energy usage. Indeed, when you consider the amount of CO2 in the atmosphere relative to the amount that any single DAC plant—or many of them collectively—can capture, and hold that up against their cost, it seems a bit silly to even be trying.
But given the lack of other great options available to stop the planet from bursting into flames, both the Intergovernmental Panel on Climate Change and the International Energy Agency say we shouldn’t discard DAC just yet—on the contrary, we should be trying to find ways to cut its costs and up its efficiency. A team from Lehigh University and Tianjin University has made one such breakthrough, developing a material they say can capture three times as much carbon as those currently in use.
Described in a paper published today in Science Advances, the material could make DAC a far more viable technology by eliminating some of its financial and practical obstacles, the team says.
Many of the carbon capture plants that are currently operational or under construction (including Iceland’s Orca and Mammoth and Wyoming’s Project Bison) use solid DAC technology: blocks of fans push air through sorbent filters that chemically bind with CO2. The filters need to be heated and placed under a vacuum to release the CO2, which must then be compressed under extremely high pressure.
These last steps are what drive carbon capture’s energy use and costs so high. The CO2 in Earth’s atmosphere is very diluted; according to the paper’s authors, its average concentration is about 400 parts per million. That means a lot of air needs to be blown through the sorbent filters for them to capture just a little CO2. Since it takes so much energy to separate the captured CO2 (called the “desorption” process), we want as much CO2 as possible to be captured in the first place.
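To get a sense of how dilute 400 parts per million really is, a quick back-of-the-envelope calculation (assuming air near room temperature and ideal-gas behavior) shows how little CO2 each cubic meter of air actually carries:

```python
# Rough estimate of how much CO2 a cubic meter of air contains at ~400 ppm.
# Assumes ideal-gas behavior at about 25 degrees C and 1 atm; numbers are approximate.

ppm_co2 = 400e-6          # volume fraction of CO2 in air
molar_volume = 0.02445    # m^3 per mole of gas at ~25 C, 1 atm
molar_mass_co2 = 44.01    # g per mole

moles_gas_per_m3 = 1.0 / molar_volume                    # ~41 mol of gas per m^3
grams_co2_per_m3 = moles_gas_per_m3 * ppm_co2 * molar_mass_co2

print(f"CO2 per cubic meter of air: ~{grams_co2_per_m3:.2f} g")
# -> roughly 0.7 g, so capturing a single tonne of CO2 means moving on the
#    order of a million cubic meters of air through the filters, even at
#    100 percent capture efficiency.
```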
The Lehigh-Tianjin team created what they call a hybrid sorbent. They started with a synthetic resin, which they soaked in a copper-chloride solution. The copper acts as a catalyst for the reaction that causes CO2 to bind to the resin, making the reaction go faster and use less energy. Besides being mechanically strong and chemically stable, the sorbent can be regenerated using salt solutions—including seawater—at temperatures lower than 90 degrees Celsius.
The team reported that one kilogram of their material was able to absorb 5.1 mol of CO2; in comparison, most solid sorbents currently in use for DAC have absorption capacities of 1.0 to 1.5 mol per kilogram. In between capture cycles they used seawater to regenerate the capture column, repeating the cycle 15 times without a noticeable decrease in the amount of CO2 the material was able to capture.
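Converted into more familiar units, those figures imply roughly three to five times more CO2 captured per kilogram of sorbent in each cycle. A quick conversion using only the numbers reported above:

```python
# Convert the reported sorbent capacities from mol/kg to grams of CO2 per kg.
molar_mass_co2 = 44.01         # g/mol

hybrid_capacity = 5.1          # mol CO2 per kg for the new hybrid sorbent
typical_capacity = (1.0, 1.5)  # mol CO2 per kg for sorbents currently used in DAC

print(f"hybrid sorbent:   ~{hybrid_capacity * molar_mass_co2:.0f} g CO2/kg")
print(f"typical sorbents: ~{typical_capacity[0] * molar_mass_co2:.0f}"
      f"-{typical_capacity[1] * molar_mass_co2:.0f} g CO2/kg")
# -> ~224 g/kg versus ~44-66 g/kg, i.e. roughly 3-5x more CO2 per capture cycle.
```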
The main byproduct of the chemical reaction was carbonic acid, which the team noted can be easily neutralized into baking soda and deposited in the ocean. “Spent regenerant can be safely returned to the sea, an infinite sink for captured CO2,” they wrote. “Such a sequestration technique will also eliminate the energy needed for pressurizing and liquefying CO2 before deepwell injection.” This method would be most relevant in locations close to an ocean where geological storage—that is, injecting CO2 underground to turn it into rock—isn’t possible.
Using this newly-created material in large-scale carbon capture operations could be a game-changer. Not only would the sorbent be cheap and scalable to manufacture, it would also capture more CO2 and require less energy.
But would all that be enough to make direct air capture worthwhile, and truly put a dent in atmospheric CO2? To put it bluntly, probably not. Right now the world’s DAC facilities collectively capture about 0.01 million metric tons of CO2 per year. The IEA’s 2022 report on the technology estimates we’ll need to be capturing 85 million metric tons annually by 2030 to avoid the worst impacts of climate change.
No matter which way you do the math, it seems like a long shot; rather than a material that absorbs three times as much CO2 per unit, we need one that absorbs 3,000 times as much. But as we’ve witnessed throughout history, most scientific advances happen incrementally, not all at once. If we’re to reach a point where direct air capture is a true solution, it will take many more baby steps—like this one—to get there.
Image Credit: Michaela / Pixabay
Biocomputing With Mini-Brains as Processors Could Be More Powerful Than Silicon-Based AI
The human brain is a master of computation. It’s no wonder that from brain-inspired algorithms to neuromorphic chips, scientists are borrowing the brain’s playbook to give machines a boost.
Yet the results—in both software and hardware—only capture a fraction of the computational intricacies embedded in neurons. But perhaps the major roadblock in building brain-like computers is that we still don’t fully understand how the brain works. For example, how does its architecture—defined by pre-established layers, regions, and ever-changing neural circuits—make sense of our chaotic world with high efficiency and low energy usage?
So why not sidestep this conundrum and use neural tissue directly as a biocomputer?
This month, a team from Johns Hopkins University laid out a daring blueprint for a new field of computing: organoid intelligence (OI). Don’t worry—they’re not talking about using living human brain tissue hooked up to wires in jars. Rather, as in the name, the focus is on a surrogate: brain organoids, better known as “mini-brains.” These pea-sized nuggets roughly resemble the early fetal human brain in their gene expression, wide variety of brain cells, and organization. Their neural circuits spark with spontaneous activity, ripple with brain waves, and can even detect light and control muscle movement.
In essence, brain organoids are highly-developed processors that duplicate the brain to a limited degree. Theoretically, different types of mini-brains could be hooked up to digital sensors and output devices—not unlike brain-machine interfaces, but as a circuit outside the body. In the long term, they may connect to each other in a super biocomputer trained using biofeedback and machine learning methods to enable “intelligence in a dish.”
Sound a bit creepy? I agree. Scientists have long debated where to draw the line; that is, when the mini-brain becomes too similar to a human one, with the hypothetical nightmare scenario of the nuggets developing consciousness.
The team is well aware. As part of organoid intelligence, they highlight the need for “embedded ethics,” with a consortium of scientists, bioethicists, and the public weighing in throughout development. But to senior author Dr. Thomas Hartung, the time for launching organoid intelligence research is now.
“Biological computing (or biocomputing) could be faster, more efficient, and more powerful than silicon-based computing and AI, and only require a fraction of the energy,” the team wrote.
A Brainy Solution
Using brain tissue as computational hardware may seem bizarre, but there’ve been previous pioneers. In 2022, the Australian company Cortical Labs taught hundreds of thousands of isolated neurons in a dish to play Pong inside a virtual environment. The neurons, connected to silicon chips and guided by deep learning algorithms, formed a “synthetic biological intelligence platform” that captured basic neurobiological signs of learning.
Here, the team took the idea a step further. If isolated neurons could already support a rudimentary form of biocomputing, what about 3D mini-brains?
Since their debut a decade ago, mini-brains have become darlings for examining neurodevelopmental disorders such as autism and testing new drug treatments. Often grown from a patient’s skin cells—transformed into induced pluripotent stem cells (iPSCs)—the organoids are especially powerful for mimicking a person’s genetic makeup, including their neural wiring. More recently, human organoids partially restored damaged vision in rats after integrating with their host neurons.
In other words, mini-brains are already building blocks for a plug-and-play biocomputing system that readily connects with biological brains. So why not leverage them as processors for a computer? “The question is: can we learn from and harness the computing capacity of these organoids?” the team asked.
A Hefty Blueprint
Last year, a group of biocomputing experts united in the first organoid intelligence workshop in an effort to form a community tackling the use and implications of mini-brains as biocomputers. The overarching theme, consolidated into “the Baltimore declaration,” was collaboration. A mini-brain system needs several components: devices to detect input, the processor, and a readable output.
In the new paper, Hartung envisions four trajectories to accelerate organoid intelligence.
The first focuses on the critical component: the mini-brain. Although densely packed with brain cells that support learning and memory, organoids are still difficult to culture on a large scale. An early key aim, explained the authors, is scaling up.
Microfluidic systems, which act as “nurseries,” also need to improve. These high-tech bubble baths provide nutrients and oxygen to keep burgeoning mini-brains alive and healthy while removing toxic waste, giving them time to mature. The same system can also pump neurotransmitters—molecules that bridge communication between neurons—into specific regions to modify their growth and behavior.
Scientists can then monitor growth trajectories using a variety of electrodes. Although most are currently tailored for 2D systems, the team and others are leveling up with 3D interfaces specifically designed for organoids, inspired by EEG (electroencephalogram) caps with multiple electrodes placed in a spherical shape.
Then comes the decoding of signals. The second trajectory is all about deciphering the whens and wheres of neural activity inside the mini-brains. When zapped with certain electrical patterns—for example, those that encourage the neurons to play Pong—do they output the expected results?
It’s another hard task; learning changes neural circuits on multiple levels. So what to measure? The team suggests digging into multiple levels, including altered gene expression in neurons and how they connect into neural networks.
Here is where AI and collaboration can make a splash. Biological neural networks are noisy, so multiple trials are needed before “learning” becomes apparent—in turn generating a deluge of data. To the team, machine learning is the perfect tool to extract how different inputs, processed by the mini-brain, transform into outputs. Similar to large-scale neuroscience projects such as the BRAIN Initiative, scientists can share their organoid intelligence research in a community workspace for global collaborations.
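To give a flavor of that machine-learning step, the sketch below simulates a toy version of the problem: two stimulation patterns delivered over many noisy trials, and a simple classifier asked to recover which input produced which recorded response. The data are entirely synthetic; the point is only that many repeated trials plus standard ML tooling can expose an input-to-output mapping hidden in noise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated experiment: two stimulation patterns ("A" and "B"), each delivered
# over many trials; each trial yields noisy firing rates from 32 recording channels.
rng = np.random.default_rng(42)
n_trials, n_channels = 200, 32

pattern_a = rng.normal(size=n_channels)                     # mean response to input A
pattern_b = pattern_a + 0.3 * rng.normal(size=n_channels)   # input B differs only slightly

labels = rng.integers(0, 2, size=n_trials)                  # which input was delivered
means = np.where(labels[:, None] == 0, pattern_a, pattern_b)
recordings = means + 1.0 * rng.normal(size=(n_trials, n_channels))  # heavy trial-to-trial noise

# Can a simple decoder tell the two inputs apart from the noisy responses?
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, recordings, labels, cv=5).mean()
print(f"decoding accuracy over {n_trials} noisy trials: {accuracy:.0%}")
```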
Trajectory three is further in the future. With efficient and long-lasting mini-brains and measuring tools in hand, it’s possible to test more complex inputs and see how the stimulation feeds back into the biological processor. For example, does it make its computation more efficient? Different types of organoids—say, those that resemble the cortex and the retina—can be interconnected to build more complex forms of organoid intelligence. These could help “empirically test, explore, and further develop neurocomputational theories of intelligence,” the authors wrote.
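In the abstract, “feeding the stimulation back” is a closed loop: record activity, score it against a target, and adjust the next input accordingly. Here is a toy simulation of that loop in which the “organoid” is just a random linear mapping plus noise; it illustrates the control-loop idea, not any real stimulation protocol.

```python
import numpy as np

# Toy closed-loop "feedback training" simulation. The "organoid" is a fixed random
# linear mapping from a 16-channel stimulation pattern to an 8-channel activity
# readout, plus noise. The loop keeps whichever input tweaks move the evoked
# response closer to a target pattern. Purely illustrative.

rng = np.random.default_rng(0)
organoid = rng.normal(size=(8, 16))   # synthetic stimulus -> response mapping
target = rng.normal(size=8)           # the activity pattern we want to evoke

def respond(stimulus):
    """Synthetic organoid response to a stimulation pattern (with noise)."""
    return organoid @ stimulus + 0.1 * rng.normal(size=8)

stimulus, best_error = np.zeros(16), np.inf
for trial in range(500):
    candidate = stimulus + 0.05 * rng.normal(size=16)      # tweak the input pattern
    error = np.linalg.norm(respond(candidate) - target)    # distance from target behavior
    if error < best_error:                                  # keep tweaks that help
        stimulus, best_error = candidate, error

print(f"response error after 500 feedback trials: {best_error:.2f}")
```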
Intelligence on Demand?
The fourth trajectory is the one that underlines the entire project: the ethics of using mini-brains for biocomputing.
As brain organoids increasingly resemble the brain—so much so that they can integrate and partially restore a rodent’s injured visual system—scientists are asking if they may gain a sort of awareness.
To be clear, there is no evidence that mini-brains are conscious. But “these concerns will mount during the development of organoid intelligence, as the organoids become structurally more complex, receive inputs, generate outputs, and—at least theoretically—process information about their environment and build a primitive memory,” the authors said. However, the goal of organoid intelligence isn’t to recreate human consciousness—rather, it’s to mimic the brain’s computational functions.
The mini-brain processor is hardly the only ethical concern. Another is cell donation. Because mini-brains retain their donor’s genetic makeup, there’s a risk of selection bias and of limiting the neurodiversity represented in the research.
Then there’s the problem of informed consent. As history with the famous cancer cell line HeLa cells has shown, cell donation can have multi-generational impacts. “What does the organoid exhibit about the cell donor?” the authors asked. Will researchers have an obligation to inform the donor if they discover neurological disorders during their research?
To navigate the “truly uncharted territory,” the team proposes an embedded ethics approach. At each step, bioethicists will collaborate with research teams to map out potential issues iteratively while gathering public opinions. The strategy is similar to other controversial topics, such as genetic editing in humans.
A mini-brain-powered computer is years away. “It will take decades before we achieve the goal of something comparable to any type of computer,” said Hartung. But it’s time to start—launching the program, consolidating multiple technologies across fields, and engaging in ethical discussions.
“Ultimately, we aim toward a revolution in biological computing that could overcome many of the limitations of silicon-based computing and AI and have significant implications worldwide,” the team said.
Image Credit: Jesse Plotkin/Johns Hopkins University
New Results From NASA’s DART Mission Confirm We Could Deflect Deadly Asteroids
What would we do if we spotted a hazardous asteroid on a collision course with Earth? Could we deflect it safely to prevent the impact?
Last year, NASA’s Double Asteroid Redirection Test (DART) mission tried to find out whether a “kinetic impactor” could do the job: smashing a 600-kilogram spacecraft the size of a fridge into an asteroid the size of the Roman Colosseum.
Early results from this first real-world test of our potential planetary defense systems looked promising. However, it’s only now that the first scientific results are being published: five papers in Nature have recreated the impact, and analyzed how it changed the asteroid’s momentum and orbit, while two studies investigate the debris knocked off by the impact.
The conclusion: “Kinetic impactor technology is a viable technique to potentially defend Earth if necessary.”
Small Asteroids Could Be Dangerous, but Hard to Spot
Our Solar System is full of debris, left over from the early days of planet formation. Today, some 31,360 asteroids are known to loiter around Earth’s neighborhood.
Asteroid statistics and the threats posed by asteroids of different sizes. Image Credit: NASA’s DART press brief
Although we keep tabs on most of the big, kilometer-sized ones that could wipe out humanity if they hit Earth, most of the smaller ones go undetected.
Just over 10 years ago, an 18-meter asteroid exploded in our atmosphere over Chelyabinsk, Russia. The shockwave smashed thousands of windows, wreaking havoc and injuring some 1,500 people.
A 150-meter asteroid like Dimorphos wouldn’t wipe out civilization, but it could cause mass casualties and regional devastation. However, these smaller space rocks are harder to find: we think we have only spotted around 40 percent of them so far.
The DART Mission
Suppose we did spy an asteroid of this scale on a collision course with Earth. Could we nudge it in a different direction, steering it away from disaster?
Hitting an asteroid with enough force to change its orbit is theoretically possible, but can it actually be done? That’s what the DART mission set out to determine.
Specifically, it tested the “kinetic impactor” technique, which is a fancy way of saying “hitting the asteroid with a fast-moving object.”
The asteroid Dimorphos was a perfect target. It was in orbit around its larger cousin, Didymos, in a loop that took just under 12 hours to complete.
The impact from the DART spacecraft was designed to slightly change this orbit, slowing it down just a little so that the loop would shrink, shaving an estimated seven minutes off its round trip.
A Self-Steering Spacecraft
For DART to show the kinetic impactor technique is a possible tool for planetary defense, it needed to demonstrate two things: that its navigation system could autonomously maneuver and target an asteroid during a high-speed encounter, and that such an impact could change the asteroid’s orbit.
In the words of Cristina Thomas of Northern Arizona University and colleagues, who analyzed the changes to Dimorphos’ orbit as a result of the impact, “DART has successfully done both.”
The DART spacecraft steered itself into the path of Dimorphos with a new system called Small-body Maneuvering Autonomous Real Time Navigation (SMART Nav), which used the onboard camera to get into a position for maximum impact.
More advanced versions of this system could enable future missions to choose their own landing sites on distant asteroids where we can’t image the rubble-pile terrain well from Earth. This would save the trouble of a scouting trip first!
Dimorphos itself was one such asteroid before DART. A team led by Terik Daly of Johns Hopkins University has used high-resolution images from the mission to make a detailed shape model. This gives a better estimate of its mass, improving our understanding of how these types of asteroids will react to impacts.
Dangerous Debris
The impact itself produced an incredible plume of material. Jian-Yang Li of the Planetary Science Institute and colleagues have described in detail how the ejected material was kicked up by the impact and streamed out into a 1,500-kilometer tail of debris that could be seen for almost a month.
The DART impact blasted a vast plume of dust and debris from the surface of the asteroid Dimorphos. Image Credit: CTIO / NOIRLab / SOAR / NSF / AURA / T. Kareta (Lowell Observatory), M. Knight (US Naval Academy)
Streams of material from comets are well known and documented. They are mainly dust and ice and are seen as harmless meteor showers if they cross paths with Earth.
Asteroids are made of rockier, stronger stuff, so their streams could pose a greater hazard if we encounter them. Recording a real example of the creation and evolution of debris trails in the wake of an asteroid is very exciting. Identifying and monitoring such asteroid streams is a key objective of planetary defense efforts such as the Desert Fireball Network we operate from Curtin University.
A Bigger Than Expected Result
So how much did the impact change Dimorphos’ orbit? By much more than expected. Rather than shrinking by 7 minutes, the orbit had become 33 minutes shorter!
This larger-than-expected result shows the change in Dimorphos’ orbit was not just from the impact of the DART spacecraft. The larger part of the change was due to a recoil effect from all the ejected material flying off into space, which Ariel Graykowski of the SETI Institute and colleagues estimated as between 0.3 percent and 0.5 percent of the asteroid’s total mass.
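Kepler’s third law (T² ∝ a³) gives a quick sense of scale for that 33-minute shift in a roughly 12-hour orbit. The sketch below uses only the figures quoted above, taking the pre-impact period as just under 12 hours:

```python
# Relative size of the orbit change, using Kepler's third law (T^2 proportional to a^3)
# and only the figures quoted in the article.

period_before_min = 11.9 * 60     # pre-impact orbit: just under 12 hours, in minutes
period_change_min = 33.0          # measured shortening after the impact

dT_over_T = period_change_min / period_before_min
# For small changes, T^2 ~ a^3 gives  da/a ~ (2/3) * dT/T.
da_over_a = (2.0 / 3.0) * dT_over_T

print(f"orbital period shortened by ~{dT_over_T:.1%}")              # ~4.6%
print(f"implied shrinkage of the orbit itself: ~{da_over_a:.1%}")   # ~3.1%
```

A few percent may not sound like much, but for a collision-course asteroid a few percent applied early enough is the difference between a hit and a miss.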
A First Success
The success of NASA’s DART mission is the first demonstration of our ability to protect Earth from the threat of hazardous asteroids.
At this stage, we still need quite a bit of warning to use this kinetic impactor technique. The earlier we intervene in an asteroid’s orbit, the smaller the change we need to make to push it away from hitting Earth. (To see how it all works, you can have a play with NASA’s NEO Deflection app.)
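A crude back-of-the-envelope estimate shows why lead time matters so much: even ignoring the orbital mechanics that amplify the effect, a tiny velocity change applied years in advance accumulates into a large miss distance. The delta-v value below is illustrative, not taken from the DART results.

```python
# Why early warning matters: along-track drift ~ delta-v * lead time.
# This ignores the orbital-mechanics amplification of the drift, so it
# understates the real benefit; the delta-v value is purely illustrative.

SECONDS_PER_YEAR = 3.156e7
delta_v = 0.01  # a 1 cm/s nudge, in m/s

for lead_years in (1, 5, 10, 20):
    drift_km = delta_v * lead_years * SECONDS_PER_YEAR / 1000.0
    print(f"{lead_years:>2} years of warning -> ~{drift_km:,.0f} km of drift")

# Earth's radius is ~6,371 km: even without amplification, a couple of decades
# of lead time turns a 1 cm/s nudge into an Earth-radius-scale miss distance.
```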
But should we? This is a question that will need answering if we ever do have to redirect a hazardous asteroid. In changing the orbit, we’d have to be sure we weren’t going to push it in a direction that would hit us in future too.
However, we are getting better at detecting asteroids before they reach us. We have seen two in the past few months alone: 2022 WJ1, which impacted over Canada in November, and Sar2667, which came in over France in February.
We can expect to detect a lot more in future, with the opening of the Vera Rubin Observatory in Chile at the end of this year.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: CTIO / NOIRLab / SOAR / NSF / AURA/ T. Kareta (Lowell Observatory), M. Knight (US Naval Academy)
[PDF] Data Science market Research Report 2020: size, share, opportunities, and forecast 2030
The Data Science Market includes a wide range of products and services, including data analytics software, data visualization tools, machine learning …
Link to Full Article: Read Here
How the first chatbot predicted the dangers of AI more than 50 years ago – Vox
… how will our escalating relationship with artificial intelligences … artificial intelligence, morality, and the biggest threats to society.
Link to Full Article: Read Here
Defence orders machine learning research from Uni SA and Deakin – InnovationAus.com
The Department of Defence is tipping $1.7 million into two university research projects to develop a machine learning algorithm for wearable …
Link to Full Article: Read Here
One of the biggest autonomous transportation tests is operating deep underwater – CNBC
China recently completed construction on the Zhu Hai Yun, an unmanned ship made to transport drones and AUVs that utilizes artificial intelligence …
Link to Full Article: Read Here
Top 10 best AI tools – TechStory
An open-source deep learning framework called Apache MXNet creates and trains neural networks. Although the Apache Software Foundation currently owns …
Link to Full Article: Read Here
Data Book podcast: Ajay Khanna, Tellius CEO, talks about ‘decision intelligence‘
In the latest episode, Ajay Khanna explains how healthcare organizations can use artificial intelligence to gain new insights into their business.
Link to Full Article: Read Here
Europe’s AI weaknesses could matter less in generative world, says Insight Partners – Sifted
… use of data — which is much stricter in Europe, making it harder to train machine learning models on lots of information than in the States.
Link to Full Article: Read Here
Elon Musk wants AI devs to build ‘anti-woke’ ChatGPT bot • The Register
… it – is key to improving the performance of machine learning models, … to use artificial intelligence," both nationally and internationally, …
Link to Full Article: Read Here
Artificial Intelligence for Evaluation of Retinal Vasculopathy in Facioscapulohumeral … – MDPI
Facioscapulohumeral muscular dystrophy (FSHD) is a slowly progressive muscular dystrophy with a wide range of manifestations including retinal …
Link to Full Article: Read Here
AI and microscopy may revolutionize study of cells, molecule behavior | Laser Focus World
“AI has become increasingly used to analyze images obtained from digital microscopy, following the deep-learning revolution,” says Jesús Pineda, …
Link to Full Article: Read Here
Monetizing the Math: AI Strategies that Boost ROI for Enterprise Investments – Spiceworks
Business leaders are drawn to Artificial Intelligence to generate new revenue, save money, expand infrastructure to serve customers, and establish …
Link to Full Article: Read Here
Targeted Reminders Increase Prescriptions for High-Intensity Statins | DAIC
Overall, the study, which is among the largest to date to use machine learning-generated reminders to influence clinicians' prescribing practices, …
Link to Full Article: Read Here
Tackling Artificial Intelligence: How to Use AI to Improve Business Endeavors
The fact that AI is even using other AI to improve its machine learning shows how effective artificial intelligence can be at consolidating and …
Link to Full Article: Read Here
Starting with SNOWFLAKE – DataDrivenInvestor
Snowflake's machine learning capabilities include pre-built algorithms and models, and support for popular machine learning frameworks, such as …
Link to Full Article: Read Here
