Wednesday, 20 November 2024

Ancient Aztec 'skull whistles' found to instill fear in modern people

Nov. 18, 2024, by B. Yirka, Phys.org

Original exemplars and replicas of Aztec skull whistles. 
Credit: Communications Psychology (2024). DOI: 10.1038/s44271-024-00157-7

A team of cognitive neuroscientists at the University of Zurich has found that ancient Aztec "skull whistles" recovered from gravesites can instill fear in modern listeners. In their study, published in the journal Communications Psychology, the group recorded the neural and psychological responses of volunteers as they listened to the screams produced by the whistles.

In excavating ancient Aztec graves dating from 1250 to 1521 AD, archaeologists have found many examples of small clay whistles shaped like a skull. These whistles still work today as they did when they were buried beside a person in a grave, producing a sound most often described as a kind of scream.

Prior research has shown that the sound is produced by streams of air pushed through different parts of the whistle that then collide with one another. In this new study, the research team sought to uncover the reason behind the creation and use of the whistle.

Anecdotal evidence has suggested that hearing the whistle can induce a sense of alarm or fear in those nearby; hence its name. To obtain a more measurable result, the researchers recruited several European volunteers, each of whom was monitored with a device that could record neural and psychological responses while they heard the whistle. The researchers also asked the volunteers to describe whatever sensations they were feeling.

The volunteers exhibited similar reactions—certain low-level cortical auditory regions of the brain became instantly activated, indicating that they were on high alert. The volunteers also reported that the sound was frightening and aversive—they wanted it to stop. The researchers further found that the whistle sound tended to confuse the brain, leaving it reeling momentarily. This, they suggest, hints at the possibility that the whistle was used during ceremonies surrounding the dead, possibly as a way to frighten attendees.



Recommend this post and follow
The birth of modern Man

Research reveals even single-cell organisms exhibit habituation, a simple form of learning

Nov. 19, 2024, by Harvard Medical School

Microscopy image of the single-celled ciliate Stentor roeseli. 
Credit: Joseph Dexter



A dog learns to sit on command, a person hears and eventually tunes out the hum of a washing machine while reading … The capacity to learn and adapt is central to evolution and, indeed, survival.

Habituation—adaptation's less-glamorous sibling—is a lessening of the response to a stimulus after repeated exposure. Think of the need for a third espresso to maintain the same level of concentration you once achieved with a single shot.

Up until recently, habituation—a simple form of learning—was deemed the exclusive domain of complex organisms with brains and nervous systems, such as worms, insects, birds, and mammals.

But a new study, published Nov. 19 in Current Biology, offers compelling evidence that even tiny single-cell creatures such as ciliates and amoebae, as well as the cells in our own bodies, could exhibit habituation akin to that seen in more complex organisms with brains.

The work, led by scientists at Harvard Medical School and the Center for Genomic Regulation (CRG) in Barcelona, suggests that single cells are capable of behaviors more complex than currently appreciated.

"This finding opens up an exciting new mystery for us: How do cells without brains manage something so complex?" said study senior author Jeremy Gunawardena, associate professor of systems biology at the Blavatnik Institute at HMS. He co-led the study with Rosa Martinez Corral, a former post-doctoral researcher in his lab who now leads a research group in systems and synthetic biology at CRG.

The results add to a small but growing body of work on this subject. Earlier work led by Gunawardena found that a single-cell ciliate showed avoidance behavior, not unlike the actions observed in animals that encounter unpleasant stimuli.

What the researchers discovered

Instead of studying cells in a lab dish, the scientists used advanced computer modeling to analyze how molecular networks inside ciliate and mammalian cells respond to different patterns of stimulation. They found four networks that exhibit hallmarks of habituation present in animal brains.

These networks shared a common feature: Each molecular network had two forms of "memory" storage that captured information learned from the environment. One memory decayed much faster than the other—a form of memory loss necessary for habituation, the researchers noted. This finding suggests that single cells process and remember information over different time spans.
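The two-timescale memory idea can be sketched with a toy model (this is illustrative only, not the molecular networks the researchers analyzed): a fast and a slow memory store both accumulate with each stimulus and suppress the response, but they decay at very different rates.

```python
import math

def simulate(n_stimuli, dt=1.0, tau_fast=2.0, tau_slow=50.0):
    """Toy habituation model with two memory stores of different lifetimes."""
    fast, slow = 0.0, 0.0
    responses = []
    for _ in range(n_stimuli):
        # more remembered stimulation -> weaker response
        response = 1.0 / (1.0 + fast + slow)
        responses.append(response)
        fast += 1.0   # fast store charges strongly with each stimulus
        slow += 0.1   # slow store charges weakly
        # both memories decay between stimuli, the fast one much quicker
        fast *= math.exp(-dt / tau_fast)
        slow *= math.exp(-dt / tau_slow)
    return responses, fast, slow

resp, fast, slow = simulate(20)
# repeated stimulation shrinks the response (habituation)
assert resp[0] > resp[5] > resp[19]

# after a long pause the fast memory is gone but the slow one lingers,
# so the response partially recovers -- a hallmark of habituation
pause = 20.0
fast *= math.exp(-pause / 2.0)
slow *= math.exp(-pause / 50.0)
recovered = 1.0 / (1.0 + fast + slow)
assert resp[19] < recovered < resp[0]
```

Because one store forgets quickly while the other lingers, the model both habituates and shows partial spontaneous recovery, echoing the "two forms of memory decaying at different rates" the study identifies.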

Studying habituation in single cells could help propel understanding of how learning in general works, the researchers said. The findings also cast the humble single-cell creatures in a new, more tantalizing light: They are not merely molecular machines packed in microscopic bodies, but they are also agents that can learn.

But what about more practical applications?

The researchers caution that these remain purely speculative for now. Yet one daring idea would be to apply the concept of habituation to the relationship between cancer and immunity.

Tumors are notoriously good evaders of immune surveillance because they trick immune cells into viewing them as innocent bystanders. In other words, the immune cells responsible for recognizing cancer may somehow become habituated to the presence of a cancer cell—the immune cell gets used to the stimulus and no longer responds to it.

"It's akin to delusion. If we knew how these false perceptions get encoded in immune cells, we may be able to re-engineer them so that immune cells begin to perceive their environments correctly, the tumor becomes visible as malign, and they get to work," Gunawardena said.

"It is a fantasy right now, but it is a direction I would love to explore down the road."




Hunter-gatherer study helps explain how children have learned for 99% of human history

Nov. 19, 2024, by W. Ferguson, Washington State U.

An Aka man shows a group of children how to weave a hunting net. 
Credit: WSU

Unlike children in the United States, hunter-gatherer children in the Congo Basin often learn how to hunt, identify edible plants, and care for babies by the tender age of six or seven.

This rapid learning is facilitated by a unique social environment where cultural knowledge is passed down not just from parents but from the broader community, according to a new Washington State University-led study in the Proceedings of the National Academy of Sciences.

The research helps explain how many cultural traits have been preserved for thousands of years among hunter-gatherer groups across a wide range of natural environments in Africa.

"We focus on hunter-gatherers because this way of life characterized 99% of human history," said Barry Hewlett, a professor of anthropology at WSU and lead author of the study.

"Our bodies and minds are adapted to this intimate, small group living, rather than to contemporary urban life. By examining how children in these societies learn, we aim to uncover the mechanisms that have allowed humans to adapt to diverse environments across the globe."

For the study, Hewlett and colleagues used observational and ethnographic data to examine nine different modes of cultural transmission, meaning from whom and how children learn, in hunter-gatherer societies.

A young adolescent Aka boy at the front door of the WSU field station house in the Central African Republic. 
Credit: WSU



Their analysis reveals that members of a child's extended family have likely played a greater role in transmitting knowledge to children than previously thought. Additionally, the study shows that about half of the cultural knowledge hunter-gatherer children and adolescents acquire comes from people they are not related to. This contrasts with previous studies on the topic, which have more heavily emphasized the transmission of knowledge from parent to child.

Hewlett explains that the findings are likely due in large part to how children in hunter-gatherer societies learn from a variety of sources, including parents, peers and even unrelated adults in the community. This contrasts with the Western nuclear family model, where learning is often centered around parents or teachers in a formalized school setting.

The broad informal learning network in hunter-gatherer societies is made possible by intimate living conditions. Small camps, usually consisting of 25–35 individuals living in homes a few feet from each other, create an environment where children can observe and interact with a wide range of people. This allows them to learn essential skills, including caring for infants and cooking as well as hunting and gathering, through a process that is often subtle and nonverbal.

The study also highlights the importance of egalitarianism, respect for individual autonomy, and extensive sharing in shaping how cultural knowledge is passed down among hunter-gatherers. For example, children learn the importance of equality and autonomy by observing the behavior of adults and children around them. They are not coerced into learning but are given the freedom to explore and practice skills on their own, fostering a deep understanding of their culture.

Two young adolescent Aka boys getting ready to go net hunting. 
Credit: WSU



"This approach to learning contributes to what we call 'cumulative culture'—the ability to build on existing knowledge and pass it down through generations," Hewlett said.


"Unlike in many non-human animals, where social learning is limited to a few skills, humans have developed complex mental and social structures that allow for the transmission of thousands of cultural traits. This has enabled us to innovate and adapt to various environments, from dense forests to arid deserts."

Moving forward, Hewlett hopes that this research offers a more nuanced understanding of the nature of social learning in humans and of how cultures in general are conserved and change over time. His co-authors on the study are Adam Boyette, Max Planck Institute for Evolutionary Anthropology; Sheina Lew-Levy, Durham University Department of Anthropology; Sandrine Gallois, Autonomous University of Barcelona Institute of Environmental Science and Technology; and Samuel Dira, Hawassa University Department of Anthropology.



Tuesday, 19 November 2024

Saltwater flooding is a serious fire threat for EVs and other devices with lithium-ion batteries

NOV. 18, 2024, by X. Huang, The Conversation

Most electric vehicles and plug-in hybrid cars use arrays of lithium-ion batteries like these. 
Credit: DOE

Flooding from hurricanes Helene and Milton inflicted billions of dollars in damage across the Southeast in September and October 2024, pushing buildings off their foundations and undercutting roads and bridges. It also caused dozens of electric vehicles and other battery-powered objects, such as scooters and golf carts, to catch fire.

According to one tally, 11 electric cars and 48 lithium-ion batteries caught fire after exposure to salty floodwater from Helene. In some cases, these fires spread to homes.

When a lithium-ion battery pack bursts into flames, it releases toxic fumes, burns violently and is extremely hard to put out. Frequently, firefighters' only option is to let it burn out by itself.

Particularly when these batteries are soaked in saltwater, they can become "ticking time bombs," in the words of Florida State Fire Marshal Jimmy Patronis. That's because the fire doesn't always occur immediately when the battery is flooded. According to the National Highway Traffic Safety Administration, about 36 EVs flooded by Hurricane Ian in Florida in 2022 caught fire, including several that were being towed on flatbed trailers after the storm.

Many consumers are unaware of this risk, yet lithium-ion batteries are widely used in EVs and hybrid cars, e-bikes and scooters, electric lawnmowers, and cordless power tools.

I'm a mechanical engineer and am working to help solve battery safety issues for our increasingly electrified society. Here's what all owners should know about water and the risk of battery fires:

https://www.youtube.com/watch?v=gWkEGEbpqFc

Emergency responders handle EVs that were immersed in saltwater during Hurricane Ian in Florida in 2022, including some that ignited.

The threat of saltwater

The trigger for lithium-ion battery fires is a process called thermal runaway—a cascading sequence of heat-releasing reactions inside the battery cell.

Under normal operating conditions, the probability of a lithium-ion cell going into thermal runaway is less than 1 in 10 million. But it increases sharply if the cell is subjected to electrical, thermal or mechanical stress, such as short-circuiting, overheating or puncture.

Saltwater is a particular problem for batteries because salt dissolved in water is conductive, which means that electric current readily flows through it. Pure water is not very conductive, but the electrical conductivity of seawater can be more than a thousand times higher than that of fresh water.

All EV battery pack enclosures use gaskets to seal off their internal space from the elements outside. Typically, they have waterproof ratings of IP66 or IP67. While these ratings are high, they do not guarantee that a battery will be watertight when it is immersed for a long period of time—say, over 30 minutes.

Battery packs also have various ports to equalize pressure inside the battery and move electrical power in and out. These can be potential pathways for water to leak into the pack enclosure. Inadequate seal ratings and manufacturing defects can also enable water to find its way into the battery pack if it is immersed.

How water leads to fire

All batteries have two terminals: One is marked positive (+), and the other is marked negative (-). When the terminals are connected to a device that uses electricity to do work, such as a light bulb, chemical reactions occur inside the battery that cause electrons to flow from the negative to the positive terminal. This creates an electric current and releases the energy stored in the battery.

Electrons flow between a battery's terminals because the chemical reactions inside the battery create different electrical potentials between the two terminals. This difference is also known as voltage. When saltwater comes into contact with metal battery terminals with different electrical potentials, the battery can short-circuit, inducing rapid corrosion and electric arcing, and generating excessive current and heat. The more conductive the liquid is that penetrates the battery pack, the higher the shorting current and rate of corrosion.

Rapid corrosion reactions within the battery pack produce hydrogen and oxygen, corroding away materials from metallic terminals on the positive side of the battery and depositing them onto the negative side. Even after the water drains away, these deposited materials can form solid shorting bridges that remain inside the battery pack, causing a delayed thermal runaway. A fire can start days after the battery is flooded.

Even a battery pack that is fully discharged isn't necessarily safe during flooding. A lithium-ion cell, even at 0% state of charge, still has about a three-volt potential difference between its positive and negative terminals, so some current can flow between them. For a battery string with many cells in a series—a typical configuration in electric cars—residual voltage can still be high enough to drive these reactions.
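A back-of-the-envelope calculation illustrates the point (the pack layout here is a hypothetical example, not a specific vehicle):

```python
# Hypothetical pack: many ~400 V-class EV packs wire on the order of
# 96 cells in series. Even at 0% state of charge, each lithium-ion cell
# retains roughly 3 V, so the whole string keeps a substantial voltage.
cells_in_series = 96          # assumed layout for illustration
volts_per_cell_empty = 3.0    # approximate cell voltage at 0% charge

residual_pack_voltage = cells_in_series * volts_per_cell_empty
print(residual_pack_voltage)  # ~288 V across the string's end terminals
```

Even a "dead" pack of this kind can therefore drive meaningful shorting currents and corrosion if conductive saltwater bridges its terminals.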

Many scientists, including my colleagues and me, are working to understand the exact sequence of events that can occur in a battery pack after it is exposed to saltwater and lead to thermal runaway. We are also looking for ways to help reduce fire risks from flooded battery packs.

These could include finding better ways to seal the battery packs; using alternative, more corrosion-resistant materials for the battery terminals; and applying waterproof coatings to exposed terminals inside the battery pack.



Global antibiotic consumption has increased substantially since 2016, study finds

NOV. 18, 2024, by One Health Trust

Change in global antibiotic consumption by country and country income classification, 2016–2023. 
(A) Yearly antibiotic consumption rate, measured in DDDs per 1,000 inhabitants per day, by country income classification. 
(B) Absolute change in antibiotic consumption rate between 2016 and 2023 by country in DDDs per 1,000 inhabitants per day. Countries in gray have no data in the database. Country income classifications noted as LMIC = lower-middle-income countries, MIC = middle-income countries, UMIC = upper-middle-income countries, HIC = high-income countries. 
Data Source: Based on IQVIA MIDAS® sales data for period 2016–2023. 
Credit: IQVIA

A new study highlights the recent but fluctuating growth in global human antibiotic consumption, one of the main drivers of growing antimicrobial resistance (AMR). AMR results in infections that no longer respond to antibiotics (and other antimicrobial medicines) and often leads to longer hospital stays, higher treatment costs, and higher mortality rates. AMR is estimated to be associated with nearly five million global deaths annually.

Researchers affiliated with the One Health Trust (OHT), the Population Council, GlaxoSmithKline, the University of Zurich, the University of Brussels, Johns Hopkins University, and the Harvard T.H. Chan School of Public Health analyzed pharmaceutical sales data from 67 countries for 2016-2023 to assess the effects of the COVID-19 pandemic and economic growth on human antibiotic consumption.

The study provides a breakdown of global antibiotic sales in reported countries by national income level, antibiotic class, and antibiotic grouping according to the World Health Organization's (WHO) AWaRe classification system for antibiotic stewardship and projects consumption through 2030.

The study is published in the Proceedings of the National Academy of Sciences.

The study found:

Overall antibiotic sales increased in reporting countries from 2016 to 2023. Antibiotic sales in 67 reporting countries increased by 16.3% from 2016 to 2023, from 29.5 billion defined daily doses (DDDs) to 34.3 billion DDDs. This result reflected a 10.2% increase in the overall consumption rate in these countries, from 13.7 to 15.2 DDDs per 1,000 inhabitants per day.

Before the COVID-19 pandemic, antibiotic consumption rates in high-income countries were decreasing, and consumption rates in middle-income countries were increasing. From 2016 to 2019, antibiotic consumption rates (DDDs per 1,000 inhabitants per day) increased in middle-income countries (9.8%) while decreasing in high-income countries (-5.8%).

The COVID-19 pandemic was significantly correlated with an overall reduction in antibiotic sales, most pronounced in high-income countries. An interrupted time series analysis showed that the onset of the COVID-19 pandemic in 2020 resulted in significantly decreased antibiotic consumption rates across income groups. The decrease was most pronounced in high-income countries, with the consumption rate falling 17.8% from 2019 to 2020. In 2021, lower-middle-income countries led high-income countries in antibiotic consumption rates as high-income countries experienced more sustained reductions.

Middle-income countries experienced increased Watch antibiotic sales relative to Access antibiotic sales throughout the study period. High-income countries consumed consistently higher and overall increasing levels of Access antibiotics compared to Watch antibiotics as defined by the WHO's AWaRe system. Middle-income countries consumed consistently higher and overall increasing Watch antibiotics relative to Access antibiotics.

Middle-income countries experienced the largest increases in antibiotic consumption rates from 2016-2023. All five of the regions with the largest increases in their antibiotic consumption rate over the study period were made up of middle-income countries.

By 2030, global consumption is expected to increase by 52.3% to 75.1 billion DDDs. Global projections based on the data from 67 countries show that by 2030, antibiotic consumption is expected to increase by 52.3% (uncertainty range [UR]: 22.1 to 82.6 percent), from 49.3 billion DDDs to a total of 75.1 (UR: 60.2 to 90.1) billion DDDs.
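The headline figures are internally consistent, as a few lines of arithmetic confirm (all input values are taken directly from the study's reported totals):

```python
# Growth in total antibiotic sales across the 67 reporting countries
ddds_2016 = 29.5e9                  # defined daily doses, 2016
ddds_2023 = 34.3e9                  # defined daily doses, 2023
growth_pct = (ddds_2023 - ddds_2016) / ddds_2016 * 100
print(round(growth_pct, 1))         # -> 16.3 (%)

# 2030 projection: 52.3% growth on an estimated 49.3 billion global DDDs
projected_2030 = 49.3e9 * 1.523
print(round(projected_2030 / 1e9, 1))   # -> 75.1 (billion DDDs)
```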

This study sheds light on recent trends in consumption across country income levels that can be used to help promote the careful use of antibiotics and other public health interventions that may reduce antibiotic consumption, such as improved infection prevention and control measures and increased childhood vaccination coverage. The study also has implications for future pandemic preparedness.

According to Dr. Eili Klein, lead author of the study and Senior Fellow at OHT, "The COVID-19 pandemic temporarily disrupted antibiotic use, but global consumption has rebounded quickly and continues to rise at an alarming rate. To address this escalating crisis, we must prioritize reducing inappropriate antibiotic use in high-income nations while making substantial investments in infrastructure in low- and middle-income countries to curb disease transmission effectively."



California’s Farmland is Sinking Faster Than Ever – Can We Stop It?

BY STANFORD U. NOV. 19, 2024


Land subsidence in California’s San Joaquin Valley is worsening, with a Stanford study showing an average sinking rate of nearly an inch per year from 2006 to 2022.

California’s San Joaquin Valley is experiencing severe land subsidence due to groundwater over-extraction, causing extensive damage and economic loss.

A Stanford study from 2006 to 2022 highlights an average subsidence rate of nearly an inch per year. The research suggests using flood-managed aquifer recharge to address this issue sustainably by refilling aquifers and preventing further land sinking.

Subsidence in California’s Heartland

A new study reveals that California’s San Joaquin Valley has been sinking at unprecedented rates over the past two decades due to excessive groundwater extraction surpassing natural recharge.

The research found that, on average, the valley sank nearly an inch per year from 2006 to 2022. While scientists and water managers have long known about this phenomenon, called “subsidence,” its full impact remained unclear because the total extent of sinking had not been measured.

This knowledge gap was partly due to a lack of consistent data. Satellite radar systems, which are essential for accurately tracking changes in ground elevation, did not continuously monitor the San Joaquin Valley between 2011 and 2015. Stanford researchers have now filled in this missing data, estimating how much the land sank during that period.

“Our study is the first attempt to really quantify the full Valley-scale extent of subsidence over the last two decades,” said senior study author Rosemary Knight, a professor of geophysics in the Stanford Doerr School of Sustainability. “With these findings, we can look at the big picture of mitigating this record-breaking subsidence.”

The new study, published today (November 19) in Communications Earth and Environment, offers ideas on how to stop the sinking through strategic regional water recharge and other management approaches.

The Price of Subsidence

Rapid and uneven declines in land elevation have forced multimillion-dollar repairs to canals and aqueducts that ferry critical water through the San Joaquin Valley to southern California’s major cities. By damaging local wells and irrigation ditches, this subsidence is also exacerbating water supply issues for one of the most agriculturally productive regions in the world.

“The bill for repairing major aqueducts like the Friant-Kern Canal and the California Aqueduct is exceptionally high,” said lead author Matthew Lees, PhD ’23, a research associate with the University of Manchester who worked on the study as a PhD student in geophysics at Stanford. “But the subsidence is having other effects, too. How much was last year’s flooding worsened by subsidence? How much are farmers spending to re-level their land? A lot of the costs of subsidence aren’t well known.”

Historical Context and Modern Challenges

Subsidence occurs as water is removed from natural reservoirs called aquifers, where water is stored in underground sediments including sand, gravel, and clay. Like a sponge, the sediments are full of pores. As those spaces are emptied, the sediments compact – in some cases permanently, altering future water-carrying capacity – and cause the ground level to fall.

In the San Joaquin Valley, which runs from east of the San Francisco Bay Area down to the mountains north of Los Angeles, booming agriculture and population growth prompted aggressive pumping of groundwater between 1925 and 1970. The result: More than 4,000 square miles – an area half the size of New Jersey – sank by over 12 inches, reaching about 30 feet in some locations, a profound landscape change that a 1999 governmental report described as “one of the single largest alterations of the land surface attributed to humankind.”

The problem ebbed during the 1970s following the installation of new aqueducts. But it roared back in the early 2000s amid a series of droughts, intensified groundwater pumping, land-use changes, and reduced deliveries from Northern California rivers. “There are two astonishing things about the subsidence in the valley. First, is the magnitude of what occurred prior to 1970. And second, is that it is happening again today,” said Knight.

Insights Into Current Subsidence Rates

To gauge the recent subsidence rate, Lees and Knight turned to a technique known as interferometric synthetic aperture radar, or InSAR. The technique captures elevation changes across roughly football field-size chunks of land as frequently now as a few times per month by beaming radar signals from orbit. The signals reflect off the ground back to the satellites, and analysis of the received signal reveals changes in ground elevation.

The InSAR data record for the San Joaquin Valley is patchy between 2011 and 2015, due to limited satellite coverage. To fill this gap, Lees and Knight used elevation data from Global Positioning System (GPS) stations scattered throughout the region. They identified spatial patterns in the InSAR record and used these to interpolate elevation in the vast areas between GPS stations.
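A heavily simplified sketch conveys the gap-filling idea (the study's actual method is more sophisticated; all numbers below are invented for illustration): if subsidence is approximated as a fixed spatial pattern scaled by a yearly amplitude, a handful of GPS stations suffice to estimate that amplitude for the years InSAR missed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial pattern of subsidence, learned from InSAR-covered years
n_pixels, n_gps = 500, 6
pattern = rng.random(n_pixels)

# GPS stations observe elevation change at only a few of those pixels
gps_idx = rng.choice(n_pixels, n_gps, replace=False)
true_amplitude = 2.3  # hypothetical inches of subsidence in a gap year
gps_obs = true_amplitude * pattern[gps_idx] + rng.normal(0, 0.02, n_gps)

# Least-squares scale factor from the GPS stations alone
amp_hat = (gps_obs @ pattern[gps_idx]) / (pattern[gps_idx] @ pattern[gps_idx])

# Valley-wide reconstruction for the gap year: pattern times amplitude
reconstruction = amp_hat * pattern
assert abs(amp_hat - true_amplitude) < 0.2
```

The appeal of this kind of separable model is that the dense spatial information comes from the radar record while the sparse GPS record only has to supply a time series of scale factors.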

Sustainable Solutions for the Future

Additional analysis by the researchers suggests that San Joaquin Valley aquifers require approximately 220 billion gallons of water coming in each year – through natural or engineered processes – to prevent future subsidence.

This is about 7 billion gallons less than the amount of surface water left over in the San Joaquin Valley in an average year after all environmental needs are covered. “I am optimistic that we can do something about subsidence,” said Knight. “My group and others have been studying this problem for some time, and this study is a key piece in figuring out how to sustainably address it.”

Replenishing Aquifers to Prevent Sinking

A water management approach called flood-managed aquifer recharge (flood-MAR), which is being widely adopted in California, could help. It involves diverting excess surface water from precipitation and snowmelt to locations where the water can percolate down and recharge aquifers.

Drenching the whole of the Valley with flood-MAR water is not feasible. "We should be targeting the places where subsidence will cause the greatest social and economic costs," said Knight. "So, we look at places where subsidence is going to damage an aqueduct or domestic wells in small communities, for instance."

“By taking this Valley-scale perspective,” added Knight, “we can start to get our head around viable solutions.”



Monday, 18 November 2024

One or many? Exploring the population groups of the Antarctic blue whale using historical mark-recovery data

Nov. 15, 2024, by U. of Washington

Antarctic blue whales are the world's largest animals, and they are still recovering from being hunted nearly to extinction during 20th century whaling. 
Credit: Paula Olson

Hunted nearly to extinction during 20th century whaling, the Antarctic blue whale, the world's largest animal, went from a population size of roughly 200,000 to little more than 300. The most recent estimate in 2004 put Antarctic blue whales at less than 1% of their pre-whaling levels.

But is this population recovering? Is there just one population of Antarctic blue whales, or multiple? Do these questions matter for conservation?

A team led by Zoe Rand, a University of Washington doctoral student, tackles these questions in a study, published Nov. 14 in Endangered Species Research.

Building on the last assessment of Antarctic blue whales in 2004 and using old whaling records, which were surprisingly detailed, Rand and her colleagues investigated if the Antarctic blue whales consist of different populations or are one big circumpolar population. Study co-authors are Trevor Branch, a UW professor of aquatic and fishery sciences, and Jennifer Jackson from the British Antarctic Survey.

Antarctic blue whales are listed as an endangered species, and understanding their population structure is essential for their conservation. Conservation at the population-level increases biodiversity, which helps the species adapt better to environmental changes and increases chances of long-term survival.

During the whaling years, biologists began the Discovery Marking Program: foot-long metal rods stamped with serial numbers were shot into the muscles of whales. When a marked whale was caught, the rod was returned, and the whale's sex, length, and the location where it was caught were noted. Comparing where whales were marked with where they were caught can offer valuable insight into the movement of Antarctic blue whales, but these data had never before been used to examine population structure.

Antarctic blue whales are listed as an endangered species, and understanding their population structure is essential for their conservation. 
Credit: Paula Olson

In this new study, these historical data were used alongside contemporary survey data in Bayesian models to calculate inter-annual movement rates among the three ocean basins that make up the Southern Ocean—Atlantic, Indian and Pacific—which serve as the feeding grounds for Antarctic blue whales. The team found frequent mixing among the ocean basins, suggesting that whales do not return to the same basin every year. This points to Antarctic blue whales being a single circumpolar population in the Southern Ocean.
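The intuition behind the mixing result can be shown with a toy Markov model (the transition rates below are invented for illustration, not the study's Bayesian estimates): when annual movement rates between basins are high, any cohort of whales spreads evenly across the Southern Ocean within a few years.

```python
import numpy as np

# Annual movement probabilities among the three feeding-ground basins.
# High off-diagonal entries represent frequent inter-basin mixing.
P = np.array([
    [0.6, 0.2, 0.2],   # from Atlantic
    [0.2, 0.6, 0.2],   # from Indian
    [0.2, 0.2, 0.6],   # from Pacific
])

dist = np.array([1.0, 0.0, 0.0])   # a cohort that starts in the Atlantic
for _ in range(10):                # ten years of annual movement
    dist = dist @ P

# after a decade the cohort is spread almost evenly across the basins,
# consistent with a single well-mixed circumpolar population
assert np.allclose(dist, [1/3, 1/3, 1/3], atol=0.01)
```

With low movement rates the starting distribution would persist for decades instead, which is the signature of separate basin-level populations that the study did not find.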

These results are consistent with studies of Antarctic blue whale songs, heard throughout the Southern Ocean. Only one song type has been recorded among the Antarctic blue whales. In comparison, pygmy blue whales have five different songs corresponding to five different populations. These results are also consistent with genetic studies, which found that Antarctic blue whales are more closely related than would be expected if they were separate populations.

This study is the first time that historical mark-recovery data from the Discovery Marking Program have been analyzed using modern quantitative methods. Such data exist for many other hunted whale species, such as fin and sei whales, so the new study's methods could provide a framework for similar analyses of those species too.

There is still a lot scientists don't know about the Antarctic blue whale. Even though the whales do not appear to be separated geographically on their feeding grounds in the different ocean basins, they could still have distinct population structure based on differences in breeding habitat or the timing of migration.

However, almost nothing is known about Antarctic blue whale breeding behavior, according to the researchers. Using historical data from whaling alongside contemporary data—such as satellite tagging and photo-identification—remains scientists' best hope for uncovering the secrets of the largest animal on Earth.


Recommend this post and follow
The Life of Earth

Clinical Trial: Mushroom Supplement May Halt Prostate Cancer Growth

BY CITY OF HOPE, NOV. 17, 2024
https://scitechdaily.com/clinical-trial-mushroom-supplement-may-halt-prostate-cancer-growth/


City of Hope researchers found that white button mushroom supplements may slow prostate cancer progression by reducing cancer-promoting immune cells. Early clinical trial results show improved immune responses, but further research is needed.

The bidirectional research examines both laboratory findings and human clinical trial data, revealing that the medicinal use of white button mushrooms reduces the type of cells that suppress the immune system and facilitate the spread of prostate cancer.

Researchers at City of Hope, one of the largest and most advanced cancer research and treatment organizations in the United States—ranked among the nation’s top five cancer centers by U.S. News & World Report and a national leader in providing cancer patients with best-in-class, integrated supportive care programs—have now uncovered why taking an investigational white button mushroom supplement shows promise in slowing and even preventing the spread of prostate cancer. This discovery comes from a phase 2 clinical trial exploring the concept of “food as medicine.”

Looking at preclinical and preliminary human data, the City of Hope scientists found that taking white button mushroom pills reduces a class of immune cells called myeloid-derived suppressor cells (MDSCs), which have been linked to cancer development and spread.

Exploring the Potential of Plant-Derived Therapies

“City of Hope researchers are investigating foods like white button mushroom, grape seed extract, pomegranate, blueberries, and ripe purple berries called Jamun for their potential medicinal properties. We’re finding that plant-derived substances may one day be used to support traditional cancer treatment and prevention practices,” said Shiuan Chen, Ph.D., the Lester M. and Irene C. Finkelstein Chair in Biology, professor and chair of the Department of Cancer Biology and Molecular Medicine at Beckman Research Institute of City of Hope, and senior author of the new Clinical and Translational Medicine study.

“This study suggests that ‘food as medicine’ treatments could eventually become normal, evidence-based cancer care that is recommended for everyone touched by cancer.”


The use of naturally derived therapies for cancer treatment — called integrative oncology — is growing in popularity as people become more health-conscious and aware of the benefits of whole-person cancer care. Supported by a $100 million gift from Panda Express Co-CEOs Andrew and Peggy Cherng, City of Hope’s Cherng Family Center for Integrative Oncology is accelerating the research, education, and clinical care needed to ensure cancer patients and their doctors have access to safe, proven approaches.
Translating Lab Discoveries Into Clinical Care

At City of Hope, lab researchers work closely with physicians, allowing for streamlined bidirectional research so that laboratory findings can be taken to patients and what is observed in patients can be taken and put back under the microscope for the development of expedited, more effective cancer treatments.

In mouse models, researchers found that administration of white button mushroom extract significantly delayed the growth of tumors and extended the survival of mice. It also improved T cell immune response through the reduction of MDSC levels in animal models, meaning it improved the immune system’s ability to kill cancer.

The researchers profiled blood draws from some of the men participating in City of Hope's phase 2 clinical trial. The men were under active surveillance as they took white button mushroom supplements. Focusing on eight participants' samples before and after three months of white button mushroom treatment, the scientists found fewer tumor-promoting MDSCs and more anti-tumor T and natural killer cells, suggesting white button mushroom rebuilds anti-cancer immune defenses and slows cancer growth.

“Our study emphasizes the importance of seeking professional guidance to ensure safety and to avoid self-prescribing supplements without consulting a health care provider,” said Xiaoqiang Wang, M.D., Ph.D., City of Hope staff scientist and first author of the study. “Some people are buying mushroom products or extract online, but these are not FDA-approved. While our research has promising early results, the study is ongoing. That said, it couldn’t hurt if people wanted to add more fresh white button mushrooms to their everyday diet.”

People interested in joining the National Cancer Institute-funded phase 2 clinical trial should visit https://www.cityofhope.org/research/clinical-trials. City of Hope researchers are now focusing on whether the reduction in MDSCs is associated with improved clinical outcomes in patients with prostate cancer.


Recommend this post and follow
The Life of Earth

We Live in Cold Times

Apr 26, 2021

Jørgen Peder Steffensen is a professor in ice core related research at the Niels Bohr Institute at the University of Copenhagen.

Using ice core data, his team has reconstructed the last 10,000 years of climate history.

https://www.youtube.com/watch?v=WE0zHZPQJzA


Recommend this post and follow
The Life of Earth


Sunday, 17 November 2024

Genes of ancient animal relatives used to grow a mouse: Study reveals hidden history of stem cells

Nov. 15, 2024, by Queen Mary University of London

The mouse on the left is a chimera with dark eyes and patches of black fur, a result of stem cells reprogrammed with a choanoflagellate Sox gene. 
The wildtype mouse on the right has red eyes and all-white fur. The colour difference is due to genetic markers used to distinguish the stem cells, not a direct effect of the gene itself. 
Credit: Gao Ya and Alvin Kin Shing Lee, with thanks to the Centre for Comparative Medicine Research (CCMR) for their support.

An international team of researchers has achieved an unprecedented milestone: the creation of mouse stem cells capable of generating a fully developed mouse using genetic tools from a unicellular organism with which we share a common ancestor that predates animals.

This breakthrough reshapes our understanding of the genetic origins of stem cells, offering a new perspective on the evolutionary ties between animals and their ancient single-celled relatives. The research is published in the journal Nature Communications.

In an experiment that sounds like science fiction, Dr. Alex de Mendoza of Queen Mary University of London collaborated with researchers from The University of Hong Kong to use a gene found in choanoflagellates, a single-celled organism related to animals, to create stem cells which they then used to give rise to a living, breathing mouse.

Choanoflagellates are the closest living relatives of animals, and their genomes contain versions of the genes Sox and POU, known for driving pluripotency—the cellular potential to develop into any cell type—within mammalian stem cells. This unexpected discovery challenges a longstanding belief that these genes evolved exclusively within animals.

"By successfully creating a mouse using molecular tools derived from our single-celled relatives, we're witnessing an extraordinary continuity of function across nearly a billion years of evolution," said Dr. de Mendoza. "The study implies that key genes involved in stem cell formation might have originated far earlier than the stem cells themselves, perhaps helping pave the way for the multicellular life we see today."

The 2012 Nobel Prize awarded to Shinya Yamanaka demonstrated that it is possible to obtain stem cells from "differentiated" cells just by expressing four factors, including a Sox (Sox2) and a POU (Oct4) gene. In this new research, through a set of experiments conducted in collaboration with Dr. Ralf Jauch's lab at The University of Hong Kong / Center for Translational Stem Cell Biology, the team introduced choanoflagellate Sox genes into mouse cells, replacing the native Sox2 gene and achieving reprogramming towards the pluripotent stem cell state.

To validate the efficacy of these reprogrammed cells, they were injected into a developing mouse embryo. The resulting chimeric mouse displayed physical traits from both the donor embryo and the lab induced stem cells, such as black fur patches and dark eyes, confirming that these ancient genes played a crucial role in making stem cells compatible with the animal's development.

The study traces how early versions of Sox and POU proteins, which bind DNA and regulate other genes, were used by unicellular ancestors for functions that would later become integral to stem cell formation and animal development. "Choanoflagellates don't have stem cells, they're single-celled organisms, but they have these genes, likely to control basic cellular processes that multicellular animals probably later repurposed for building complex bodies," explained Dr. de Mendoza.

This novel insight emphasizes the evolutionary versatility of genetic tools and offers a glimpse into how early life forms might have harnessed similar mechanisms to drive cellular specialization long before true multicellular organisms came into being, underscoring the importance of repurposing in evolution.

This discovery has implications beyond evolutionary biology, potentially informing new advances in regenerative medicine. By deepening our understanding of how stem cell machinery evolved, scientists may identify new ways to optimize stem cell therapies and improve cell reprogramming techniques for treating diseases or repairing damaged tissue.

"Studying the ancient roots of these genetic tools lets us innovate with a clearer view of how pluripotency mechanisms can be tweaked or optimized," Dr. Jauch said, noting that advancements could arise from experimenting with synthetic versions of these genes that might perform even better than native animal genes in certain contexts.


Recommend this post and follow
The Life of Earth

Study confirms Egyptians drank hallucinogenic cocktails in ancient rituals

NOV. 15, 2024, by U of South Florida

(a) Drinking vessel in shape of Bes head; El-Fayūm Oasis, Egypt; Ptolemaic-Roman period (4th century BCE—3rd century CE), (courtesy of the Tampa Museum of Art, Florida). 
(b) Bes mug from the Ghalioungui collection, 10.7 × 7.9 cm (Ghalioungui, G. Wagner 1974, Kaiser 2003, cat. no. 342). 
(c) Bes mug inv. no. 14.415 from the Allard Pierson Museum, 11.5 × 9.3 cm (courtesy of the Allard Pierson Museum, Amsterdam; photo by Stephan van der Linden). 
(d) Bes mug from El-Fayum, dimensions unknown (Kaufmann 1913; Kaiser 2003, cat. no. 343). 
Credit: Scientific Reports (2024). DOI: 10.1038/s41598-024-78721-8

A University of South Florida professor found the first-ever physical evidence of hallucinogens in an Egyptian mug, validating written records and centuries-old myths of ancient Egyptian rituals and practices. Through advanced chemical analyses, Davide Tanasi examined one of the world's few remaining Egyptian Bes mugs.

Such mugs, including the one donated to the Tampa Museum of Art in 1984, are decorated with the head of Bes, an ancient Egyptian god or guardian demon worshiped for protection, fertility, medicinal healing and magical purification. Published in Scientific Reports, the study sheds light on an ancient Egyptian mystery: The secret of how Bes mugs were used about 2,000 years ago.

"There's no research out there that has ever found what we found in this study," Tanasi said. "For the first time, we were able to identify all the chemical signatures of the components of the liquid concoction contained in the Tampa Museum of Art's Bes mug, including the plants used by Egyptians, all of which have psychotropic and medicinal properties."

The presence of Bes mugs in many different contexts over a long period of time made it extremely difficult to determine their contents or their roles in ancient Egyptian culture.

"For a very long time now, Egyptologists have been speculating what mugs with the head of Bes could have been used for, and for what kind of beverage, like sacred water, milk, wine or beer," said Branko van Oppen, curator of Greek and Roman art at the Tampa Museum of Art. "Experts did not know if these mugs were used in daily life, for religious purposes or in magic rituals."

(a–c) optical image of the sample collected from the Bes vessel at different magnifications. 
(d) optical image of sample TMA1 flattened on half diamond compression cell. 
(e) average spectrum of sample TMA1. 
Credit: Scientific Reports (2024). DOI: 10.1038/s41598-024-78721-8

Several theories about the mugs and vases were based on myths, but few were ever tested to reveal the vessels' exact contents.

Tanasi, who developed this study as part of the Mediterranean Diet Archaeology project promoted by the USF Institute for the Advanced Study of Culture and the Environment, collaborated with several USF researchers and partners in Italy at the University of Trieste and the University of Milan to perform chemical and DNA analyses. With a pulverized sample from scraping the inner walls of the vase, the team combined numerous analytical techniques for the first time to uncover what the mug last held.

The new tactic was successful and revealed the vase had a cocktail of psychedelic drugs, bodily fluids and alcohol—a combination that Tanasi believes was used in a magical ritual reenacting an Egyptian myth, likely for fertility. The concoction was flavored with honey, sesame seeds, pine nuts, licorice and grapes, which were commonly used to make the beverage look like blood.

"This research teaches us about magic rituals in the Greco-Roman period in Egypt," Van Oppen said. "Egyptologists believe that people visited the so-called Bes Chambers at Saqqara when they wished to confirm a successful pregnancy because pregnancies in the ancient world were fraught with dangers.

"So, this combination of ingredients may have been used in a dream-vision inducing magic ritual within the context of this dangerous period of childbirth."

"Religion is one of the most fascinating and puzzling aspects of ancient civilizations," Tanasi said. "With this study, we've found scientific proof that the Egyptian myths have some kind of truth and it helps us shed light on the poorly understood rituals that were likely carried out in the Bes Chambers in Saqqara, near the Great Pyramids at Giza."


Recommend this post and follow
The birth of modern Man

Ancient Humans Were Apex Predators For 2 Million Years, Study Discovers

17 Nov. 2024, By M. MCRAE

Cave art reflects our ancient eating habits. 
(Gallo Images-Denny Allen/Getty Images)

Paleolithic cuisine was anything but lean and green, according to a study on the diets of our Pleistocene ancestors.
For a good 2 million years, Homo sapiens and their ancestors ditched the salad and dined heavily on meat, putting them at the top of the food chain.

It's not quite the balanced diet of berries, grains, and steak we might picture when we think of 'paleo' food.

But according to a study from 2021 by anthropologists from Israel's Tel Aviv University and the University of Minho in Portugal, modern hunter-gatherers have given us the wrong impression of what we once ate.
"This comparison is futile, however, because 2 million years ago hunter-gatherer societies could hunt and consume elephants and other large animals – while today's hunter gatherers do not have access to such bounty," researcher Miki Ben‐Dor from Israel's Tel Aviv University explained when the research was published.

A look through hundreds of previous studies – on everything from modern human anatomy and physiology to measures of the isotopes inside ancient human bones and teeth – suggests we were primarily apex predators until roughly 12,000 years ago.

Reconstructing the grocery list of hominids who lived as far back as 2.5 million years ago is made all the more difficult by the fact that plant remains don't preserve as easily as animal bones, teeth, and shells.
Other studies have used chemical analysis of bones and tooth enamel to find localized examples of diets heavy in plant material. But extrapolating this to humanity as a whole isn't so straightforward.

We can find ample evidence of game hunting in the fossil record, but to determine what we gathered, anthropologists have traditionally turned to modern-day ethnography based on the assumption that little has changed.

According to Ben-Dor and his colleagues, this is a huge mistake.

"The entire ecosystem has changed, and conditions cannot be compared," said Ben‐Dor.
The Pleistocene epoch was a defining time in Earth's history for us humans. By the end of it, we were marching our way into the far corners of the globe, outliving every other hominid on our branch of the family tree.

Graph showing where Homo sapiens sat on the spectrum of carnivore to herbivore during the Pleistocene and Upper Pleistocene (UP).
Credit: Dr. Miki Ben-Dor

Dominated by the last great ice age, most of what is today Europe and North America was regularly buried under thick glaciers.
With so much water locked up as ice, ecosystems around the world were vastly different to what we see today. Large beasts roamed the landscape, including mammoths, mastodons, and giant sloths – in far greater numbers than we see today.

Of course it's no secret that Homo sapiens used their ingenuity and uncanny endurance to hunt down these massive meal-tickets. But the frequency with which they preyed on these herbivores hasn't been so easy to figure out.

Rather than rely solely on the fossil record, or make tenuous comparisons with pre-agricultural cultures, the researchers turned to the evidence embedded in our own bodies and compared it with our closest cousins.

"We decided to use other methods to reconstruct the diet of stone-age humans: to examine the memory preserved in our own bodies, our metabolism, genetics and physical build," said Ben‐Dor.

"Human behavior changes rapidly, but evolution is slow. The body remembers."

For example, compared with other primates, our bodies need more energy per unit of body mass. Especially when it comes to our energy-hungry brains. Our social time, such as when it comes to raising children, also limits the amount of time we can spend looking for food.

We have higher fat reserves, and can make use of them by rapidly turning fats into ketones when the need arises. Unlike other omnivores, where fat cells are few but large, ours are small and numerous, echoing those of a predator.

Our digestive systems are also suspiciously like that of animals higher up the food chain. Having unusually strong stomach acid is just the thing we might need for breaking down proteins and killing harmful bacteria you'd expect to find on a week-old mammoth chop.


Even our genomes point to a heavier reliance on a meat-rich diet than a sugar-rich one.

"For example, geneticists have concluded that areas of the human genome were closed off to enable a fat-rich diet, while in chimpanzees, areas of the genome were opened to enable a sugar-rich diet," said Ben‐Dor.

The team's argument is extensive, touching upon evidence in tool use, signs of trace elements and nitrogen isotopes in Paleolithic remains, and dental wear.

It all tells a story where our genus' trophic level—Homo's position in the food web—became highly carnivorous for us and our cousins, Homo erectus, roughly 2.5 million years ago, and remained that way until the Upper Paleolithic around 11,700 years ago.

From there, studies on modern hunter-gatherer communities become a little more useful, as a decline in populations of large animals and the fragmentation of cultures around the world led to more plant consumption, culminating in the Neolithic revolution of farming and agriculture.

None of this is to say we ought to eat more meat. Our evolutionary past isn't an instruction guide on human health, and as the researchers emphasize, our world isn't what it used to be.

But knowing where our ancestors sat in the food web has a big impact on understanding everything from our own health and physiology, to our influence over the environment in times gone by.


Recommend this post and follow
The birth of modern Man

Saturday, 16 November 2024

New Research Challenges Long-Held Theories on How Migratory Birds Navigate

By Bangor U., Nov. 15, 2024


Migratory birds rely on Earth’s magnetic inclination and declination for navigation, not total intensity. This reveals a flexible and precise internal mapping system.

Migratory birds can navigate using just magnetic inclination and declination. This discovery challenges assumptions about the need for all magnetic field components and highlights the flexibility of avian navigation systems.

Migratory birds are renowned for their remarkable ability to travel thousands of kilometers to reach their breeding or wintering sites. A study conducted by Bangor University revealed that Eurasian reed warblers (Acrocephalus scirpaceus) rely solely on the Earth’s magnetic inclination and declination to pinpoint their location and navigate their course. This discovery challenges the long-standing assumption that all aspects of the Earth’s magnetic field, especially total intensity, are crucial for accurate navigation.

Scientists have long believed that these birds use a ‘map-and-compass’ system: they first determine their location using a ‘map’ and then use a ‘compass’ to orient themselves in the correct direction. However, the exact nature of this ‘map’ has been the subject of ongoing debate.

Experimental Design: Simulating Magnetic Displacement

In a carefully designed experiment, warblers were exposed to artificially altered magnetic inclination and declination values, simulating a displacement to a different geographic location while keeping the total magnetic intensity unchanged.

Despite this ‘virtual displacement’, the birds adjusted their migratory routes as if they were in the new location, demonstrating compensatory behavior. This response suggests that birds can extract both positional and directional information from magnetic cues, even when other components of the Earth’s magnetic field, such as total intensity, remain unchanged.
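The 'map' side of this finding can be sketched as a bicoordinate lookup: if inclination varies mainly with latitude and declination mainly with longitude, two sensed values are enough to pin down a position. The toy model below uses purely synthetic linear fields (an assumption for illustration, not a real geomagnetic model):

```python
import numpy as np

# Synthetic magnetic map over a patch of Europe (illustrative assumption only):
# inclination increases northward, declination increases eastward.
lats = np.linspace(40, 60, 201)   # degrees north, 0.1-degree grid
lons = np.linspace(-10, 30, 401)  # degrees east, 0.1-degree grid
LAT, LON = np.meshgrid(lats, lons, indexing="ij")

inclination = 55 + 0.8 * (LAT - 50)   # degrees
declination = 2 + 0.25 * (LON - 10)   # degrees

def locate(sensed_incl, sensed_decl):
    """Return the grid point (lat, lon) whose inclination and declination
    jointly best match the sensed values."""
    err = (inclination - sensed_incl) ** 2 + (declination - sensed_decl) ** 2
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return lats[i], lons[j]

# Changing only these two cues, as the virtual-displacement experiment did,
# shifts the inferred position even though nothing else about the field changed.
print(locate(59.0, 4.5))
```

Altering inclination and declination alone was enough to make the birds behave as if displaced, which is exactly what such a two-cue lookup would predict.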

The research provided strong evidence that migratory birds rely on inclination and declination to determine their location, even when these cues conflict with other magnetic field components.

“What’s interesting is that these findings reveal that the birds don’t necessarily need all components of the Earth’s magnetic field to determine their position,” said Professor Richard Holland, who specializes in animal behavior and led the study. “They can rely solely on inclination and declination, which are also used in compass orientation, to extract their location.”

Implications for Understanding Avian Navigation

The study challenges previous assumptions that all components of the Earth’s magnetic field, particularly total intensity, are necessary for accurate navigation. “It remains to be seen whether birds use the total intensity of the Earth’s magnetic field for navigation in other contexts, but what we’ve shown is that these two components—magnetic inclination and declination—are enough to provide positional information,” explained Richard.

This discovery advances the understanding of avian navigation and supports the theory that birds possess a complex and flexible internal navigation system. This mechanism allows them to adjust to changes in their environment, even when encountering conditions they’ve never experienced before.

The findings open new avenues for research into animal navigation and may hold implications for broader biological studies, including how animals interact with and interpret their environment.



Recommend this post and follow
The Life of Earth