Fluttering leaves could one day generate electricity

The quest for clean energy alternatives may soon include the ability to harvest energy from leaves shifting in the wind.

Researchers from the Laboratory of Organic Electronics at Linköping University have discovered a way to power a small electric circuit using the fluctuations between sun and shade, similar to a leaf fluttering in the wind.

Although the technology is in its early stages, the study shows that heat fluctuations from sunlight to shade can be converted into energy.

“Plants and their photosynthesis systems are continuously subjected to fluctuations between sunshine and shade,” said Magnus Jonsson, an author of the study. “We have drawn inspiration from this and developed a combination of materials in which changes in heating between sunshine and shade generate electricity.”

The study was published in the journal Advanced Optical Materials.

Previously, Jonsson had worked on a project that developed small nanoantennas: gold nanodiscs that absorb sunlight and generate heat by reacting to near-infrared light.

In this new study, the nanoantennas powered a tiny optical generator using a pyroelectric film. Pyroelectric means that electricity is generated from changes in temperature.

As the nanoantennas were subjected to fluctuations between sunlight and shade, they generated heat that was then converted into electricity.
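
The pyroelectric output scales with how quickly the film's temperature changes. As a rough, back-of-the-envelope illustration (not taken from the study), the short Python sketch below estimates the current from a pyroelectric film as I = p * A * dT/dt; the coefficient, electrode area, and heating rate are assumed values.

# Illustrative estimate with assumed values (not figures from the study):
# a pyroelectric film produces a current I = p * A * dT/dt, where p is the
# pyroelectric coefficient, A the electrode area, and dT/dt the rate of
# temperature change, e.g. from flickering between sunlight and shade.

p_coefficient = 30e-6   # C/(m^2 K), assumed typical value for a polymer film
area = 1e-4             # m^2, a 1 cm^2 electrode (assumed)
dT_dt = 2.0             # K/s, assumed rate of the sun-to-shade temperature swing

current_amps = p_coefficient * area * dT_dt
print(f"Estimated pyroelectric current: {current_amps * 1e9:.1f} nA")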

“The nanoantennas can be manufactured across large areas, with billions of the small discs uniformly distributed over the surface,” said Jonsson. “The spacing between discs in our case is approximately 0.3 micrometres. We have used gold and silver, but they can also be manufactured from aluminium or copper.”

A polarized polymer was also required for energy conversion.

“We force the polarisation into the material, and it remains polarised for a long time,” said Mina Shiran Chaharsoughi, the lead author of the study.

The researchers tested their theory with experiments and computer simulations. In one experiment, leaves on a twig fluttering in front of a fan cast shifting shadows on the device, which was able to power a small external circuit.

The study has many exciting potential applications and could result in another dependable source of renewable energy.

By Kay Vandette, Earth.com Staff Writer

There are actually two different types of water

Many people do not realize that water molecules take on two different forms with almost identical physical properties. A new study led by the University of Basel is the first to demonstrate that these two forms of water do not react with other chemicals in the same way.

Water is a molecule composed of a single oxygen atom linked to two hydrogen atoms. At the molecular level, water exists in two forms, or isomers.

The nuclear spins of the two hydrogen atoms have different relative orientations in the two isomers. Depending on the orientation of the spins, the molecules are referred to as either ortho- or para-water.

The nearly identical physical properties of the isomers make them challenging to separate. However, thanks to a technique based on electric fields developed by Professor Jochen Küpper from the Hamburg Center for Free-Electron Laser Science, the team succeeded in doing just that.

The experts managed to separate the two forms of water to examine how they differ in terms of their chemical reactivity, and found distinctive reactions between them.

The researchers studied controlled reactions between the “pre-sorted” water isomers and ultra-cold protonated nitrogen (diazenylium) ions held in a trap. In this reaction, a diazenylium ion transfers its proton to a water molecule. The same reaction is also observed in the chemistry of interstellar space.

According to the study, para-water reacts about 25 percent faster than ortho-water, which can be explained by the nuclear spin that also impacts the rotation of the water molecules. This means that different attractive forces act between the reaction partners. Para-water is able to attract its reaction partner more strongly than the ortho-form, which leads to an increased chemical reactivity.

“The better one can control the states of the molecules involved in a chemical reaction, the better the underlying mechanisms and dynamics of a reaction can be investigated and understood,” explained study lead author Professor Stefan Willitsch.

The research is published in the journal Nature Communications.

By Chrissy Sexton, Earth.com Staff Writer

Why is ice so slippery? It’s more complicated than you may think

If you’ve ever been unfortunate enough to accidentally step on a patch of ice, you’re probably well aware of what happens next. The fact that ice is slippery – much to the chagrin of unassuming pedestrians – has been known for years. It’s why we use ice as a surface for sports such as speed skating, curling, and hockey. But while the slipperiness of ice is common knowledge, it’s not entirely understood from a science perspective.

In 1886, an Irish physicist by the name of John Joly presented the first scientific explanation for low friction on ice. He posited that when an object touches the surface of the ice, the local contact pressure is so high that the ice melts, creating a liquid water layer that lubricates the sliding. Currently, the consensus is that although liquid water at the ice surface does reduce sliding friction on ice, it is not melted by pressure. Rather, the ice is melted by frictional heat produced during sliding.

Researchers Daniel Bonn, a professor at the University of Amsterdam, and Mischa Bonn, from the Max Planck Institute for Polymer Research, have now demonstrated that friction on ice is more complicated than previously thought. Using macroscopic friction experiments at temperatures ranging from 0 °C to -100 °C, they showed that the ice surface transforms from an extremely slippery surface at typical winter sports temperatures, to a surface with high friction at -100 °C.

In determining this, the research team performed spectroscopic measurements of the state of water molecules at the surface, and compared these with molecular dynamics simulations. This combination of theory and experiment determined that two types of water molecules exist at the ice surface: water molecules that are stuck to the underlying ice, bound by three hydrogen bonds, and mobile water molecules that are bound by only two hydrogen bonds. The mobile water molecules continuously roll over the ice, powered by thermal vibrations.

When the temperature is increased, the two types of surface molecules are interconverted: the quantity of mobile molecules is increased at the expense of molecules that are stuck to the surface of the ice. Interestingly, this change in mobility of the topmost water molecules at the ice surface – caused by increases in temperature – perfectly matches the temperature-dependence of the measured friction force. In other words, the larger the mobility at the ice surface, the lower the friction, and vice versa.
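
As a toy illustration of this two-state picture (not the researchers' actual simulation), the Python sketch below uses a simple Boltzmann factor, with an assumed hydrogen-bond energy, to show how the fraction of mobile, two-bond molecules grows as the surface warms.

import math

# Toy two-state model with assumed parameters (not the study's simulation):
# surface molecules are either "stuck" (three hydrogen bonds) or "mobile"
# (two hydrogen bonds), with populations set by a Boltzmann factor for the
# assumed energy cost of freeing one hydrogen bond.
K_B = 8.617e-5     # Boltzmann constant in eV/K
DELTA_E = 0.25     # eV, assumed cost of breaking one hydrogen bond

def mobile_fraction(temp_celsius):
    """Fraction of surface molecules in the mobile, two-bond state."""
    kT = K_B * (temp_celsius + 273.15)
    weight = math.exp(-DELTA_E / kT)
    return weight / (1.0 + weight)

for t in (-100, -50, -7, 0):
    print(f"{t:5d} C  mobile fraction ~ {mobile_fraction(t):.2e}")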

Ultimately, the researchers conclude that the high mobility of the surface water molecules is responsible for the slipperiness of ice. They also determined that, while the surface mobility continues to increase all the way up to 0°C, this is not an ideal temperature for sliding on ice. Experiments showed that the friction is minimal at a temperature of -7°C, which is the exact same temperature used at speed skating rinks. So although this study – published in the Journal of Physical Chemistry Letters – is the first to uncover the hidden complexities of ice’s slippery nature, it seems like the sports world has known the ideal ice temperature all along.

By Connor Ertz, Earth.com Staff Writer

What happens when the sun dies?

Our sun is very gradually dying and eventually will no longer be able to support any life on Earth.

The good news is that the death and collapse of our sun is a long way off. Scientists predict that the sun will die in 10 billion years, and now a new study has revealed what will happen after this takes place.

An international team of astronomers conducted the research and the results were published in the journal Nature Astronomy.

When a star is in its last stages, it typically turns into a planetary nebula, which traces the star’s transition from a red giant to a white dwarf. This nebula is visible as a large ring of bright gas and dust where the star once was.

This is true for a majority of all active stars, but researchers were unsure if the sun would follow the same process because of its low mass.

For the new study, the researchers wanted to find a clear answer for whether or not the sun would have enough mass to create a visible ring or planetary nebula.

The researchers created a new model to predict the life-cycle of stars. The model showed the research team the luminosity of the gas and dust that is ejected from stars of varying masses and ages when they die.

“When a star dies it ejects a mass of gas and dust – known as its envelope – into space. The envelope can be as much as half the star’s mass,” said Albert Zijlstra from the University of Manchester, a member of the research team. “This reveals the star’s core, which by this point in the star’s life is running out of fuel, eventually turning off before finally dying.”

Zijlstra also said that the envelope can shine for 10,000 years and be seen from distances of tens of millions of light years.

Data from the model also sheds light on a problem that had confused astronomers for years.

It was discovered 25 years ago that you could measure the distance to a galaxy from its brightest planetary nebulae, because in any galaxy the brightest nebulae always reach roughly the same level of brightness.
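
To see how that yields a distance, the short Python sketch below applies the standard distance-modulus relation, assuming the brightest planetary nebulae in every galaxy share a common absolute magnitude; the cutoff value and the example apparent magnitude are assumptions chosen for illustration.

# Illustrative distance estimate with assumed values (not from the study):
# if the brightest planetary nebulae in every galaxy reach roughly the same
# absolute magnitude M, the distance d in parsecs follows from the distance
# modulus m - M = 5*log10(d) - 5.
M_CUTOFF = -4.5   # assumed common absolute magnitude of the brightest nebulae

def distance_parsecs(apparent_magnitude, absolute_magnitude=M_CUTOFF):
    """Distance in parsecs from the distance-modulus relation."""
    return 10 ** ((apparent_magnitude - absolute_magnitude + 5) / 5)

# Example: brightest planetary nebula observed at apparent magnitude 26.5
d_pc = distance_parsecs(26.5)
print(f"Estimated distance: {d_pc * 3.262 / 1e6:.0f} million light years")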

For years, however, stellar models contradicted the data behind this method.

“The data said you could get bright planetary nebulae from low mass stars like the sun, the models said that was not possible, anything less than about twice the mass of the sun would give a planetary nebula too faint to see,” said Zijlstra.

Even though the sun is low mass, the model still showed that it would produce a bright planetary nebula. After a star ejects its envelope, the researchers found that the star heats up three times faster than was previously calculated, which makes it possible for the sun to produce a bright ring.

However, the researchers found that the sun is at about the lowest mass a star can have and still produce a visible planetary nebula.

“We found that stars with mass less than 1.1 times the mass of the sun produce fainter nebula, and stars more massive than 3 solar masses brighter nebulae, but for the rest, the predicted brightness is very close to what had been observed. Problem solved, after 25 years!” said Zijlstra.

The new study shows what will happen after the sun dies and adds to our understanding of the life-cycle of stars.

“This is a nice result,” said Zijlstra. “Not only do we now have a way to measure the presence of stars of ages a few billion years in distant galaxies, which is a range that is remarkably difficult to measure, we even have found out what the sun will do when it dies!”

By Kay Vandette, Earth.com Staff Writer
