(PhysOrg.com) — Entropy can decrease, according to a new proposal – but the process would destroy any evidence of its existence, and erase any memory an observer might have of it. It sounds like the plot of a weird sci-fi movie, but the idea has recently been suggested by theoretical physicist Lorenzo Maccone, currently a visiting scientist at MIT, in an attempt to solve a longstanding paradox in physics.

A new theory suggests that we don’t observe phenomena where entropy decreases because all evidence of these processes is erased when correlations are removed from the system. Image credit: cguu.com.

The laws of physics, which describe everything from electricity to moving objects to energy conservation, are time-invariant: the laws still hold if time is reversed. However, this time-reversal symmetry is in direct contrast with everyday phenomena, where it’s obvious that time moves forward and not backward. For example, when milk is spilled, it can’t flow back up into the glass, and when pots are broken, their pieces can’t shatter back together. This irreversibility is formalized through the second law of thermodynamics, which says that entropy always increases or stays the same, but never decreases.

This contrast has created a reversibility paradox, also called Loschmidt’s paradox, which scientists have been trying to understand since Johann Loschmidt began considering the problem in 1876. Scientists have proposed many solutions to the conundrum, from trying to embed irreversibility in physical laws to postulating low-entropy initial states. Maccone’s idea, published in a recent issue of Physical Review Letters, is a completely new approach to the paradox, based on the assumption that quantum mechanics is valid at all scales.

Citation: Physicist Proposes Solution to Arrow-of-Time Paradox (2009, August 27) retrieved 18 August 2019 from https://phys.org/news/2009-08-physicist-solution-arrow-of-time-paradox.html
He theoretically shows that entropy can both increase and decrease, but that it must always increase for phenomena that leave a trail of information behind. Entropy can decrease for certain phenomena (when correlated with an observer), but these phenomena won’t leave any information of their having happened. For these situations, it’s as if the phenomena never happened at all, since they leave no evidence. As Maccone explains, the second law of thermodynamics is then reduced to a mere tautology: physics cannot study processes where entropy has decreased, due to a complete absence of information. The solution allows time-reversible phenomena to exist (in agreement with the laws of physics) but not be observable (in agreement with the second law of thermodynamics).

In his study, Maccone presents two thought experiments to illustrate this idea, followed by an analytical derivation. He describes two situations where entropy decreases and all records of it are permanently erased. In both scenarios, the entropy in the systems first increases and then decreases, but the decrease is accompanied by an erasure of any memory of its occurrence. The key to the entropy decrease in the first place is a correlation between the observer and the phenomenon in question. As Maccone explains, when an interaction occurs between an observer and an observed phenomenon that decreases the entropy of the correlated observer-observed system, the interaction must also reduce their quantum mutual information. When this information is destroyed, the observer’s memory is destroyed along with it.

In the first situation where entropy decreases, Maccone describes a scenario in which Bob sends Alice some energy in the form of light, initially in a zero-entropy state. Using detectors, Alice receives the light and observes her detectors warming up, revealing that heat has been lost and entropy is increasing in her isolated lab.
However, Bob can theoretically manipulate the situation by withdrawing the energy he has sent Alice, and then erasing all evidence of the energy’s existence – including erasing her memory and the notepads where she wrote the detectors’ temperatures. First, to recover the energy, Bob must return the energy to a zero-entropy state. He does this by erasing all correlations between the energy and Alice, and any other macroscopic systems in the lab. By erasing all initial correlations, Bob can enable the system to lose entropy. Although the act of decorrelating requires energy, Maccone explains that it doesn’t necessarily cause entropy to increase.

“Any physical transformation requires energy (no energy implies no time evolution, i.e. a static system),” he told PhysOrg.com. “This, however, doesn’t automatically imply that entropy is increased. Entropy increases when (part of) the energy employed becomes unusable as waste heat.

“Some energy is employed in the decorrelation transformation. Not only is such energy still available afterwards, but the decorrelation might also decrease the entropy in two systems, and that can ‘free’ some more energy that was previously unavailable (as it was locked up as heat).”

The second situation where entropy decreases involves a quantum measurement instead of a classical one. Here, Bob sends Alice a particle in a specific spin state. Alice performs a quantum measurement that consists of coupling the particle with a macroscopic reservoir, which increases the entropy of the system. But once again, Bob can theoretically manipulate the situation, this time by inverting the transformation of Alice’s measurement. This action decorrelates all records of Alice’s measurement results from the spin state. Although Alice remembers performing the experiment, she has no memory or evidence of what the measurement result was, and the spin is back in its initial zero-entropy state.
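The quantum mutual information at the heart of this argument is straightforward to compute for small systems. The following is a toy numerical sketch, not Maccone’s derivation: for an observer qubit maximally correlated with an observed qubit, the global state is pure (zero entropy) while each subsystem alone looks maximally mixed, and the mutual information quantifies exactly the correlations that a decorrelation step would have to erase. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy S(rho) = -Tr[rho log2 rho], in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep):
    """Reduced state of one qubit of a two-qubit density matrix; keep = 0 or 1."""
    rho4 = rho.reshape(2, 2, 2, 2)        # indices: (a, b, a', b')
    if keep == 0:
        return np.einsum('ijkj->ik', rho4)  # trace out qubit B
    return np.einsum('ijil->jl', rho4)      # trace out qubit A

# Maximally correlated (Bell) state of observer qubit A and observed qubit B
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ab = np.outer(psi, psi.conj())

S_ab = von_neumann_entropy(rho_ab)                    # 0: the global state is pure
S_a = von_neumann_entropy(partial_trace(rho_ab, 0))   # 1 bit: A alone looks mixed
S_b = von_neumann_entropy(partial_trace(rho_ab, 1))   # 1 bit
mutual_info = S_a + S_b - S_ab                        # 2 bits of quantum mutual information

# "Decorrelating" A and B (e.g. by inverting the interaction that correlated
# them) returns the joint state to a product of pure states: the mutual
# information and all local entropy are gone, with no record left in A.
rho_product = np.kron(np.diag([1.0, 0.0]), np.diag([1.0, 0.0]))
S_a_after = von_neumann_entropy(partial_trace(rho_product, 0))  # 0 bits
```

The point of the sketch is the bookkeeping: the observer’s subsystem carries one bit of entropy only by virtue of its correlations with the system, so any transformation that removes the mutual information also removes the local entropy and, with it, the record.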
Although theoretically possible, these situations in which entropy decreases would be very difficult to demonstrate experimentally, due to the difficulty of manipulating macroscopic correlations. That is why, for all practical purposes, these phenomena are unobservable by physics. Still, as Maccone explains, the theory is a straightforward application of quantum mechanics to macroscopic systems, and could potentially be verified.

“I think that if quantum coherence can be indubitably proven on a macroscopic observer (not necessarily a human being), then my approach would be verified,” he said. “An experiment of the sort of the second thought experiment, for example. The state of the art of experiments is quite far from anything of that sort. The biggest system where quantum coherence has been experimentally shown is, I think, some biological molecule composed of a few hundred atoms, by A. Zeilinger’s group in Vienna.”

The explanation may also provide insight into understanding entropy in the universe. The approach supports the idea that the universe may be in a state of zero entropy, even though it appears to us observers to have higher entropy. As Maccone explains, the universe is in a zero-entropy pure state because it cannot be entangled with any other system.

“My theory requires that the global state of the observer plus the environment be in a quantum pure state,” he said. “This means that it works only if we consider a system sufficiently large that it cannot be correlated with any other system. However, correlations (entanglement, in quantum systems) are very sticky; namely, systems get correlated very quickly even if they are very weakly interacting. This is why, when one considers macroscopic systems, the only safe choice is to consider the whole universe, which cannot be correlated with any other system, since, by definition, it comprises all physical systems.”

More information: Lorenzo Maccone.
“Quantum Solution to the Arrow-of-Time Dilemma.” Physical Review Letters 103, 080401 (2009).Copyright 2009 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com. This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
(PhysOrg.com) — America’s first commercial “TV White Spaces Network” was launched this week in Wilmington, New Hanover County, North Carolina. Wilmington, the first U.S. city to shift from analog to digital TV, was chosen as the site of the first commercial network because the city had early access to white spaces in that TV changeover and served as the test bed for the new technology. The city has been testing white space applications since 2010.

The network that went live makes use of technology that includes a Spectrum Bridge database along with 1.5-pound white space radios from KTS Wireless and cameras from other vendors. Florida-based Spectrum Bridge describes itself as a company with a software platform that manages available bandwidth in real time for licensed and unlicensed spectrum. Devices check the Spectrum Bridge database to prevent interference with local TV signals. Its platform was sanctioned by the FCC as the first TV White Spaces database. Florida-based KTS Wireless says it is a leader in developing the data radios used in the white space industry.

The KTS device is a small transmitter. According to details in DailyWireless, the white space device operates on all TV channels (174-216 MHz and 470-698 MHz) and in the unlicensed 900 MHz frequencies at data rates from 1.5 to 3.1 Mbps.

White space, sometimes used in the context of “Super Wi-Fi” and re-tagged by some as “SuperWhiteFi,” refers to the unused spectrum between TV stations that resulted from the 2008 transition from analog to digital TV broadcasting. The TV frequencies are lower, enabling signals to travel farther and penetrate foliage and walls better. The tradeoff for more range is less speed.
Nonetheless, city officials presiding over the Wilmington rollout see the better range as an important plus for delivering services. Cameras and wireless Internet access were installed at Wilmington’s city parks, where the white space spectrum allows wireless service to pass through trees and thick foliage. According to a press release, the network applications are designed to provide access for local functions such as video-security surveillance and transmitting data about water quality.

Observers say the Wilmington rollout is indicative of what is to come in a transformation of cities into “smart cities,” where the low-frequency spectrum enables broader access for delivering public services and for monitoring. The vision is that cities will be connected through integrated wireless networking technology to manage congestion, maximize energy efficiency, enhance public safety and provide key services.

Beyond deployment in cities, proponents hope white-space technologies will be deployed in rural areas and other places that standard wireless signals might not reach. While this week’s white-space news focused on Wilmington, reports are likely to grow about further developments in standards-based products and services for broadband capabilities via TV band spectrum. One sign is the appearance of an industry group called the WhiteSpace Alliance, which was formed in December to work on standards for white spaces. The WhiteSpace Alliance backs the new IEEE 802.22 standard.

Citation: North Carolina becomes home of White Spaces network (2012, January 29) retrieved 18 August 2019 from https://phys.org/news/2012-01-north-carolina-home-white-spaces.html
More information: www.whitespacealliance.org/ © 2011 PhysOrg.com
(PhysOrg.com) — A San Jose, California, startup company, Twin Creeks Technologies, says it has figured out a way to substantially cut the cost of making silicon solar cells. The company’s technology reduces both the amount of silicon needed and the cost of the manufacturing equipment. The company says it can produce solar cells for about 40 cents per watt, half the present-day price of the cheapest cells at 80 cents.

The Twin Creeks approach differs in a process that reduces the use of wire saws and related equipment as well as makes thinner wafers. Its production system for making the ultra-thin wafers is called “Hyperion.” The technology is described at Twin Creeks as “Proton Induced Exfoliation” (PIE). Crystalline silicon wafers, which account for the bulk of solar cells, are conventionally made by cutting blocks or cylinders of silicon into 200-micrometer-thick wafers. According to the company, the key to Hyperion is “thinness.” The company says its Hyperion system can produce ultra-thin wafers less than one-tenth the thickness of conventional silicon solar wafers. “Hyperion can fundamentally change the cost structure of many other industries that rely on high-cost, single-crystal wafers for their devices.”

To prove its points, the company has demonstrated the technology at a 25-megawatt-per-year solar-cell factory that it built in Senatobia, Mississippi. The solar factory was built through loans from the state of Mississippi, venture capital, and other sources. The plant is being used as a site where Twin Creeks and its customers can fine-tune processes for generating ultra-thin solar modules and wafers with Hyperion. The plant, though currently capable of producing 25 megawatts of solar cells a year, will be expanded to a capacity of 100 megawatts, according to plans.

The Twin Creeks business plan is to sell the manufacturing equipment rather than produce solar cells itself. The selling point is that, with thin wafers, manufacturers can profitably produce solar cells and other devices below today’s best-in-class cost structure, according to the company. Twin Creeks estimates that Hyperion will permit manufacturers to produce solar cells for under 40 cents a watt in commercial-scale volume production facilities, with prices going down over time.

In the bigger picture, the company’s vision is to make a real difference in the costs of generating solar power. Silicon, notes Twin Creeks, is still the most expensive component of a finished solar module and the single highest expense when it comes to generating solar power. The startup’s goal is, according to the company’s website, “to disruptively reduce the cost of solar energy to achieve grid parity.”

Citation: Cost-cutting drives solar cell process at Twin Creeks (2012, March 15) retrieved 18 August 2019 from https://phys.org/news/2012-03-cost-cutting-solar-cell-twin-creeks.html More information: www.twincreekstechnologies.com © 2011 PhysOrg.com
Tao, in attacking the problem, proved something else: little comments mean a lot. Cesare reported that a blog comment was one stepping stone that led Tao to solve this longstanding number theory problem. The article referred to a comment on his blog that alerted him to its potential relationship with the Erdős conjecture. Tao realized that combining the fresh insight with previous results could lead to a solution, said Nature. Tao included an acknowledgment thanking the commenter, Uwe Stroinski, who received a PhD in mathematics from the University of Tübingen. New Scientist noted that he combined recent results in number theory with some earlier crowdsourced work.

Tao submitted his paper to the arXiv preprint server earlier this month, presenting a proof of the Erdős discrepancy problem (a puzzle about the properties of an infinite, random sequence of +1s and -1s, said New Scientist). The paper claims to prove what Paul Erdős posed in the 1930s, offering $500 for an answer. Erdős, born in Budapest, is considered one of the most prolific mathematicians of the 20th century. Both of Erdős’s parents were high school mathematics teachers. Much of his family died in Budapest during the Holocaust.

Cesare said the mathematical puzzle had resisted an answer for 80 years. Even computerized attempts failed. Then came Tao. In 2006, he was one of four mathematicians to receive the Fields Medal, a prize in mathematics often likened to the Nobel Prize, at the International Congress of Mathematicians in Madrid. Tao was among the recipients because he developed techniques that can simplify equations describing general relativity, along with those describing the quantum mechanics governing the way light moves in a fiber optic cable.
Citation: Blog comment, collab help man attack old maths problem (2015, September 26) retrieved 18 August 2019 from https://phys.org/news/2015-09-blog-comment-collab-maths-problem.html More information: The Erdős discrepancy problem, arXiv:1509.05363 [math.CO] arxiv.org/abs/1509.05363 © 2015 Phys.org

As for the recent news, what exactly is the Erdős discrepancy problem? Jacob Aron provided a brief introduction last year in New Scientist: “Imagine a random, infinite sequence of numbers containing nothing but +1s and -1s. Erdős was fascinated by the extent to which such sequences contain internal patterns. One way to measure that is to cut the infinite sequence off at a certain point, and then create finite sub-sequences within that part of the sequence, such as considering only every third number or every fourth. Adding up the numbers in a sub-sequence gives a figure called the discrepancy, which acts as a measure of the structure of the sub-sequence and in turn the infinite sequence, as compared with a uniform ideal. Erdős thought that for any infinite sequence, it would always be possible to find a finite sub-sequence summing to a number larger than any you choose – but couldn’t prove it.”

The article in Nature, too, discussed what Erdős had in mind. Erdős, said Cesare, “speculated that any infinite string of the numbers 1 and -1 could add up to an arbitrarily large (positive or negative) value by counting only the numbers at a fixed interval for a finite number of steps. The task is intuitively easy for some arrangements—tallying digits at any interval in a sequence that is all 1s will add up to a big number. And in an alternating sequence of 1s and -1s, choosing every second digit will do the job. But Erdős conjectured that it was true for any such sequence.” Tao proved him right.

Aside from the pleasure of seeing this proven, the take-home for New Scientist also appeared to be that this was a human win over computers.
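The discrepancy described above is easy to compute for any finite ±1 sequence. The sketch below (function and variable names are mine, purely illustrative) checks every spacing d and cutoff k; for an alternating sequence, the spacing d = 2 picks out only +1s, exactly the “every second digit” example in Cesare’s quote:

```python
def discrepancy(seq):
    """Max |x_d + x_{2d} + ... + x_{kd}| over all spacings d and cutoffs k.

    seq is a list of +1/-1 values; seq[0] plays the role of x_1.
    """
    n = len(seq)
    best = 0
    for d in range(1, n + 1):
        partial = 0
        for i in range(d, n + 1, d):      # visit x_d, x_{2d}, x_{3d}, ...
            partial += seq[i - 1]
            best = max(best, abs(partial))
    return best

# An alternating sequence looks balanced, but spacing d = 2 picks out
# only the +1 entries (x_2, x_4, ...), so the discrepancy grows linearly
# with the length of the sequence:
alternating = [(-1) ** i for i in range(1, 41)]   # -1, +1, -1, +1, ...
print(discrepancy(alternating))                   # 20: every second term is +1
```

Tao’s theorem says this quantity is unbounded for every infinite ±1 sequence, not just for easy cases like this one; the brute-force check above only illustrates the definition, since no finite computation can settle the infinite claim.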
“Crowds beat computers in answer to Wikipedia-sized maths problem,” said the headline on Friday. Tao was better at this than a computer. New Scientist reminded readers that the maths problem previously tackled with the help of a computer produced “a proof the size of Wikipedia.” (Last year, Alexei Lisitsa and Boris Konev used a computer to prove that the discrepancy will always be larger than two. The resulting proof was a 13-gigabyte file – around the size of the entire text of Wikipedia.)

Lisitsa praised Tao nonetheless. “It is typical example of high-class human mathematics,” he said in New Scientist. Still, he added, while computers may not be required to solve this problem, they may be useful in other problems.

Chris Cesare in Nature reported on Friday that Terence Tao successfully attacked the Erdős discrepancy problem by building on an online collaboration. Tao is a professor in the math department at UCLA. He works primarily in harmonic analysis, PDE, geometric combinatorics, arithmetic combinatorics, analytic number theory, compressed sensing, and algebraic combinatorics.

Journal information: Nature
Citation: Scientists determine the structure of Titan’s evaporites (2016, January 7) retrieved 18 August 2019 from https://phys.org/news/2016-01-scientists-titan-evaporites.html

Cordier and his colleagues have computed the possible vertical structure of evaporite deposits. Their simulations confirm that butane and acetylene are good candidates for species that could compose the surface of evaporites. They found that a couple of compounds could form a thick external layer; due to the combination of the existence of two crystallographic phases and of the rather thick layer, this external butane-acetylene-enriched layer could explain the radar brightness of evaporites. The researchers have also shown in the study that the seasonal cycle may offer a mechanism that leads to a growth of evaporite thickness limited only by the atmospheric production of organics. Their calculations suggest that Titan’s ethane-enriched south pole lake, Ontario Lacus, could have trapped a large quantity of solutes, explaining the lack of evaporites in the south polar regions of this moon. The models also confirm the possibility of the formation of “bathtub rings” showing a complex chemical composition and the possible existence of trimodal “bathtub ring” compositions when the evaporation is completed.

Cordier underlined the need for a future space mission to Titan involving a lander focused partly on the exploration of the lakes’ shores, where the chemical diversity is clearly high. Sending a spacecraft could advance our knowledge about Titan’s evaporites and its interesting geological features. “The perfect probe to explore these terrains would be an amphibian rover equipped with a drill. This kind of robot could allow the exploration of both the evaporitic terrains around a partially filled lake and the liquid phase itself. But this kind of project belongs to the future,” Cordier concluded.
The NASA/ESA Cassini spacecraft, studying Saturn and its moons since 2004, is an invaluable source of important information about Titan and its geological features. The probe’s on-board radar instrument investigates the surface of Titan by taking four types of observations: imaging, altimetry, backscatter, and radiometry. These observations are crucial to understanding the geology of this hazy, cold moon and the topography of its solid surface.

The models—developed by a team of scientists led by Daniel Cordier of the University of Reims Champagne-Ardenne, France—could be the most important tools for investigating geologic formations on Titan. They confirm the possibility of an acetylene- or butane-enriched central layer of evaporitic deposit. The computations indicate that an accumulation of poorly soluble species like hydrogen cyanide or aerosols similar to tholins could play a dominant role. The models also predict the existence of chemically trimodal “bathtub rings.” Moreover, the research offers an explanation for the lack of evaporites in the south polar region of Titan and for the high radar reflectivity of dry lakebeds.

“The existence of a hydrologic cycle based on methane and the presence of other, more complex, hydrocarbons open the possibility of the occurrence of evaporite formation at the surface of Titan,” Cordier told Phys.org.

Titan hosts lakes and seas likely filled by liquid hydrocarbons containing some amount of dissolved atmospheric nitrogen and various organic compounds. The formation of any evaporite layer requires a sequence of wet and dry periods. During the wet episode, methane and ethane rains dissolve the solid organics encountered along their runoff at the ground, and finally fill the lacustrine depressions.
The dry period produces the evaporation of the solvents and the formation of evaporites. The sequence of dry and wet periods can occur within a single Titan year if driven by seasonal effects. Alternatively, the formation of evaporites, described in previous papers by Jason Barnes and Shannon MacKenzie, could be the consequence of climate change over much longer timescales.

“Very little is known about these structures. First of all, due to the thick atmosphere, it is very difficult to derive information about Titan’s surface composition. Jason Barnes and Shannon MacKenzie have found correlations between zones poor in water ice and dry lakebeds detected by the Cassini radar. Of course, the existence of a hydrologic cycle based on methane and the presence of other, more complex, hydrocarbons open the possibility of the occurrence of evaporite formation at the surface of Titan,” Cordier explained.

© 2016 Phys.org

More information: Structure of Titan’s evaporites, arXiv:1512.07294 [astro-ph.EP] arxiv.org/abs/1512.07294

Abstract: Numerous geological features that could be evaporitic in origin have been identified on the surface of Titan. Although they seem to be water-ice poor, their main properties (chemical composition, thickness, stratification) are essentially unknown. In this paper, which follows on a previous one focusing on the surface composition (Cordier et al., 2013), we provide some answers to these questions derived from a new model. This model, based on the up-to-date thermodynamic theory known as “PC-SAFT”, has been validated with available laboratory measurements and specifically developed for our purpose. 1-D models confirm the possibility of an acetylene- and/or butane-enriched central layer of evaporitic deposit. The estimated thickness of this acetylene-butane layer could explain the strong RADAR brightness of the evaporites. The 2-D computations indicate an accumulation of poorly soluble species at the deposit’s margin.
Among these species, HCN or aerosols similar to tholins could play a dominant role. Our model predicts the existence of chemically trimodal “bathtub rings,” which is consistent with what is observed at the south polar lake Ontario Lacus. This work also provides plausible explanations for the lack of evaporites in the south polar region and for the high radar reflectivity of dry lakebeds.

Cassini VIMS/RADAR hybrid image of filled and dry lakes south of Titan’s methane sea Ligeia Mare. The brightness of the image is determined by synthetic aperture radar, which indicates roughness; the colors, from Cassini’s Visual and Infrared Mapping Spectrometer, indicate composition. Some of the small lakes in the image are filled (cyan arrows). Other lakes show lacustrine morphology but no evidence of liquids. Some of those dry lakes have the same composition as the surrounding terrain, but others show evaporites in bright orange. Image credit: NASA/JPL/UA

(Phys.org)—Titan, the largest moon of Saturn, hosts many interesting geological features that could be evaporites. While the chemical compositions of these sediments have been studied by scientists, their structure remains a puzzle. Now, an international team of researchers using available laboratory measurements has created models to explore the structure of evaporites on Titan. A paper detailing their study appeared online on Dec. 22 on the arXiv preprint server.
Chondrule evidence suggests ancient low-velocity collisions between rocky planetesimals and icy bodies
An optical image showing magnetite-bearing chondrules studied in this work. Credit: Yves Marrocchi

(Phys.org)—A small team of researchers with members from institutions in France and Japan has found evidence in chondrules suggesting that they formed in collisions between planetesimals from the inner part of the solar system and icy bodies on the periphery, approximately four and a half billion years ago. In their paper published in the journal Science Advances, the team describes their work with two previously found meteorites and their study of the chondrules within them.

A picture of the thin section of the Vigarano meteorite that was used in this study. Credit: Yves Marrocchi

Journal information: Science Advances

Space scientists have been studying meteorites for many years as part of their attempt to understand the nature of the universe. One type, chondrites, holds glassy-looking blobs of material in their interior, called chondrules, which are believed to come into existence when molten droplets of different types of material cool. Such objects are also believed to belong to a class of some of the oldest known materials in our solar system.

The team built a model to better understand how such collisions could occur and, after studying their results, theorizes that such collisions could have come about due to the presence of a gas giant, such as Saturn or Jupiter—as it formed, it could have caused other objects to be jerked around, resulting in some being flung great distances. The findings suggest that not all chondrules were made from primitive dust present in the early solar system disk—some may have come about due to collisions. The team plans to continue studying chondrules, looking for evidence of whether collisions are the main process driving their formation.
In this new effort, the researchers focused on two chondrites named Kaba and Vigarano. Taking a close look at the chondrules within, the researchers found that they contained sulfide-associated magnetites of magmatic origin, known as SAMs, something that had never been seen in a chondrule before. This, and their unique shape, the researchers contend, suggests that they could have been created only under oxidizing conditions, and the only viable scenario in the early solar system is one where rocky planetesimals left the inner part of the solar system and ventured to its outer reaches, where they eventually collided with icy bodies. Such collisions, the team notes, would have been relatively low-velocity, because olivines were still present—a high-speed collision would have caused such material to vaporize.

© 2016 Phys.org

More information: Y. Marrocchi et al. Early scattering of the solar protoplanetary disk recorded in meteoritic chondrules, Science Advances (2016). DOI: 10.1126/sciadv.1601001

Abstract: Meteoritic chondrules are submillimeter spherules representing the major constituent of nondifferentiated planetesimals formed in the solar protoplanetary disk. The link between the dynamics of the disk and the origin of chondrules remains enigmatic. Collisions between planetesimals formed at different heliocentric distances were frequent early in the evolution of the disk. We show that the presence, in some chondrules, of previously unrecognized magnetites of magmatic origin implies the formation of these chondrules under impact-generated oxidizing conditions. The three-oxygen-isotope systematics of magmatic magnetites and silicates can only be explained by invoking an impact between silicate-rich and ice-rich planetesimals. This suggests that these peculiar chondrules are by-products of the early mixing in the disk of populations of planetesimals from the inner and outer solar system.
Citation: Chondrule evidence suggests ancient low-velocity collisions between rocky planetesimals and icy bodies (2016, July 4) retrieved 18 August 2019 from https://phys.org/news/2016-07-chondrule-evidence-ancient-low-velocity-collisions.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
Observations conducted with NASA's Swift space telescope have provided more insights about the nature of a compact component of the transient low-mass X-ray binary named MAXI J1957+032. Results of these observations, available in a paper published April 1 on arXiv.org, suggest that the system hosts a neutron star.

Generally, X-ray binaries are composed of a normal star or a white dwarf transferring mass onto a compact neutron star or a black hole. Based on the mass of the companion star, astronomers divide them into low-mass X-ray binaries (LMXBs) and high-mass X-ray binaries (HMXBs). In the case of MAXI J1957+032 (other designation IGR J19566+032), the nature of its components is still debated, and some studies even propose that it is a triple star system. Previous observations have shown that MAXI J1957+032 is located in our Milky Way galaxy, with uncertain distance estimates (from 6,500 to 26,000 light years from Earth), and classify it as a transient faint low-mass X-ray binary, given that it sporadically experiences outbursts.

MAXI J1957+032 was observed by the Swift telescope during its outbursts in 2015 and 2016. According to a team of astronomers led by Aru Beri of the Indian Institute of Science Education and Research (IISER) in India, these observations, complemented by data from the Monitor of All-sky X-ray Image (MAXI) instrument on the International Space Station, could answer the question of whether the compact object in this system is a black hole or a neutron star. "In this paper we present the evolution of all the outbursts observed with MAXI and Swift, and we used the results obtained by Wijnands et al. (2015) to try to get more insights into the nature of MAXI J1957+032," the researchers wrote in the paper.
Studying outbursts in MAXI J1957+032

Beri's team found that the spectra soften as the luminosity increases, noting also that the observed value of the power-law index is at a level of about 2.5 near the end of the outbursts. In particular, they found that while the power-law index generally increases with time, the 0.5–10 keV absorbed flux decreases, which clearly shows an anti-correlation between the power-law index and the observed flux.

The scientists added that the measured value of the power-law index, when compared with other LMXBs hosting a neutron star, suggests the presence of such a star in MAXI J1957+032, as in other faint systems of this type this value can be as high as 3.0. Furthermore, the neutron star scenario is also supported by the observed thermal emission in the X-ray spectra of MAXI J1957+032. According to the paper, the obtained values of the blackbody radius are very similar to those found in other systems proposed to contain neutron stars, or in confirmed systems hosting such objects.

All in all, the astronomers concluded that, assuming the distance to MAXI J1957+032 to be around 13,000 light years, the results of the study suggest that the system indeed harbors a neutron star. However, they underlined that the currently available data do not allow them to conclusively state whether the system contains a neutron star or a black hole.

The four outbursts observed in MAXI J1957+032. Credit: Beri et al., 2019.

More information: Aru Beri et al. Unveiling the nature of compact object in the LMXB MAXI J1957+032 using Swift-XRT. arXiv:1904.00914 [astro-ph.HE]. arxiv.org/abs/1904.00914

Citation: MAXI J1957+032 contains a neutron star, Swift observations suggest (2019, April 11) retrieved 18 August 2019 from https://phys.org/news/2019-04-maxi-j1957032-neutron-star-swift.html © 2019 Science X Network This document is subject to copyright.
Jennifer Eberhardt is a MacArthur "genius grant" winner and psychology professor at Stanford University who studies implicit bias. TIME spoke with her about her new book, Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, as well as her research, her work with police departments and how implicit bias can affect us all. Read the whole story: TIME
Jyoti Sharma's Bhanuni was based on the theme of the regalia of a modern princess: the journey of a royal figure that has evolved over the years in heavy brocades, velvet, exquisitely detailed embroideries and gold. Yes, the boldness of gold on the navy and the black. From ghagras to dhoti-peplums and short dresses heavy with thread details, Bhanuni celebrated the splendour of the fabric. Ornate and gorgeous as a whole, the individual pieces were not much to celebrate. Played it safe – that's our verdict.
Daksh is a Delhi band that has been making all the right noises. It has Kavish Mishra on lead vocals, Mohit Vig on lead guitar, Kartik Dev on rhythm guitar, Rishabh Dev on violin, Neeraj Srivastava on drums, Partho on bass, Gaurav Sharma on keyboard and Kanoo Sahajwala on tabla. Millennium Post caught up with the band. Excerpts from the interview.

What's the story of your band?
The name Daksh was given by Kavish Mishra. He is the founder and was instrumental in putting the band together. Daksh is Hindi for dexterous; it also means perfection. We always try to compose or play music that is perfect. We started out in 2007. We're all Delhi-based music professionals. We are not like other school bands out there who play music as an extracurricular activity, or for monetary interest or popularity. We all wanted to carve a niche for ourselves as a professional music band. That's the story behind Daksh. What brought us together, and kept us together unlike a lot of bands which fall apart, is the fact that we are all from different backgrounds and musical sensibilities, but the one thing common to all of us is a strong sense of Indian classical music.

What are your influences?
We are inspired by AR Rahman and his compositions. Various genres of western music get fused in a seamless manner with Hindustani music. And look at where he's taken Indian music now.

Any bands you follow, or any particular genre you swear by?
The bands we follow are Nirvana, Rahman and Pink Floyd. Our genre is Hindi-rock, but we also play Sufi.

Tell us about your music.
Up till now we have five original compositions of our own. All of these have been written, composed and played by us. Two of these originals also have videos that have been shot to support the songs. One is Hai Dua, and the other song with a music video is Din Dooba. Our other original compositions are Do Pal and Tu Hi.
We have also composed an instrumental track, influenced by Rahman, that goes by the name Tank.

Where do you see this going five years down the line?
Five years from now we see Daksh, as an entity and individually, marking our presence in the Hindi film music industry. Our aim is to see Daksh topping the Indian music charts.

DETAIL
At: Life Caffe, B-49, Inner Circle, Connaught Place
When: 2 November
Timings: 8.30 pm to 11.30 pm
Phone: 43652222