Saturday 30 November 2019

Variable magma and carbon fluxes at mid-ocean ridges by David Rees Jones

21/11/2019
Eyjafjallajökull volcano in Iceland

Variable magma and carbon fluxes at mid-ocean ridges and on Iceland between glacial and interglacial cycles by David Rees Jones

Abstract:

David Rees Jones and his team looked at the influence of glacial and inter-glacial cycles on melt production and composition at mid-ocean ridges. As sea-level varies with the cycles, the height of the water column above a mid-ocean ridge varies too, leading to small changes in confining pressure. Because the magma is formed by decompression melting, the depth and degree of melting are affected by these pressure variations and therefore respond to the glacial cycles. This study tried to quantify the effect: it succeeded for long glacial periods but still overestimated it for shorter ones.

Main:

Mid-ocean ridge volcanism has often been looked at both as a driver of plate tectonics and as a response to plate divergence. Many previous studies have sought to understand the processes involved in the formation of magma at mid-ocean ridges and its composition. Glacial and inter-glacial cycles are thought, along with other factors, to be a consequence of variations in volcanic activity. The study conducted by David Rees Jones and his team, composed of Nestor Cerpa, Rich Katz and John Rudge, looks at the influence of glacial and inter-glacial cycles on the formation and composition of the melt, which brings in a new feedback perspective.



David and his team already had previous experience in geodynamics. They had worked on fluid-mechanics-related problems such as magnetic processes in the Earth's outer core and the dynamics of glacial cycles. This gave them the background necessary to tackle the question: "Do glacial and inter-glacial cycles have an influence on magma production and composition at mid-ocean ridges?"

First, let's look at the basics of mid-ocean ridge magma formation. The main process creating melt at mid-ocean ridges is decompression melting: the asthenosphere is hot enough to melt, but at depth the confining pressure keeps it solid. As it rises towards the surface by convection, the rock column above it thins until the confining pressure drops enough to allow melt to form. Through this process the fertile asthenosphere (lherzolite) melts by between 5% and 20%, leaving depleted mantle rock (harzburgite) behind. Because the melting is only partial, elements partition between the solid phase (rock) and the liquid phase (melt), and this partitioning depends on the melt fraction.
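This partitioning can be made concrete with the standard batch-melting equation, where the melt concentration is C_l = C_0 / (D + F(1 − D)). A minimal sketch; the partition coefficient used below is an illustrative value, not a number from the talk:

```python
def batch_melt_liquid_conc(c0, F, D):
    """Trace-element concentration in the melt for batch melting.

    c0: concentration in the unmelted source
    F:  melt fraction (0-1)
    D:  bulk solid/liquid partition coefficient
    """
    return c0 / (D + F * (1.0 - D))

# For a highly incompatible element (D << 1, such as carbon),
# low melt fractions concentrate the element strongly in the melt:
for F in (0.05, 0.10, 0.20):
    print(F, batch_melt_liquid_conc(1.0, F, 0.001))
```

Note how halving the melt fraction roughly doubles the enrichment when D is tiny, which is why small changes in melt fraction matter for the carbon budget.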

Next, the pattern of the climate cycles needs to be understood. For the last ~2 Myr there is a good record of glacial and inter-glacial cycles from sea-surface temperature proxies (Lisiecki & Raymo 2005; Clark et al. 2006*). The cycles alternate between a rapid warming into an inter-glacial period and a slower cooling back into a glacial period. On average the cycles are 41 kyr long before the Mid-Pleistocene transition, after which they switch to 100 kyr cycles. These are thought to be linked to Milankovitch cycles: the ~41 kyr period corresponds to obliquity, while in the ~100 kyr cycles eccentricity could have the most influence. But what could be the link with mid-ocean ridge volcanism?
Variations of δ18O from benthic foraminifera as a proxy for sea-surface temperature during the last 3 Myr (from Clark et al. 2006). Up to ~1.5 Myr ago the cycles last ~41 kyr, but after the Mid-Pleistocene transition they last ~100 kyr, most likely due to the changing relative influence of the Milankovitch cycles.
Between ice ages the mean sea-level can vary by around 100 m! This changes the water column lying above the mid-ocean ridge, which in turn changes the pressure profile where decompression melting occurs. During glacial periods the sea-level is lower, so the pressure is smaller, which means that decompression melting begins deeper, marginally increasing the melt fraction; the reverse happens during an inter-glacial period. This impact on magma formation at the mid-ocean ridge creates a feedback on climate.
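To get a feel for the size of this effect, the pressure change can be estimated from the weight of the extra water column. A back-of-envelope sketch; the mantle density below is an assumed typical value, not a figure from the talk:

```python
RHO_WATER = 1000.0   # kg/m^3, seawater (approx.)
RHO_MANTLE = 3300.0  # kg/m^3, typical upper-mantle value (assumed)
G = 9.81             # m/s^2

def pressure_change(dh_sea_level):
    """Change in confining pressure (Pa) from a sea-level change dh (m)."""
    return RHO_WATER * G * dh_sea_level

def melting_depth_shift(dh_sea_level):
    """Approximate shift (m) of the depth where melting begins:
    the water-column pressure change expressed as an equivalent
    thickness of mantle rock."""
    return pressure_change(dh_sea_level) / (RHO_MANTLE * G)

print(pressure_change(100.0) / 1e6)  # ~1 MPa for a 100 m sea-level change
print(melting_depth_shift(100.0))    # ~30 m shift of the melting depth
```

A ~1 MPa pressure change is small compared to the total overburden, which is why the effect on melt production is marginal rather than dramatic.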
Melting by decompression only (without water lowering the melting temperature) is referred to as "dry melting". During dry melting, carbon becomes concentrated in the melt because it is incompatible. As a result, the volcanic gases emanating from the magma are highly concentrated in CO2, which plays a direct role in climate regulation as a well-known greenhouse gas.
By looking at the variations in the CO2 content of the melts, David and his team found a 10- to 15-year lag between the sea-level change and the response of the decompression-melting depth.
They built a model to quantify the variation in melt production caused by sea-level change. The model seemed accurate over long periods, but for shorter ones it tended to overestimate the melt production, by up to a factor of two. It therefore sets an upper bound on the melt-production variation.

In Iceland the same processes apply, but the parameters differ: the change in the water column now has less influence, as the ridge is above sea-level, while the ice sheets add an extra load to take into account. An additional complexity is the input of the hot spot below Iceland. It therefore remains difficult to quantify the influence of the glacial and inter-glacial cycles on the melt production of Iceland.

References and further readings:

*A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records, by Lorraine E. Lisiecki and Maureen E. Raymo (2005)
The Middle Pleistocene transition: characteristics, mechanisms, and implications for long-term changes in atmospheric pCO2, by Peter U. Clark et al. (2006)

Saturday 19 October 2019

Fate and transport of nanotechnology in groundwater by Ian Molnar

10/10/2019
A crystal-clear lake in Canada suffering from sulfur dioxide poisoning

Fate and transport of nanotechnology in groundwater by Ian Molnar

In short:

Ian Molnar is a new lecturer at Edinburgh University, from Canada, with a background in civil engineering. He is a contaminant hydrogeologist currently researching the transport of nanoparticles in groundwater. His study addresses a shortcoming of the currently used Happel sphere model of nanoparticle transport: his experimental work found that 40% of the nanoparticle mass is missed by this model, leading to an overestimation of the filtering capacity of soil and sand. He hopes the experiment will feed into the development of a new model of nanoparticle flow, which is currently underway.

Main:

Ian Molnar is a Canadian hydrogeologist with a background in civil engineering. He joined Edinburgh University as a lecturer this summer and is thus the third Ian in the department. While still in Canada, his research as a contaminant hydrogeologist included the study of the crystal-clear lakes north of Lake Huron. There, some lakes were crystal clear due to a lack of biological activity caused by sulfur dioxide leaching from nearby industry. In the quartzite regions the lakes were dangerously high in sulfur dioxide and their pH dropped to values as low as 5, whereas in the pink granite regions the buffering capacity was higher, so the lakes were less affected. But now Ian Molnar is focusing on the transport of nanoparticles in groundwater.



Modelling the flow and capture of nanoparticles in groundwater matters for multiple applications, such as modelling and tracing contaminants in groundwater or designing water-treatment installations. Nanoparticles are defined as any particle with at least one dimension smaller than 100 nm. These particles are abundant in our modern world: for example titanium dioxide (a strong UV absorber), carbon nanotubes, silver nanoparticles (antimicrobial), drug carriers and so forth. The production of nanoparticles is estimated at around 320,000 tons/year (Keller 2013), which raises the question of where they end up. Most go into landfill (189,000 t/y), soil (51,600 t/y) and groundwater (69,200 t/y), with the rest in surface water.

With such a mass of contaminant, it is necessary to be able to model the transport at a regional scale. Preliminary bench-scale experiments were done on a soil column, and the measured particle flow was then compared with the models in current use. The result was that the models overpredicted the filtering potential for nanoparticles, which can be dangerous: water-treatment plants, for example, could unknowingly release contaminated water. So Ian Molnar set out to improve the model, as Colloid Filtration Theory works well for micron-scale colloids but not for nanoparticles.

The base model builds on the Happel sphere-in-cell model. It represents the pore space as a single grain surrounded by a thin water shell (its thickness proportional to the pore space). The model then calculates the probability that a particle injected on one side collides with the grain and remains attached to it, rather than simply flowing through the water shell to the other side.
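That per-grain probability is usually folded into the classical clean-bed Colloid Filtration Theory breakthrough equation for a whole column. A hedged sketch; the column parameters below are purely illustrative, not values from the talk:

```python
import math

def breakthrough_fraction(L, d_c, porosity, alpha, eta0):
    """Clean-bed Colloid Filtration Theory: fraction of injected
    particles that exits a packed column.

    L:        column length (m)
    d_c:      collector (grain) diameter (m)
    porosity: bed porosity (0-1)
    alpha:    attachment efficiency (sticking probability, 0-1)
    eta0:     single-collector contact efficiency
              (this is what the Happel model predicts)
    """
    k = 1.5 * (1.0 - porosity) / d_c * alpha * eta0
    return math.exp(-k * L)

# Illustrative numbers: a 10 cm sand column, 0.3 mm grains,
# porosity 0.37, alpha = 0.5, eta0 = 0.01.
print(breakthrough_fraction(0.1, 3e-4, 0.37, 0.5, 1e-2))  # ~0.21
```

If the Happel model overpredicts eta0 for nanoparticles, the predicted breakthrough fraction is too low, i.e. the filter looks better than it really is, which is exactly the failure mode described above.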

In order to correct this model for nanoparticles, Ian and his team wanted to image the flow of nanoparticles during the soil-column experiment by X-ray tomography. The resolution of X-ray tomography is about 10 µm, far too coarse to image 100 nm particles directly. To solve this problem they used absorption-edge imaging, which involves difference imaging at energies on either side of the K-edge of the particles. This technique yields a 3D map of nanoparticle concentration.
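The core of the trick is simple: the sand and water matrix absorbs X-rays almost identically just above and just below the particles' K-edge, while the particles' absorption jumps across it, so subtracting the two tomograms isolates the particle signal. A toy 1D sketch (the calibration factor, which would come from measured standards, is assumed here):

```python
def nanoparticle_map(above, below, calibration):
    """Absorption-edge difference imaging (sketch).

    above, below: attenuation profiles measured just above and just
    below the K-edge energy of the nanoparticle element.
    calibration:  factor converting the attenuation difference into
    a concentration (assumed known from standards).
    """
    return [max(a - b, 0.0) * calibration for a, b in zip(above, below)]

# Toy profile through the column: the matrix absorbs 1.0 everywhere;
# nanoparticles add 0.5 above the edge in two voxels only.
below = [1.0] * 6
above = [1.0, 1.0, 1.5, 1.5, 1.0, 1.0]
profile = nanoparticle_map(above, below, calibration=2.0)
print(profile)  # [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
```

The matrix signal cancels exactly, leaving concentration only where the particles sit, even though no individual 100 nm particle is resolved.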

To test the Happel sphere model, they measured the concentration of nanoparticles around grains down to 15 µm from the grain surface (no closer, due to refraction problems). They found that 40% of the nanoparticle mass sits outside the Happel fluid envelope. Since this mass lies outside the fluid envelope, it does not interact as predicted by the model, leading to the overestimation.

The design and testing of a new model are underway, and this study should feed into it.

Saturday 12 October 2019

What is Gemology? by Anu Manchanda

03/10/2019

What is gemology? by Anu Manchanda

From the Gemological Institute of America (GIA), Anu Manchanda came to present what gemology is. The GIA's aim is to promote knowledge about gems among companies, customers, graders, mines and sellers. Her short description of gemology is that it blends several things: the study of gemstones, as applied crystallography; business, through industry, sales and markets; art, in the jewellery aspect; and finally passion. She then described what being a gemologist actually entails.



Gemology is used by a range of professionals: retail sales professionals, appraisers, jewelry buyers, wholesalers, designers, manufacturers, trade journalists and research laboratories working on gemstone treatment or synthetic manufacture. The production of synthetic gemstones is now very important in multiple fields, ranging from industrial abrasives to scientific-grade equipment.
At the GIA, gemstones are separated into diamonds and coloured gemstones, as diamonds are regulated in a specific fashion. The GIA is involved with the trade market, gemstone identification, diamond and coloured-gemstone grading, the basic optical and physical science underlying gemology and, finally, sales techniques. She then described the steps they go through when handed a gemstone.
First, the gemstone needs to be identified. The best first tool is the naked eye: with the right expertise, ~50% of the work is already done, a prime skill for field and exploration gemologists. Similar-looking gemstones are then differentiated by their refractive index (RI), which is distinct for each crystal, and finally examined under the microscope. These techniques make it possible to distinguish look-alikes and synthetics, which is especially important in the diamond trade.
Gemstones sent in for commercial purposes may have been treated to overcome their shortcomings: breaks or cracks can be filled with material of the same RI as the crystal, or laser drilling can be used to remove impurities.
Another important aspect is grading. For coloured gemstones, colour, alongside purity, is a defining factor: it is ranked by tone, hue and saturation, which greatly influence the final grade. The colours of coloured gemstones are directly linked to their relative concentrations of impurities, making certain tints very rare and thus very valuable.
For diamonds, being transparent, other criteria are used for grading, notably the four C's: colour, clarity, cut and carat. Depending on their nitrogen content, diamonds can be slightly tinted; a slight tint usually lowers the value, whereas a fully transparent or very deeply coloured stone raises the price. Clarity measures the proportion, shape and relief of the inclusions, focusing on the optical aspect. The cut is the main influencing parameter, and the carat, finally, measures the weight of the gem. The mine of origin can also have an impact on the price.
The exploitation of gemstones raises multiple questions: the advantages and disadvantages of industrial gems, the ethics of mining, the environmental impact, and the steps taken, or yet to be taken, to tackle those problems; questions she left unanswered.

Saturday 5 October 2019

Hunting the Hadean magnetic field by Rich Taylor

26/09/2019
Jack Hills of Australia

Hunting the Hadean magnetic field by Rich Taylor

In short:

Rich Taylor and his team worked on measuring the paleomagnetic field intensity in Hadean zircons, following a controversial paper by Tarduno et al. (2015). After sifting through 4000 detrital zircons from the Jack Hills in western Australia against very demanding criteria, only 2 were initially thought to be free of overprinting. Measuring their magnetic remanence gave results similar to those of Tarduno et al., but further analysis showed that the remanence actually came from a later overprint, so those samples were discarded too. This raises the question of whether Tarduno et al.'s results were not overprints as well. So even though the existence of a Hadean magnetic field remains unproven, Rich's research improved the techniques of paleomagnetic measurement.

Main:

The Earth's magnetic field has been studied for many years, with uses ranging from navigation to subsurface exploration (magnetic anomalies) and astronomy. Its remanence in the rock record is well known from the discovery of paleomagnetic inversions, used as a dating method and as further evidence of sea-floor spreading. It is then only natural to ask: when did the geomagnetic field come into existence? It has been shown that the geodynamo was already operating in the Archean, but when did it start? On what timescale did it come into action? Was it rapid? Was its intensity stable? All of these questions drove the search for the Hadean magnetic field. Rich Taylor and his team are trying to find evidence of its existence and to quantify its intensity. The work is also a follow-up to the controversial 2015 paper by Tarduno et al. entitled "A Hadean to Paleoarchean geodynamo recorded by single zircon crystals".



In order to find remanence of the Hadean magnetic field, Rich and his team first had to find samples over 4 billion years old. Very little such material is left today, and they used detrital zircons found in sedimentary rocks of the Jack Hills in western Australia. These zircons probably grew in magma chambers between 4.4 and 4 billion years ago, were then weathered out of their source rock, and were redeposited in the sedimentary rocks of the Jack Hills.

Zircons are the best tool, as they are easy to date accurately with uranium-lead dating and often carry inclusions of ferromagnetic material trapped during the growth of the crystal. They are also very resistant to weathering, allowing their study after such a long time.
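Uranium-lead dating of a zircon reduces, in its simplest closed-system form, to inverting the decay equation: t = ln(1 + 206Pb/238U) / λ. A minimal sketch, assuming no initial lead; the ratio used in the example is illustrative:

```python
import math

LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year

def u_pb_age(pb206_u238):
    """Age (years) from a measured 206Pb/238U ratio, assuming a
    closed system and no initial (common) lead in the zircon."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238

# A Jack Hills-like ratio gives a Hadean age:
print(u_pb_age(0.9) / 1e9)  # ~4.14 billion years
```

In practice the 207Pb/235U system is measured alongside this one, and agreement between the two (concordance) is what makes the age trustworthy.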

Resistant as they are, zircons still get damaged over such long timescales, which led to the first part of the work, carried out by a PhD student: selecting which samples to use. This formed a huge part of the project, as the criteria are very demanding. First of all, the zircons must not have too many cracks, which lead to the deposition of secondary minerals and thus overprint the original signal. The crystals must also be of the right age (U/Pb dating), must not show too strong an iron zoning or too much damage from radioactive decay, and must carry no thermal or hydrothermal overprint. Of 4000 initial zircons, only 2 passed all the criteria. The study by Tarduno et al. said little about zircon selection and analyzed over 8 samples, so this work also aimed to assess the reliability of those results.

The remanent magnetisation of these 2 samples was then analyzed with a SQUID microscope and then with a quantum diamond microscope, allowing the team to map the magnetisation across a section of the crystal and quantify its remanent intensity. The results at this point looked very similar to those of the 2015 Tarduno et al. paper, but the possibility of a magnetic overprint remained.

Using an atom probe, Rich and his team mapped a small needle of one sample in 3D. In zircons, inclusions usually form in small clusters within the crystal, concentrating impurities such as iron and uranium. This is useful because each cluster can be dated separately from its U/Pb ratio, with the remanent magnetisation carried alongside. These clusters turned out to be younger than the Hadean, meaning the magnetisation is a later overprint. The hypothesis is that the inclusions became connected to the outside of the crystal through small cracks caused by the radioactive decay of the uranium. The 2 remaining zircons were therefore discarded as carrying a later overprint and not reflecting the original magnetic field. Since their results were similar to those of Tarduno et al., and since Tarduno did not study the risk of overprinting as extensively, this raises the question of whether the field he measured was not an overprint too.

In conclusion, Rich and his team could not prove the existence of a Hadean magnetic field, but their work led to a better understanding of the overprinting criteria to consider when doing paleomagnetic research.

Saturday 28 September 2019

The stability of single-vortex domain state in the context of paleomagnetism by Wyn Williams

19/09/2019
Image of magnetic domains by R. Harryson, 2005

The stability of single-vortex domain state in the context of paleomagnetism by Wyn Williams

In short:

On Thursday 19 September 2019, Wyn Williams presented a new model of the domain states that retain paleomagnetism. His study aims to improve the previously accepted model of magnetic domain states as a function of grain size. It suggests replacing the Pseudo-Single Domain state (PSD) with the Single-Vortex state (SV), which may well be the main recorder of paleomagnetism. The main goal of the model is to improve the accuracy of paleomagnetic studies. As changes to accepted models often meet resistance, his research may prove controversial once published.

Main:

Figuring out how magnetism is recorded is more important than ever nowadays, as it underpins many technologies: computer hard drives already use magnetic domains similar to single-vortex domains to record information, and magnetic particles serve as carriers in the biomedical industry. The difficulty with paleomagnetism is that it is an open system, with the risk of overprinting over time. So how do rocks actually record geomagnetic data?


The model follows the principle that above the closure temperature, ferromagnetic materials align themselves with the ambient magnetic field, here assumed to be the Earth's. As the rock cools below the closure temperature, the geomagnetic information is locked in, with an intensity linearly proportional to the strength of the original field. This capacity to record a magnetic field, the magnetic remanence, is strongly linked to grain size (for a given mineral, here mostly magnetite).

In the accepted model, the grain-size effect is classified into 4 major categories: the very small grain sizes, the Single Domains (SD), the Pseudo-Single Domains (PSD) and the Multi-Domains (MD). For the very small grain sizes, thermal excitation reorients the magnetisation, so their magnetic remanence is nonexistent. Going up in grain size, the SD have a very good magnetic remanence and correspond to the simplest case of magnetic domain theory: the grain is too small to contain multiple domains, so it has a single domain recording one field direction. Then comes the less well understood PSD, which would contain a small number of domains (typically 4 or fewer) with an overall magnetic vector much like that of a weaker single-domain grain (domains in opposite directions cancel each other out). Finally, the MD are grains containing a great number of domains, most of which cancel out, leading to a low total magnetic remanence.

Since the SD and PSD carry the most magnetic remanence, it became necessary to understand them better; the SD being well understood, the question was the PSD. By computing a model of the magnetic field in magnetite grains as a function of grain size and shape, the study found that above 200 nm a vortex structure starts to appear in the magnetisation. This vortex grows and seems to link the uniformly magnetized regions; above a grain size of about 2 µm it flips to the multi-domain state. Here came the first suggestion to replace the PSD with the Single-Vortex (SV) state.
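The size thresholds above can be summed up as a simple classifier. The 200 nm and 2 µm boundaries come from the talk; the superparamagnetic/SD boundary (~30 nm here) is an assumed illustrative value, and all boundaries are approximate and shape-dependent:

```python
def domain_state(grain_size_m):
    """Rough domain-state classification for roughly equant magnetite
    grains, by grain size in metres (boundaries approximate)."""
    if grain_size_m < 30e-9:    # assumed value: thermally unstable
        return "superparamagnetic"
    if grain_size_m < 200e-9:   # one uniformly magnetized domain
        return "single-domain (SD)"
    if grain_size_m < 2e-6:     # vortex structure appears above ~200 nm
        return "single-vortex (SV)"
    return "multi-domain (MD)"   # many mutually cancelling domains

for size in (10e-9, 100e-9, 500e-9, 5e-6):
    print(size, domain_state(size))
```

The point of the proposal is that the middle band, previously labelled PSD, is better described as a single vortex than as a handful of conventional domains.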

The SV-state model explains the gradual drop of magnetic remanence with grain size better than the PSD does. But then comes the question of grain shape and how it might affect the remanence. Wyn Williams and his team found that up to 30% elongation no real change occurs, and only small changes appear between 50% and 70% elongation, where the vortex seems to twist as it grows.

On average the magnetic remanence per particle of an SV grain is about 1% of that of an SD grain, so their contribution to the paleomagnetic record might seem small at first glance. But since SV grains are far more abundant than SD grains, the remanence per volume of rock carried by SV is about 100× that of SD, making it the main paleomagnetic signal.
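Implicit in this argument is just how much more numerous SV grains must be. Taking the two quoted figures at face value, the abundance ratio follows directly:

```python
# Back-of-envelope check of the per-volume argument,
# using the two numbers quoted in the talk:
remanence_per_particle_ratio = 0.01  # SV carries ~1% of an SD particle
remanence_per_volume_ratio = 100.0   # yet SV carries ~100x SD per volume

# Implied number ratio of SV to SD grains in a typical rock:
abundance_ratio = remanence_per_volume_ratio / remanence_per_particle_ratio
print(abundance_ratio)  # 10000.0 -> SV grains ~10^4 times more numerous
```

So "way more abundant" means roughly four orders of magnitude, which is why the weak individual SV signal dominates the bulk measurement.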

Then came the big challenge of confronting the model with experimental data. The aim was to match the model curve to the data, which at first failed, meaning the model was still missing important factors. Including a grain-size distribution improved the curve somewhat, but it was still far off. Adding a factor for the interaction between the magnetic particles improved the model a lot, but it still did not match. The main interaction is the tendency of SV grains to form chains, making them behave more like SD grains and increasing their remanence while keeping the vortices in their original orientations. Finally, when the complex 3D interactions (crossing chains) were added, the vortex behaves as in an elongated grain, getting twisted and thus increasing its remanence. At this point the model curve fitted the experimental data very well.

Wyn Williams and his team hope that this model, by adding to our understanding of magnetic remanence, will help distinguish noise from signal in paleomagnetic measurements, thus increasing their accuracy.