Biosurfactant production was evident as a bright zone of de-wetted or raised oil droplets

The drop collapse assay was performed according to Bodour and Miller-Maier. 2 µl of 10W-40 Pennzoil® was applied to delimited wells on the lid of a 96-well plate and allowed to equilibrate at room temperature. Next, 5 µl of diluted surfactant samples, supernatant from bacterial cultures, or re-suspended bacterial colonies was pipetted onto the oil surface. Drops that retained a spherical shape were scored as negative for surfactant content, while drops that exhibited a visibly decreased contact angle with the oil and spread were scored as positive for surfactant content. The atomized oil assay was conducted as follows: bacteria were evenly spotted onto KB agar plates using sterile toothpicks and grown overnight. Alternatively, if visualizing surfactant from broth culture, 1 mL of 2-day-old broth culture was centrifuged at 10,000 × g for 2 min, and 5 µl of supernatant was pipetted onto the plate and allowed to equilibrate for 30 minutes before assaying. An airbrush was used to apply a fine mist of mineral oil onto the plate at an air pressure between 15 and 20 psi.

A collection of 377 bacterial strains isolated from a variety of terrestrial and aquatic sources was grown on agar plates and tested for biosurfactant production using the atomized oil assay, in which an airbrushed mist of oil droplets was applied to culture plates. Additionally, cells of each strain suspended from plates into water, as well as drops of broth culture supernatants, were tested for drop collapse on an oil surface. A total of 41 of these strains exhibited biosurfactant production in at least one assay. The identities of these strains were determined from partial 16S rRNA sequences, and all isolates were assigned to described taxa based on 98% BLAST sequence identity. Pseudomonas and Bacillus species were the most common genera identified, in line with previous reports from more limited surveys.

All biosurfactant producers were members of the Gammaproteobacteria or Firmicutes except for a single Rhizobium species. After eliminating duplicate taxa from the same sampling location, a total of 23 unique environmental strains that produced surfactant detectable in at least one assay were identified and further characterized. All 23 isolates produced surfactant detectable by the atomized oil assay, although only 16 isolates conferred drop collapse of either cells suspended from plates or of broth culture supernatants. Furthermore, cells of only 9 of these 16 isolates conferred drop collapse under both culture conditions. Most of the other 7 strains that conferred drop collapse only under one culture condition did so for suspended plate-grown cells. P. syringae strains were typical of this group; cells of four representative isolates conferred drop collapse when suspended in water from plate cultures, but the supernatant of planktonic cultures did not. While 16 strains of P. syringae, P. fluorescens, or B. subtilis produced biosurfactant that could be detected by both assays, the 7 strains that exhibited biosurfactant activity detectable only by the atomized oil assay largely represented a diversity of other taxa. Although not appreciated in most biological studies, surfactants differ greatly in their chemical properties in ways that could influence their ability to be detected by various assays. For instance, a fundamental property of a surfactant is its relative solubility in water and oil, which can be broadly described by its hydrophilic-lipophilic balance value. Some important synthetic surfactants with low hydrophilicity are not readily dispersible in water, and thus have unique functions such as forming inverse emulsions of water into oil. If a bacterial strain produced a biosurfactant with such low water solubility, this could account for its inability to reduce the surface tension of water sufficiently to collapse a water drop.

In order for drop collapse to occur on an oil surface, a minimum surface tension reduction at the water/air interface from 72 dyn/cm to around 43 dyn/cm is required. Although a surfactant may be present in a sample of interest, it might not be detected by the drop collapse assay if it is produced in low quantities or has a property preventing it from lowering the surface tension of water. Because the atomized oil assay can detect 10- to 100-fold lower concentrations of surfactant than the drop collapse assay, it is reasonable to hypothesize that the atomized oil assay can detect surfactant production in weakly producing strains. Therefore, it was possible that the 7 strains that did not confer drop collapse simply produce too little surfactant to be detected with this method. Indeed, many of these strains exhibited small halos in the atomized oil assay, suggestive of low surfactant concentrations. However, a few strains such as Bacillus pumilus that did not cause drop collapse produced biosurfactants that conferred halos of de-wetted oil droplets around colonies that were at least as large as those of many strains whose biosurfactants did confer drop collapse. This observation led us to suspect that the surfactant had properties that hindered its detection by the drop collapse assay. To address the features of biosurfactants that could be detected by the atomized oil assay but not the drop collapse assay, we sought to distinguish whether the hydrophobicity of the surfactants limited their detection by the latter method or whether the higher sensitivity of the atomized oil assay was responsible for their detection. As a test of the relative hydrophobicity of the surfactant produced by B. pumilus, we suspended colonies of it, as well as of P. syringae strain B728a, in water to identical concentrations, removed the cells by centrifugation, and then tested the supernatant for surfactant activity using the atomized oil assay. The water-soluble material washed from cells of P. syringae B728a, which contains syringafactin and readily causes drop collapse, contained sufficient surfactant to produce a large halo of de-wetted oil droplets when placed on an agar surface.

However, very little biosurfactant was apparently washed from cells of B. pumilus, since no zone of de-wetted oil droplets was observed. Similarly, the surfactants produced by Pantoea ananatis and Pseudomonas fluorescens strains, which were detected only by the atomized oil assay, also appeared to have low water solubility when assayed after washing of cells. However, the washings of four other strains that exhibited the ability to de-wet atomized oil droplets but not to collapse water drops retained the ability to de-wet oil droplets. This suggests that these strains produced only small amounts of a water-soluble surfactant that could have been detected by the drop collapse assay if present in higher concentrations. In support of this conjecture was the observation that these latter strains exhibited only relatively small halos in the atomized oil assay. The low production of water-soluble surfactants in these strains was verified for P. syringae strain PB54 using mass spectrometry. This strain was observed to produce the same syringafactins as P. syringae B728a, albeit in much lower quantities, confirming that the detection of surfactants in strain PB54 by the drop collapse assay was compromised by its low level of production. In order to confirm our conjecture that the lack of detection of biosurfactant production in our B. pumilus strain in the drop collapse assay was due to its low water solubility, we characterized it using MALDI mass spectrometry. The mass spectrum of the material extracted from the cell-free region surrounding colonies on the surface of plates revealed a series of prominent peaks in the m/z range of 1050-1130. Several B. pumilus strains have previously been shown to produce a family of pumilacidins in this mass range. The mass spectrum of our strain shares the same masses as a sample containing a mixture of pumilacidin A, B, C, and D. The masses observed in Fig. 2 are a combination of the [M+Na]+ and [M+K]+ adducts commonly seen in MALDI mass spectrometry. Therefore, we conclude that our strain produces a mixture of low-water-solubility pumilacidins that are capable of readily diffusing away from cells on the surface of an agar plate, but which are not sufficiently water soluble to impart drop collapse. In order to demonstrate pumilacidin’s surfactant capabilities, the surface tension of a broth culture of B. pumilus was measured using a highly sensitive pendant drop analysis. The surface tension of the broth culture supernatant was lowered by production of a surface-active compound to 50 dyn/cm; this surface tension is just above the minimum threshold necessary to impart drop collapse. Since the highly hydrophobic pumilacidins were detectable using the atomized oil assay, we further determined the efficiency with which other characterized synthetic surfactants differing in chemical properties could be detected by this method.
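The adduct assignment above amounts to simple mass arithmetic. The following is a minimal sketch of how expected [M+Na]+ and [M+K]+ m/z values can be computed for comparison with a MALDI spectrum; the neutral masses used in the example are hypothetical placeholders, not measured pumilacidin values.

    # Minimal sketch: expected MALDI adduct m/z values for a neutral compound.
    # The adduct mass shifts are standard (Na ~22.9898, K ~38.9637); the example
    # neutral masses below are hypothetical, not measured pumilacidin data.

    NA_MASS = 22.9898   # monoisotopic mass of Na
    K_MASS = 38.9637    # monoisotopic mass of K
    ELECTRON = 0.00055  # electron mass, subtracted for a singly charged cation

    def adduct_mz(neutral_mass: float) -> dict:
        """Return expected m/z for [M+Na]+ and [M+K]+ adducts."""
        return {
            "[M+Na]+": neutral_mass + NA_MASS - ELECTRON,
            "[M+K]+": neutral_mass + K_MASS - ELECTRON,
        }

    if __name__ == "__main__":
        for m in (1030.7, 1044.7):  # hypothetical neutral masses in the 1000-1100 range
            print(m, {k: round(v, 2) for k, v in adduct_mz(m).items()})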

The assay was performed on synthetic surfactants that possessed a broad range of hydrophobicities. As seen previously, the atomized oil assay readily detected surfactants having more balanced hydrophilic and lipophilic groups, which were also detected by the drop collapse assay. On the other hand, the hydrophobic surfactants Span® 85 and Span® 80 each yielded large bright halos in the atomized oil assay but, given their low water solubility, could not be detected in the aqueous phase by the drop collapse assay. This is in agreement with our observation that the hydrophobic pumilacidins were also only detectable by the atomized oil assay and not by the drop collapse assay. Curiously, the synthetic surfactants not only caused bright halos of de-wetted atomized oil droplets, but those with balanced hydrophilic and lipophilic groups also caused the oil droplets to migrate away from the source of surfactant, traveling at a speed of up to 0.1 mm/minute. Such expanding halos may result from a strong surfactant gradient, such as explored by Angelini et al., although it is unclear why this should not also be conferred by the hydrophobic surfactants. This property was commonly observed around biosurfactant-producing bacterial colonies and might be used to infer the water solubility properties of the biosurfactants. In addition to the surfactants that were only revealed by the atomized oil assay, we also found that many surfactants were detectable in the drop collapse assay only when cells had experienced a particular growth condition. Most prominent among strains exhibiting such growth condition-dependent production of surfactants were strains of P. syringae; cultures of this species never conferred water drop collapse when grown planktonically. The factors determining surfactant production in P. syringae pv. syringae B728a, typical of this species, were thus investigated. While culture supernatants of this strain did not cause water drop collapse on an oil surface, plate-grown cells suspended to the same concentration as the planktonic culture conferred water drop collapse. Suspensions of a syfA– mutant blocked in production of syringafactin did not cause water drop collapse, confirming that the drop collapse is due to syringafactin. We thus postulated that enhanced expression of syringafactin production in cells grown on a surface was responsible. In order to link syringafactin production to surface-mediated increases in surfactant production, we examined the transcriptional regulation of syfA using a GFP-based bioreporter. Greater than 10-fold higher expression of syfA was observed when cells were grown on an agar surface compared to planktonic growth in broth culture. As a control, a strain constitutively expressing GFP exhibited similar levels of fluorescence in both cultures. Since there have been reports that production of some surfactants is influenced by growth stage, we examined syfA expression at a variety of times for up to 3 days during the growth of both liquid and solid cultures of P. syringae. GFP expression was higher in cells recovered from agar plates than from broth cultures at all times, indicating that this is not a growth stage-dependent phenomenon. Additionally, some reports have documented that surfactant production is activated in more dense cultures by quorum sensing. However, the GFP fluorescence of P. syringae harboring the pPsyfA-gfp fusion was similar in the wild-type and a quorum sensing-deficient strain, both in liquid and solid cultures, indicating that syringafactin production is not dependent on quorum sensing. Although not previously connected to surfactant production, one of the ways by which bacteria sense surfaces is apparently through monitoring the viscosity of their environment. When PVP-360, a viscosifying agent, was added to broth medium, the expression of syfA was increased to levels similar to those of cells on agar plates.
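The fold-change comparison implied above (reporter fluorescence on plates versus in broth, normalized to the constitutive GFP control measured under the same conditions) can be expressed as a minimal sketch; all fluorescence values below are hypothetical placeholders.

    # Minimal sketch: fold-change in syfA reporter expression (plate vs. broth),
    # normalized to a constitutively expressed GFP control to correct for
    # condition-specific effects on fluorescence. All numbers are hypothetical.

    def fold_change(reporter_plate, reporter_broth, control_plate, control_broth):
        """Ratio of plate to broth reporter fluorescence, each normalized to the
        constitutive control measured under the same condition."""
        norm_plate = reporter_plate / control_plate
        norm_broth = reporter_broth / control_broth
        return norm_plate / norm_broth

    # Hypothetical GFP fluorescence readings (arbitrary units)
    print(fold_change(reporter_plate=5200, reporter_broth=480,
                      control_plate=1000, control_broth=950))  # roughly 10-fold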

The outer gland surface is composed of a smooth capsule covered by a membrane

The mobile phase consisted of ACN and water containing 0.1% trifluoroacetic acid, beginning with 65% ACN and increasing to 100% over 11 min at 1 mL/min. A final wash at 65% ACN for 6 min eluted interfering materials. The tR values for HCHO-DNPH and benzaldehyde-DNPH were verified with analytical standards. Formaldehyde dehydrogenase was used to determine if the HCHO produced was bound or free. Microsomal incubations after 1 h were treated with 0.025 units FDH, 40 mM NAD+ and 80 mM reduced glutathione, followed by a 20-min incubation at 37°C and then analysis for HCHO as above. Some N-methylol metabolites, such as the N-methylols of diuron or monuron, can be detected after methylation to form an N-methoxymethyl derivative. Following the Suzuki procedure, TMX, dm-TMX or diuron-microsomal-NADPH incubations were extracted four times with ethyl ether, which was then evaporated under N2 at 25°C to 1 mL, and 100 µL of methanol and 20 µL of concentrated sulfuric acid were added. After shaking for 1 min at room temperature, reactions were extracted with ice-cold water and the ether layer recovered. The aqueous fraction was further extracted with ethyl ether. The ether fractions were combined, evaporated to dryness and analyzed as in section 2.2.5 for methylated N-methylol intermediates, but none were identified.

TMX is a hepatotoxicant and hepatocarcinogen in mice, and its metabolite, dm-TMX, is also hepatotoxic in mice, with its toxicity exacerbated by dm-CLO as an iNOS inhibitor. Importantly, these unfavorable toxicological features are not evident in rats, raising the question of whether mice or rats are the better model for humans.
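For reference, the mobile-phase program described above can be written out as a simple gradient table; the sketch below only re-encodes the quoted values (65% to 100% ACN over 11 min at 1 mL/min, then a 6-min hold at 65% ACN) and is not instrument method syntax.

    # Minimal sketch: the HPLC gradient quoted above expressed as a time table.
    # Times, compositions, and flow are taken from the text; the layout itself
    # is illustrative, not an instrument method file.
    GRADIENT = [
        # (time_min, percent_ACN, flow_mL_per_min)
        (0.0,  65,  1.0),   # start at 65% ACN (aqueous phase contains 0.1% TFA)
        (11.0, 100, 1.0),   # ramp to 100% ACN over 11 min
        (17.0, 65,  1.0),   # 6-min wash at 65% ACN to elute interfering materials
    ]

    for t, acn, flow in GRADIENT:
        print(f"{t:5.1f} min  {acn:3d}% ACN  {flow:.1f} mL/min")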

Comparative metabolism may be a factor, since mouse liver microsomes form dm-TMX and dm-CLO much more efficiently than rat or human liver microsomes. TMX was initially categorized as a “likely human carcinogen” based on the mouse model, but this was modified to “not likely to be carcinogenic to humans” based on species differences in metabolism. Initial studies focused on the in vitro inhibition of two isoforms of NOS by dm-CLO. Reducing the formation of nitric oxide via NOS inhibition is known to enhance the hepatotoxicity of other toxicants and may explain the hepatotoxicity observed in TMX-treated mice. However, the tentative conclusion is that dm-CLO does not potently inhibit either iNOS or nNOS in vitro. Therefore, the focus of further studies turned to analyzing the formation of other reactive metabolites from TMX and dm-TMX. TMX is the only one of the seven commercial neonicotinoids to induce hepatotoxicity or hepatocarcinogenicity in mice or rats. The unique structural feature of TMX and its hepatotoxic metabolite dm-TMX is the oxadiazinane moiety, which is a potential source of HCHO and N-methylol metabolites. In this first study of HCHO as a neonicotinoid CYP metabolite, we find that of all commercial neonicotinoids, TMX and dm-TMX are the most efficient HCHO generators, and much more so with mouse than rat or human CYPs. Our results on species differences in HCHO liberation fully agree with the findings of Green et al. on dm-TMX, CLO and dm-CLO formation from the rest of the molecule. The observed species differences in metabolism of TMX or dm-TMX are likely due to substrate specificity and expression differences of various CYP enzymes, with mice having the highest number of CYP genes compared to rats and humans.

The present study therefore confirms the preference for the rat over the mouse model in TMX human risk assessment. The hepatotoxicant/hepatocarcinogen candidates from TMX and dm-TMX metabolism are HCHO and N-methylol intermediates. HCHO is a known human carcinogen. To test if free HCHO was formed, FDH was added to TMX-mouse microsomal incubation reactions. Based on FDH-induced HCHO loss, most of the HCHO formed by CYPs from TMX was free, but the remaining HCHO could be protein bound or released from N-methylol intermediates during analysis. A similar result was obtained for HCHO liberated from the NCH2OCH3 moiety of alachlor under comparable conditions. Attempts to detect N-methylol metabolites in the liver of TMX-treated mice were unsuccessful in vivo as well as in vitro, possibly due to instability on formation and analysis. The same applies to in vivo detection of HCHO, which is very bioreactive. Although not detailed here, white blood cells from the same groups of TMX-treated and control mice were analyzed for DNA-protein crosslinks induced by HCHO production from TMX using the method of Zhitkovich and Costa. These attempts were unsuccessful because the assay method had low sensitivity. Many compounds, including pesticides proposed or established to be carcinogens, yield N-methylol metabolites, some of which may contribute to their toxicity. The inability to detect N-methylol metabolites from alachlor or HMPA in our studies agrees with previous literature indicating the difficulty of their analysis due to high reactivity and short half-lives. The candidate N-methylol intermediates from TMX and dm-TMX were therefore synthesized by reaction of CLO and dm-CLO with HCHO and HCO2H. Although individual intermediates were not isolated and characterized, the CLO and dm-CLO reactions with HCHO in HCO2H appear to give two mono-N-methylols and three further addition compounds with m/z equivalent to the addition of two methylols per molecule as the reaction proceeds, ultimately ending up as TMX and dm-TMX.

The proposed mono-N-methylols from synthesis were found to be stable not only to extraction and LC/MS but also to an additional incubation with mouse liver microsomes. This creates an apparent contradiction wherein the synthetic N-methylols are adequately stable for study but the proposed CYP-formed N-methylols are not observed. This anomaly is rationalized by the different mechanisms and environments of N-methylol formation in the chemical synthesis and enzymatic CYP systems. Perhaps the N-methylols, as they are enzymatically formed, react further in the CYP active site containing ferrous iron, cysteine thiol and imidazole N-H.

Cannabis sativa has a long history of cultivation for a variety of uses including food, fibre, medicine, and recreational drugs. Cannabis produces many different secondary compounds such as cannabinoids, flavonoids, stilbenoids, alkaloids, lignanamides, and phenolic amides. Δ9-Tetrahydrocannabinolic acid (THCA), a product of the cannabinoid class, is the primary psychoactive agent. This compound is produced as an acid in the glandular trichomes of inflorescence bracts and undergoes decarboxylation with age or heating to Δ9-tetrahydrocannabinol (THC). Cannabis cultivars differ substantially in economic traits that range from marijuana, arguably the most widespread illicit drug, to hemp fibre derived from the stems of the plant. Marijuana consists of the dried female inflorescences in which the quantity of THC exceeds that of cannabidiol (CBD), and potency varies among cultivars by several orders of magnitude. Marijuana cultivars are known to have THC levels of 2–24% of inflorescence dry weight, whereas hemp cultivars produce substantially less THC but rather high levels of CBD. THCA and CBDA share the same biosynthetic pathway except for the last step, in which THCA synthase and CBDA synthase produce THCA or CBDA, respectively.

Recent evidence suggests that the genes encoding the two synthases are allelic. CBD and THC are enantiomers, but only THC elicits psychotropic effects, whereas CBD may mediate anti-psychotropic effects, a difference highlighting the stereo-selectivity of the receptors in the human body that bind these compounds. Although classified as a drug without therapeutic value in the United States, ingestion of THC is widely regarded as having effects including pain relief and appetite stimulation that may, among other things, increase the tolerance of cancer patients to chemotherapy. Dronabinol, a synthetic analogue of THC, is approved for use as an appetite stimulant in the United States as a Schedule III drug. Cesamet, another synthetic analogue, is used as an anti-emetic for patients undergoing cancer therapy. The natural product Sativex, approved for use in the UK, is derived from Cannabis cultivars containing both THC and CBD and is used to treat pain symptoms associated with multiple sclerosis. Compounds from Cannabis sativa are of undeniable medical interest, and subtle differences in the chemical nature of these compounds can greatly influence their pharmacological properties. For these reasons, a better understanding of the secondary metabolic pathways that lead to the synthesis of bioactive natural products in Cannabis is needed. Knowledge of the genetics underlying cannabinoid biosynthesis is also needed to engineer drug-free and distinctive Cannabis varieties capable of supplying hemp fibre and oil seed. In this report, RNA from mature glands isolated from the bracts of female inflorescences was converted into cDNA and cloned to produce a cDNA library. DNA from over 2000 clones has been sequenced and characterized. Candidate genes for almost all of the enzymes required to convert primary metabolites into THCA have been identified. Expression levels of many of the candidate genes for the pathways were compared between isolated glands and intact inflorescence leaves.

Seeds from the marijuana cultivar Skunk no. 1 were provided by HortaPharm BV and imported under a US Drug Enforcement Administration permit to a registered controlled substance research facility. Plants were grown under hydroponic conditions in a secure growth chamber, yielding cannabinoid levels in mature plants as reported in Datwyler and Weiblen. Approximately 5 g of tissue was harvested from mature female inflorescences 8 weeks after the onset of flowering. Tissue was equally distributed into four 50 ml tubes containing 20 ml phosphate-buffered saline as described by Sambrook et al., but made with all potassium salts, and mixed at maximum speed with a Vortex 2 Genie for four repetitions of 30 s mixing followed by 30 s rest on ice, for a total of 2 min of mixing. Material was sieved through four layers of 131 µm plastic mesh and the flow-through was split into two 50 ml tubes and spun in a centrifuge for 30 s at 500 rpm. Supernatants were decanted and pellets were resuspended in PBS. The suspensions were combined into one tube and pelleted as before. The resulting pellet was diluted into 100 µl of PBS. Five µl were used for cell counting with a haemocytometer, and the total suspension was estimated to contain 70 000 intact glands. Plant residue was incinerated by a DEA-registered reverse distributor. Quantitative reactions were performed as described previously using primers listed in Supplementary Table 4B at JXB online. Equivalent quantities of RNA isolated from glands and inflorescence-associated leaves were used to generate the respective single-stranded cDNAs. qPCR reactions containing equal quantities of gland or leaf cDNA were run in duplicate, along with reactions containing standards consisting of 100-fold sequential dilutions of isolated target fragments, on a Lightcycler qPCR machine. Lightcycler software was used to generate standard curves covering a range of 10⁶ to which gland and leaf data were compared. Two biological replicates were used to generate the means and standard deviations shown in Supplementary Table 4A at JXB online. These values were used to compute the gland-over-leaf ratios and P-values shown in Supplementary Table 4A at JXB online. Raw relative expression data, means, standard deviations, P-values from gland versus leaf t tests, qPCR primer sequences, and representative real-time qPCR tracings are shown in Supplementary Table 4A at JXB online.

Anatomical study revealed that glands located on mature floral bracts of female plants are the site of enhanced secondary metabolism leading to the production of THCA and other compounds in Cannabis sativa. These glands are located on multicellular stalks and typically are composed of eight cells. The capsule contains exudates derived from the gland cells. The weakly attached glands can easily be separated from the bracts and purified, as shown in Fig. 1E and F. An EST library was constructed using RNA isolated from purified glands. Over 100 000 ESTs were cloned. Plasmid DNA was isolated and sequenced from over 2000 clones. Because of the directed orientation of cDNA insertion, sequences are expected to represent the coding strand. After the removal of vector-only sequences, poor-quality sequences, and sequences obviously originating from organelles or ribosomal RNA, the remaining sequences were clustered into 1075 unigenes. Overall, 111 of the unigenes were contigs containing two or more closely related ESTs. Only 14 contigs lacked a similar sequence in the NCBI database. Nine hundred and sixty-four of the ESTs were only found once and, of these, 710 were similar to sequences in the NCBI database.
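The standard-curve quantification behind the gland-over-leaf ratios described above is straightforward to reproduce. The sketch below, with hypothetical Ct values and dilution quantities, fits Ct against log10(quantity) and forms a gland/leaf ratio in the usual way; it is illustrative only and does not reproduce the Lightcycler calculations.

    # Minimal sketch: absolute quantification from a qPCR standard curve and a
    # gland-over-leaf expression ratio. Ct values and dilution quantities are
    # hypothetical; the approach (linear fit of Ct vs. log10 quantity) is the
    # usual standard-curve method.
    import numpy as np

    def standard_curve(quantities, cts):
        """Fit Ct = slope * log10(quantity) + intercept; return (slope, intercept)."""
        slope, intercept = np.polyfit(np.log10(quantities), cts, 1)
        return slope, intercept

    def quantity_from_ct(ct, slope, intercept):
        return 10 ** ((ct - intercept) / slope)

    # Hypothetical 100-fold serial dilutions spanning ~10^6 and their Ct values
    stds = [1e2, 1e4, 1e6, 1e8]
    cts = [33.1, 26.4, 19.8, 13.2]
    slope, intercept = standard_curve(stds, cts)

    gland_ct, leaf_ct = 18.5, 25.9  # hypothetical
    ratio = quantity_from_ct(gland_ct, slope, intercept) / quantity_from_ct(leaf_ct, slope, intercept)
    print(f"gland/leaf expression ratio ~ {ratio:.1f}")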

We conclude with ideas for future FEW nexus and governance research

The amount of energy consumed to pump water from Lake Havasu to Los Angeles in the Colorado River Aqueduct varies from year to year depending on the constancy of pumping. For example, while the lowest amount of water diverted to the Metropolitan Water District of Southern California was 0.55 MAF in 2005, the lowest amount of energy consumed was 1.3 million MWh in 2007. If water does not need to be pumped from Lake Havasu to Southern California consistently, the MWD is able to produce energy and sell it. Thus, the irregularity of the amount of energy used to transport water in the Colorado River Aqueduct correlates with the amount of energy bought or sold per fiscal year. The amount of water pumped to Southern California has, on average, increased from 2001 to 2016. In some years, such as in 2011 and 2012, MWD intentionally left water in Lake Mead so that it did not fall below “shortage conditions.” The Central Arizona Project pumps 1.6 MAF of water up 2800 feet of elevation, 336 miles from Lake Havasu to Phoenix and Tucson. Doing so requires 2.8 million MWh of energy, which is supplied by a coal-fired plant, the Navajo Generating Station in Page, AZ. While this plant is not in the study area, it is considered a regional connection, and it is important to consider given the large supply of energy it provides to the study area as the eighth-largest plant in the United States.

In the LCRB, the dominant types of power generation are hydroelectricity and natural gas, with yearly average energy production of 5.5 million MWh and 2.9 million MWh, respectively.
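As a back-of-envelope check on the Central Arizona Project figures quoted above, the sketch below converts the stated annual delivery and energy consumption into an energy intensity per acre-foot.

    # Minimal sketch: energy intensity of Central Arizona Project deliveries using
    # the figures quoted above (1.6 MAF lifted 2,800 ft over 336 mi, consuming
    # about 2.8 million MWh per year).

    water_maf = 1.6      # million acre-feet delivered per year
    energy_mwh = 2.8e6   # MWh consumed per year

    acre_feet = water_maf * 1e6
    kwh_per_acre_foot = energy_mwh * 1e3 / acre_feet
    print(f"~{kwh_per_acre_foot:.0f} kWh per acre-foot delivered")  # ~1750 kWh/AF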

Hydroelectric production gradually decreased over the study time frame from roughly 6.6 million MWh in 2001 to 5.5 million MWh in 2016, while natural gas increased from 560,000 MWh in 2001 to 3 million MWh in 2016. Despite their individual trends over time, hydroelectricity and natural gas both follow a seasonal trend, with the highest net generation of energy occurring during the summer months and natural gas peaking directly after hydroelectricity. The presence of solar in the region has grown since 2010, with the highest annual production in 2016 at about 860,000 MWh and an average annual production of just 180,000 MWh. There is one wind power plant, but the amount of electricity this plant produces is negligible when compared to the other electricity sources. The price of electricity gradually increased from 8.7 cents/kWh in 2001 to 11.3 cents/kWh in 2016, while still following the same seasonal trend.

The top six crops that took up at least 1% of production areas in at least one year of the study time frame, 2008 to 2015, included alfalfa, cotton, durum wheat, double-crop lettuce/durum wheat, lettuce, and citrus. In each year of food production analyzed, alfalfa had the highest percentage, followed by fallowed cropland, with average annual acreages of about 142,000 and 70,000, respectively, and a total average active cropland of about 271,000 acres. There was a slight overall upward trend of total production area, from 294,000 acres of active cropland in 2008 to 303,000 acres in 2015. This trend holds even when considering the drastic decrease in area under production in 2009, down to 142,000 acres from 294,000 acres in 2008, and another small drop of just 6000 acres in 2015. From 2012 to 2013, a drought year in the UCRB, areas under production increased from 292,000 acres to 295,000 acres.

The FEW nexus changes with environmental and economic stresses, depending on the flexibility of the governing and market systems. The scenarios were meant to be an example of how governance, market supply and demand, and climate vulnerabilities may impact the FEW nexus and create resource tipping points. In the study area, water governance particularly influences drought management and crop production strategies, giving the system less room to respond to climatic and economic changes.

The first scenario depicts how the costs and supply of water, energy, and food production might change in an extreme drought situation. With a decrease in water availability, water and energy prices would increase, but agricultural production would stay roughly the same due to water governance in the region. This would have occurred, for example, if Lake Mead had decreased to 1030.75 ft. At or below an elevation of 1075 ft, Lake Mead is at a critical drought state. At 1050 ft, Lake Mead is below the capacity at which the Hoover Dam can produce hydroelectricity. If it were to stay at this elevation for an entire year, this would amount to a decrease of 36% of the average annual electricity generation from 2001 to 2016. Reduced water availability has previously been shown by Bain and Acker to result in higher operating costs and higher prices of energy for hydroelectricity in the Colorado River Basin. Additionally, for those that pay for water from a utility, a drought of this magnitude could increase water prices. According to a report from the Public Policy Institute of California, the 2012-16 California drought resulted in an increase in water prices through drought surcharges due to increased supply and treatment costs for suppliers. However, those that rely on water rights for their water, such as on certain Indian Reservations, would continue to receive the same amount of water with no price increase. The Bureau of Reclamation could make a deal with Reservations to hold onto some of their water in exchange for some form of compensation. In this case, irrigation would decrease, which was assumed for the drought scenario.

However, a decrease in water available for irrigation does not necessarily mean production will decrease, as seen in the increase in agricultural production during the drought year from 2012 to 2013 despite a decrease in the total amount of water used for irrigation. Nevertheless, higher energy costs to irrigate cropland, coupled with higher water costs for farmers outside of Indian Reservations, could potentially decrease the amount of production in areas that rely mostly on hydropower.

Where the drought scenario depicts climate pressures on water availability, the global demand for alfalfa is a representation of demand for water. In this scenario, we look at the implications of the governance structure of the LCRB in supplying water in a static snapshot of global agricultural commodity markets. Specifically, we investigate the impact of a 3% increase in demand for alfalfa, the most widely cultivated crop in the study area. This scenario seems likely to occur given the 160% increase in fodder exports from the United States from 2000 to 2010, and the 2.5% overall increase in fodder exports specifically from California, Nevada, or Arizona over the same 10-year period. Under this scenario, there would either be an increase in the overall total cropland from 271,500 acres to 279,600 acres, or a decrease in cropland for food crops. In either case, more water would be needed for crop production, assuming the current mode of production remains constant, with fodder production consuming the largest amount of water when compared to other crops. Demand for energy would increase for producing and transporting alfalfa and water, potentially meaning a higher demand for water to produce that energy. This increased demand for energy includes the energy needed to move water for irrigation, the energy needed to export alfalfa, and increased demand for agricultural chemicals and machinery fuels.

The goal of this study was to understand how the governance structure of the Colorado River constrains the utility of the nexus approach to deal with future stresses. A consideration of governance structures should be central to the development of food-energy-water nexus thinking to better understand and identify how stressing food, energy, and/or water systems creates resource vulnerabilities and/or resource scarcities in all three sectors. To understand how food, energy, and water affect one another's availability, individual sector units can be analyzed together to give a quantified picture of use.
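As one small example of that kind of unit-level accounting, the sketch below reproduces the acreage arithmetic of the alfalfa-demand scenario above; it simply restates the quoted figures and makes no claim about how the original scenario was constructed.

    # Minimal sketch: acreage arithmetic implied by the alfalfa-demand scenario.
    # The quoted jump from 271,500 to 279,600 acres is close to a 3% expansion of
    # the total baseline cropland; this only reproduces that arithmetic.

    baseline_acres = 271_500
    scenario_acres = 279_600

    added = scenario_acres - baseline_acres
    print(f"added cropland: {added} acres ({added / baseline_acres:.1%} of baseline)")
    # -> added cropland: 8100 acres (3.0% of baseline)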

In addition, price trends can be analyzed to look for correlations with other sector price trends or climatic changes. In the study area, we found that water is the limiting factor due to governance constraints, especially with predictions of increased drought in the future. The following sections describe the ways that governance constrains the possibility of implementing FEW management strategies in the LCRB, and why, therefore, it is a critical component of FEW nexus research. We also discuss how power, geopolitics, and institutional factors impact nexus implementation.

First, policies limit the ability of resource systems to respond to market and climatic changes. In the LCRB in particular, we found that the Law of the River limits the prospect of responding to climatic changes such as increasing drought frequency and severity. With predictions by the IPCC that the southwestern United States is to become hotter and drier, a lack of adequate response due to rigid policy structures will impact all three sectors. The small response to drought in water used in agriculture in the study area was likely because much of the production occurs on Indian reservations, which have a high proportion of water rights in comparison to the rest of the Lower Basin states. While Metropolitan Water District leaving water in Lake Mead during drought years is a good example of a response to drought, it is a reactive response, similar to the DOI Interim Guidelines. Drought coupled with increased demand for water through more alfalfa production will strain water resources even further.

Second, rigid policies in the most ‘geopolitical’ sector impact the ability of that sector to respond to the needs of the other sectors. Management of the Colorado River is a complex geopolitical issue with many stakeholders, including governments of U.S. states and nation states, separated by rigid political boundaries. This directly impacts the ability of water managers to adapt water allotments to current or predicted conditions. This is a very real concern, as the IPCC has predicted that the Southwestern U.S. will experience higher temperatures and decreased precipitation. Since water is the lifeblood of this region, with over ¾ of the economy relying on its presence, drought will severely affect the region's livelihood. In addition to drought, population growth could put more stress on the water system. Depending on the system analyzed, there will likely be a ‘limiting factor’. While some have argued against a focus on water, sector weights are context-dependent. In semi-arid cases with access to a large amount of water, it is frequently a geopolitical issue, which often presents itself as transboundary conflict. Although we did not include Mexico in the analysis due to data constraints, it is well known that in most years since 1960 the Colorado River has run dry before reaching the Sea of Cortez. The impacts of this have presented themselves as a lack of access to a water resource in Northern Mexico that has resulted in social impacts such as a decline in the regional shrimp and fishing industries. The river's riparian ecosystems were briefly restored through the implementation of Minute 319, a treaty between the U.S. and Mexico that authorized a 2014 pulse flow to the Gulf of California. This move “marked a sustainable reconciliation with the land and its people” by connecting communities back to a water source that was highly valued. Restoration projects such as this one harness the local to the international through governance, power, and the larger ecological and political systems at work. Entrenched in these cross-boundary water sources and restoration projects are politics of power at the international level. Through sub-national division of power at the state level, the U.S. monopolized control over the Colorado River, partly out of fear that “Mexico might lay claim to large quantities of the river's flow,” a notion that maintains itself in the almost century-old Colorado River Compact. Transboundary politics therefore directly complicates the economic and hydraulic foundations of the nexus, specifically through divisions of power that persist through long-term sub-national agreements. While the Colorado River is an extreme case among transboundary rivers, it should be considered at the regional-nexus level. Third, and similar to the second, rigid policies in one of the sectors impact the production and availability of resources in other sectors.

The impact of Phytophthora diseases on citrus production can be devastating

Owing to the wide variety of environmental matrices and the matrix effects they produce, quantification of CECs is challenging with both GC-MS and LC-MS. Matrix effects arise from co-eluting, interfering compounds in the sample extract that produce similar ions in the MS or MS-MS segment. They may also arise from interactions between the target analytes and co-extracted matrix components during sample preparation and in the ionization chamber. The former is more common in GC-MS and GC-MS-MS analysis, and might be encountered in LC-MS and LC-MS-MS analysis. GC-MS and GC-MS-MS are still the most commonly used techniques because of their wide availability in environmental laboratories. GC-MS or GC-MS-MS also suffers less from the matrix effects that are more commonly observed in electrospray ionization-based LC-MS or LC-MS-MS. Environmental concentrations of CECs are in the ng L-1 or µg L-1 range. Extraction is therefore a necessary step to concentrate the analytes prior to instrumental analysis. Solid phase extraction (SPE) is the most common technique applied for sample preparation and purification in the analysis of CECs. SPE separation depends on the kind of solid stationary phase through which the sample is passed and on the types of target compound. The target compounds adhere to the stationary phase while impurities in the sample are washed away, yielding a clean extract. This procedure uses a vacuum manifold and has the advantage that 12 or 24 SPE cartridges can be prepared simultaneously, thus minimizing time and effort for sample preparation.

The target compounds are finally eluted from the stationary phase using an appropriate solvent. The effectiveness of SPE cartridges has been widely researched; the best performing for pre-concentration of aqueous samples are ENV+, Oasis HLB, Oasis MAX, Oasis MCX, Strata-X, LiChrolut C18 and LiChrolut EN. Since most pharmaceuticals and personal care products are polar, non-volatile, and thermally labile compounds unsuitable for GC separation, derivatization is necessary after extraction and elution from the aqueous sample and prior to GC-MS analysis of polar compounds. Various derivatization agents have been applied to various CECs. However, derivatization can introduce inaccuracy into the method and lead to losses of analytes that cannot be fully derivatized. Also, derivatization often uses highly toxic and carcinogenic diazomethane or, less frequently, acid anhydrides, benzyl halides, and alkyl chloroformates. Derivatization can be incomplete, preventing the analysis of some compounds entirely. Some compounds are also thermolabile and decompose during GC analysis. After derivatization, however, compounds improve in both volatility and thermal stability. The final step of sample preparation before elution is the clean-up of the extract. This step is usually added to enhance the accuracy and reproducibility of the results by eliminating matrix effects and, generally, any impurities occurring in the final extract that can interfere with the analysis. The clean-up step is usually performed with SPE cartridges, as described previously. SPE thus serves a double goal, sample concentration and clean-up, and takes place before derivatization. However, while sample clean-up may help remove interfering compounds, it is time-consuming and runs the risk of losing analytes of interest, especially those that were polar to begin with.

Better chromatographic separation allows the analytes to elute in an appropriate time interval, avoiding coelution with matrix components. Nevertheless, matrix effects can hardly ever be eliminated entirely. Initial method validation can help document and qualify the performance of the GC-MS in analyzing the test compounds, as well as of the pretreatment steps used to concentrate the analytes and prepare them for injection into the GC-MS. Initial method validation provides method performance parameters such as method recoveries, precision, and matrix effect to deliver consistent estimation of the analyte concentrations. It is becoming crucial to properly assess the risk posed by the presence of CECs in the environment. This research aimed to develop a multi-residue analytical method for GC-MS that allows simultaneous monitoring of CECs. This provides the ease of evaluating different physical-chemical classes of CECs simultaneously without having to undergo different processes for certain types of trace organic compounds. Since GC-MS is widely available around the world, the multi-residue analytical method allows many researchers to gain a larger understanding of the derivatization and extraction processes possible for a multitude of contaminants. Thus, the occurrence, distribution, and fate of CECs will be better monitored and more efficiently regulated. In this study, we used N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide (MTBSTFA) to derivatize an initial set of 50 compounds for GC-MS. This analytical method was developed following the approach of Yu and Wu, in which 14 compounds were derivatized and analyzed by GC-MS. In addition to those 14 compounds, this study successfully included 1 additional anti-inflammatory drug, 2 cardiovascular drugs/beta blockers, 1 estrogen, 1 personal care product, 7 pesticides, and 4 plasticizers using MTBSTFA and GC-MS.

The work presented here consists of the meticulous and successful development of a method for 29 emerging compounds in tertiary-treated greenhouse runoff water. High-purity solvents, including Optima LC/MS-grade MeOH, Optima-grade ethyl acetate, HPLC-grade acetone, and 37% hydrochloric acid, were supplied by Fisher Scientific. Ethylenediaminetetraacetic acid disodium salt dihydrate (99.7%) was from J.T. Baker Chemical Co. N-tert-Butyldimethylsilyl-N-methyltrifluoroacetamide (MTBSTFA, purity >97%) was obtained from Sigma-Aldrich. Pesticide-grade glass wool was purchased from Supelco. Deionized water was produced in-house. Nitrogen (99.97%) and helium (99.999%) gases were purchased from Airgas. Both individual stock standard and isotopically labeled internal standard solutions were prepared on a weight basis in methanol. After preparation, standards were stored at -20 °C in darkness. A mixture of all contaminants was then prepared by appropriate dilution of individual stock solutions in MeOH in volumetric flasks. For calculations of labeled diluted standards and internal standards, see Supplementary Data. A 2-L aqueous solution at 400 µg L-1, referred to as the “spiking solution”, was freshly prepared in a volumetric flask every week during the project. A separate mixture of isotopically labeled internal standards and further dilutions, used for internal standard calibration, was similarly prepared in MeOH.

After reviewing the available scientific literature and considering the analytes' physical-chemical features and the type of target samples, the following extraction protocol was used as a starting point. 1) One hundred mL of deionized water was fortified at 200 ng L-1 of the target CECs in a volumetric flask. 2) In this study, we chose the Waters Oasis HLB cartridge to pretreat polar and nonpolar compounds using the same extraction conditions. The resulting solution was concentrated by SPE on a Waters Oasis HLB 60 mg, 3 mL cartridge, which was previously activated with 4 mL of methanol and then conditioned with 4 mL of deionized water. 3) Once the extraction was finished, the cartridge was dried under vacuum for 30 min to remove excess water, and unless eluted immediately, samples were stored at -20 °C wrapped in aluminum foil. 4) The cartridge elution was carried out with 2 × 2 mL of MeOH. 5) The extract was then evaporated to dryness under a gentle nitrogen stream at room temperature and reconstituted in a 2 mL GC glass vial in a mixture of 900 μL of ethyl acetate and 100 μL of the derivatization agent MTBSTFA. 6) Finally, the resulting solution was held for 60 min at 70 °C to foster the derivatization reaction, after which the extract was vortexed, cooled, and analyzed by GC-MS.
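For readability, the extraction and derivatization workflow above can also be laid out as a structured checklist. The sketch below simply re-encodes the parameter values quoted in the protocol; the field names are hypothetical and only meant to make the sequence of steps explicit.

    # Minimal sketch: the extraction/derivatization protocol above as a checklist.
    # Parameter values are copied from the text; field names are hypothetical.
    PROTOCOL = [
        {"step": "spike",        "sample_mL": 100, "spike_ng_per_L": 200},
        {"step": "SPE",          "cartridge": "Waters Oasis HLB 60 mg / 3 mL",
         "activation": "4 mL MeOH", "conditioning": "4 mL deionized water"},
        {"step": "dry",          "vacuum_min": 30},
        {"step": "elute",        "solvent": "2 x 2 mL MeOH"},
        {"step": "evaporate",    "gas": "N2", "temperature": "room temperature"},
        {"step": "reconstitute", "ethyl_acetate_uL": 900, "MTBSTFA_uL": 100},
        {"step": "derivatize",   "time_min": 60, "temp_C": 70},
        {"step": "analyze",      "instrument": "GC-MS"},
    ]

    for s in PROTOCOL:
        print(s)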

Several parameters, such as concentration rate, sample size, and type of SPE cartridge, were optimized. Sample pH adjustment and addition of chelating agents were also assessed for optimization. Each feature was tested in triplicate in the order described below. Once a parameter was optimized, it was incorporated into the method protocol for the optimization of the subsequent parameters. Sensitivity and accuracy were the criteria followed to select each parameter. In order to increase the method sensitivity, acquisition windows were established using the following criteria: 1) no more than 15 ions were monitored in each window; 2) the isotopically labeled internal standards were included in the same window as their corresponding analytes; and 3) the window had to be long enough to be trustworthy in case a change in the retention time took place. Taking all of this into consideration, two separate instrumental methods, Method 1 and Method 2, had to be created, both sharing the same chromatographic conditions. However, Method 1 and Method 2 differed in the acquisition windows as well as in the SIM ions monitored in each of them. Appendix A shows the target compounds and the SIM ions monitored for each of them, recorded by Method 1 and Method 2 and distributed in acquisition windows. One primary and two secondary ions, used for quantification and confirmation, respectively, were monitored in all cases except for 17β-estradiol, which presented poor fragmentation, so only one secondary ion was registered. Acquisition stopped at minutes 29 and 25 for Method 1 and Method 2, respectively, to prevent damage and pollution of the MS detector. Eleven minutes of solvent delay were set in both methods to prevent damage to the filament. Therefore, each sample extract was intended to undergo two consecutive injections, one for Method 1 and one for Method 2.

Phytophthora species are Oomycete organisms within the Kingdom Stramenopila that can cause diseases on a wide variety of agricultural crops and non-cultivated plants. Worldwide, several species, including P. citrophthora, P. syringae, P. nicotianae, P. citricola, P. palmivora, and P. hibernalis, are pathogens of citrus. Within California, many of these, including P. citrophthora, P. syringae, P. parasitica, and P. hibernalis, have been recovered from citrus. These species are active at different times of the year, with P. syringae and P. hibernalis present in the cooler seasons, P. parasitica during the summer, and P. citrophthora mostly causing disease during spring, fall, and winter. They are all capable of causing brown rot of citrus fruit in the orchard and after harvest in storage or during transit. P. citrophthora and P. parasitica will also cause root rot and gummosis in the orchard, which can make the establishment of new plantings difficult, leading to slow tree decline and reductions in productivity once introduced.

Epidemics have occurred as far back as 1863-1870 in Italy, in which large numbers of lemon trees were destroyed due to gummosis caused by Phytophthora citrophthora and P. parasitica, along with additional outbreaks in nearby Greece, where most of the lemon trees on the island of Paros were destroyed between 1869 and 1880. More recently, it was estimated that within California, Phytophthora root rot outbreaks can continue to lead to yield losses of up to 46% if left unmanaged. The California citrus industry is economically important for both the state and the country.
Fruit produced in California is primarily earmarked for fresh consumption, with the state producing roughly 59% of the total citrus grown within the United States, valued at around 2.4 billion dollars. Recently, P. syringae and P. hibernalis were designated quarantine pathogens in China, an important export country for the California citrus industry, following the detection of brown rot-infected fruit shipped from California to Chinese ports. Both species were previously considered of minor importance, but in recent years, P. syringae has been commonly found causing brown rot of fruit during the winter harvest season in the major citrus production areas in the Central Valley of California. This subsequently led to the restriction of the California citrus trade and extensive monetary losses. As of 2016, California citrus exports to China, which is one of the top 15 export countries for California citrus, were valued at $133 million, underlining the importance of preventing future trade restrictions to this important market due to phytosanitary issues caused by Phytophthora spp. The root and soil phases in the disease cycle of Phytophthora spp. are directly connected with the brown rot phase. Under favorable conditions, mainly wetness, high inoculum levels in the soil will cause root rot, which can be especially detrimental in nurseries and in the establishment of new orchards. This is when disease management is most critical. It has been shown that trees grown in soil infested with P. parasitica or P. citrophthora prior to repotting to larger containers were later less afflicted with dieback or stunted growth when treated with soil applications of mefenoxam and fosetyl-Al than trees that were not treated.

A transaction only succeeds if none of its reads are stale when the commit record is encountered

To optimize this process in cases where the view is small , the Corfuobject can create checkpoints and provide them to Corfu via a checkpoint call. Internally, Corfu stores these checkpoints on a separate shared log and accesses them when required on query_helper calls. Additionally, the object can forgo the ability to roll back before a checkpoint with a forget call, which allows Corfu to trim the log and reclaim storage capacity. The Corfu design enables other useful properties. Strongly consistent read throughput can be scaled simply by instantiating more views of the object on new clients. More reads translate into more check and read operations on the shared log, and scale linearly until the log is saturated. Additionally, objects with different in-memory data structures can share the same data on the log. For example, a name space can be represented by different trees, one ordered on the filename and the other on a directory hierarchy, allowing applications to perform two types of queries efficiently. We now substantiate our earlier claim that storing multiple objects on a single shared log enables strongly consistent operations across them without requiring complex distributed protocols. The Corfu runtime on each client can multiplex the log across objects by storing and checking a unique object ID on each entry; such a scheme has the drawback that every client has to play every entry in the shared log. For now, we make the assumption that each client hosts views for all objects in the system.
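A minimal in-memory sketch of the pattern described above follows: client-side views are built by playing a shared log multiplexed by object ID, and a checkpoint captures a view's state so the log prefix could be forgotten. This is an illustrative model, not the Corfu API; names such as SharedLog and ObjectView are hypothetical.

    # Minimal sketch: a view of one object built by playing a shared log that is
    # multiplexed across objects by object ID, plus a simple checkpoint.

    class SharedLog:
        def __init__(self):
            self.entries = []                 # list of (object_id, update)

        def append(self, object_id, update):
            self.entries.append((object_id, update))
            return len(self.entries) - 1      # global log offset

    class ObjectView:
        def __init__(self, log, object_id):
            self.log, self.object_id = log, object_id
            self.state, self.synced_to = {}, 0

        def sync(self):
            """Play the shared log forward, applying only this object's entries."""
            for off in range(self.synced_to, len(self.log.entries)):
                oid, update = self.log.entries[off]
                if oid == self.object_id:
                    self.state.update(update)
            self.synced_to = len(self.log.entries)

        def checkpoint(self):
            """Snapshot current state so the log prefix could be forgotten/trimmed."""
            return dict(self.state), self.synced_to

    log = SharedLog()
    view = ObjectView(log, "namespace")
    log.append("namespace", {"/a": 1})
    log.append("other-object", {"x": 9})
    log.append("namespace", {"/b": 2})
    view.sync()
    print(view.state)          # {'/a': 1, '/b': 2}
    print(view.checkpoint())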

Later in the paper, we describe layered partitioning, which enables strongly consistent operations across objects without requiring each object to be hosted by each client, and without requiring each client to consume the entire shared log. Many strongly consistent operations that are difficult to achieve in conventional distributed systems are trivial over a shared log. Applications can perform coordinated rollbacks or take consistent snapshots across many objects simply by creating views of each object synced up to the same offset in the shared log. This can be a key capability if a system has to be restored to an earlier state after a cascading corruption event. Another trivially achieved capability is remote mirroring; application state can be asynchronously mirrored to remote data centers by having a process at the remote site play the log and copy its contents. Since log order is maintained, the mirror is guaranteed to represent a consistent, system-wide snapshot of the primary at some point in the past. In Corfu, all these operations are implemented via simple appends and reads on the shared log. Corfu goes one step further and leverages the shared log to provide transactions within and across objects. It implements optimistic concurrency control by appending speculative transaction commit records to the shared log. Commit records ensure atomicity, since they determine a point in the persistent total ordering at which the changes that occur in a transaction can be made visible at all clients. To provide isolation, each commit record contains a read set: a list of objects read by the transaction along with their versions, where the version is simply the last offset in the shared log that modified the object. As a result, Corfu provides serializability with external consistency for transactions across objects. Corfu uses streams in an obvious way: each Corfu object is assigned its own dedicated stream.

If transactions never cross object boundaries, no further changes are required to the Corfu runtime. When transactions cross object boundaries, Corfu changes the behavior of its EndTX call to multiappend the commit record to all the streams involved in the write set. This scheme ensures two important properties required for atomicity and isolation. First, a transaction that affects multiple objects occupies a single position in the global ordering; in other words, there is only one commit record per transaction in the raw shared log. Second, a client hosting an object sees every transaction that impacts the object, even if it hosts no other objects. When a commit record is appended to multiple streams, each Corfu runtime can encounter it multiple times, once in each stream it plays. The first time it encounters the record at a position X, it plays all the streams involved until position X, ensuring that it has a consistent snapshot of all the objects touched by the transaction as of X. It then checks for read conflicts and determines the commit/abort decision. When each client does not host a view for every object in the system, writes or reads can involve objects that are not locally hosted at either the client that generates the commit record or the client that encounters it. We examine each of these cases: A. Remote writes at the generating client: The generating client – i.e., the client that executed the transaction and created the commit record – may want to write to a remote object. This case is easy to handle; as we describe later, a client does not need to play a stream to append to it, and hence the generating client can simply append the commit record to the stream of the remote object. B. Remote writes at the consuming client: A client may encounter commit records generated by other clients that involve writes to objects it does not host; in this case, it simply updates its local objects while ignoring updates to the remote objects. Remote-write transactions are an important capability.
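A minimal sketch of the optimistic-concurrency pattern described above: a commit record carries a read set of (object, version) pairs, where a version is the last log offset that modified the object, and the record commits only if none of those reads have become stale. This is an illustrative in-memory model, not the Corfu implementation; all names are hypothetical.

    # Minimal sketch: commit records with read sets and a commit/abort decision
    # based on whether any object in the read set has been modified since it was read.

    class Log:
        def __init__(self):
            self.entries = []                     # (kind, payload)
            self.last_modified = {}               # object_id -> offset of last write

        def append_write(self, object_id, update):
            self.entries.append(("write", (object_id, update)))
            off = len(self.entries) - 1
            self.last_modified[object_id] = off
            return off

        def append_commit(self, read_set, write_set):
            """read_set: {object_id: version_read}; write_set: {object_id: update}."""
            self.entries.append(("commit", (read_set, write_set)))
            return len(self.entries) - 1

        def decide(self, commit_offset):
            """Commit iff every read version is still the latest modification."""
            kind, (read_set, write_set) = self.entries[commit_offset]
            assert kind == "commit"
            ok = all(self.last_modified.get(oid, -1) <= ver for oid, ver in read_set.items())
            if ok:
                for oid in write_set:
                    self.last_modified[oid] = commit_offset
            return ok

    log = Log()
    v_a = log.append_write("A", {"k": 1})          # offset 0
    c = log.append_commit({"A": v_a}, {"B": {"k": 2}})
    print(log.decide(c))                           # True: A unchanged since it was read

    log.append_write("A", {"k": 3})                # concurrent writer bumps A
    c2 = log.append_commit({"A": v_a}, {"B": {"k": 4}})
    print(log.decide(c2))                          # False: stale read of A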

Applications that partition their state across multiple objects can now consistently move items from one partition to the other. In our evaluation, we implement Apache ZooKeeper as a Corfu object, create a partitioned namespace by running multiple instances of it, and move keys from one namespace to the other using remote-write transactions. Another example is a producer-consumer queue; with remote-write transactions, the producer can add new items to the queue without having to locally host it and see all its updates. C. Remote reads at the consuming client: Here, a client encounters commit records generated by other clients that involve reads to objects it does not host; in this case, it does not have the information required to make a commit/abort decision since it has no local copy of the object to check the read version against. To resolve this problem, we add an extra round to the conflict resolution process, as shown in Figure 5.3. The client that generates and appends the commit record immediately plays the log forward until the commit point, makes a commit/abort decision for the record it just appended, and then appends an extra decision record to the same set of streams. Other clients that encounter the commit record but do not have locally hosted copies of the objects involved can now wait for this decision record to arrive. Since decision records are only needed for this particular case, the Corfu object interface described in Section 5.1 is extended with an isShared function, which is invoked by the Tango runtime and must return true if decision records are required. Significantly, the extra phase adds latency to the transaction but does not increase the abort rate, since the conflict window for the transaction is still the span in the shared log between the reads and the commit record. D. Remote reads at the generating client: Corfu does not currently allow a client to execute transactions and generate commit records involving remote reads. Calling an accessor on an object that does not have a local view is problematic, since the data does not exist locally; possible solutions involve invoking an RPC to a different client with a view of the object, if one exists, or recreating the view locally at the beginning of the transaction, which can be too expensive.
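The decision-record round described for case C can be sketched as follows. The helper methods (hosts, decide_locally, wait_for_decision, apply_writes) are hypothetical placeholders for runtime functionality, not actual Corfu calls.

```python
# Sketch of how a consuming client might handle a commit record whose read
# set includes objects it does not host; all runtime helpers are illustrative.
def on_commit_record(runtime, record, position):
    hosted = [obj for obj in record.reads if runtime.hosts(obj)]
    if len(hosted) == len(record.reads):
        # All read objects are local: play their streams up to `position`
        # and check the read versions directly.
        committed = runtime.decide_locally(record, position)
    else:
        # Some reads are remote: block on the decision record that the
        # generating client appends after making its commit/abort call.
        committed = runtime.wait_for_decision(record, position, timeout=5.0)
    if committed:
        runtime.apply_writes(record, position)  # apply to local objects only
```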

If we do issue RPCs to other clients, conflict resolution becomes problematic; the node that generated the commit record does not have local views of the objects read by it and hence cannot check their latest versions to find read-write conflicts. As a result, conflict resolution requires a more complex, collaborative protocol involving multiple clients sharing partial, local commit/abort decisions via the shared log; we plan to explore this as future work. A second limitation is that a single transaction can only write to a fixed number of Corfu objects. The multiappend call places a limit on the number of streams to which a single entry can be appended. As we will see in the next section, this limit is set at deployment time and translates to storage overhead within each log entry, with each extra stream requiring 12 to 20 bytes of space in a 1KB to 4KB log entry. The decision record mechanism described above adds a new failure mode to Tango: a client can crash after appending a commit record but before appending the corresponding decision record. A key point to note, however, is that the extra decision phase is merely an optimization; the shared log already contains all the information required to make the commit/abort decision. Any other client that hosts the read set of the transaction can insert a decision record after a time-out if it encounters an orphaned commit record. If no such client exists and a larger time-out expires, any client in the system can reconstruct local views of each object in the read set synced up to the commit offset and then check for conflicts. vCorfu presents itself as an object store to applications. Developers interact with objects stored in vCorfu, and a client library, which we refer to as the vCorfu runtime, provides consistency and durability by manipulating and appending to the vCorfu stream store. Today, the vCorfu runtime supports Java, but we envision supporting many other languages in the future. The vCorfu runtime is inspired by the Tango runtime, which provides a similar distributed object abstraction in C++. On top of the features provided by Tango, such as linearizable reads and transactions, vCorfu leverages Java language features which greatly simplify writing vCorfu objects. Developers may store arbitrary Java objects in vCorfu; we only require that the developer provide a serialization method and annotate the object to indicate which methods read or mutate it, as shown in Figure 5.4. Like Tango, vCorfu fully supports transactions over objects with stronger semantics than most distributed data stores, thanks to inexpensive global snapshots provided by the log. In addition, vCorfu also supports transactions involving objects not in the runtime's local memory, opacity, which ensures that transactions never observe inconsistent state, and read-own-writes, which greatly simplifies concurrent programming. Unlike Tango, the vCorfu runtime never needs to resolve whether transactional entries in the log have succeeded, thanks to a lightweight transaction mechanism provided by the sequencer. Each object can be referred to by the id of the stream it is stored in. Stream ids are 128 bits, and we provide a standardized hash function so that objects can be stored using human-readable strings. vCorfu clients call open with the stream id and an object type to obtain a view of that object.
The client also specifies whether the view should be local, which means that the object state is stored in-memory locally, or remote, which means that the stream replica will store the state and apply updates remotely. Local views are similar to objects in Tango and especially powerful when the client will read an object frequently throughout the lifespan of a view: if the object has not changed, the runtime only performs a quick check call to verify no other client has modified the object, and if it has, the runtime applies the relevant updates. Remote views, on the other hand, are useful when accesses are infrequent, the state of the object is large, or when there are many remote updates to the object – instead of having to play back and store the state of the object in-memory, the runtime simply delegates to the stream replica, which services the request with the same consistency as a local view. To ensure that it can rapidly service requests, the stream replicas generate periodic checkpoints. Finally, the client can optionally specify a maximum position to open the view to, which enables the client to access the history, version, or snapshot of an object.
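As a point of comparison, the accessor/mutator annotation idea from Figure 5.4 can be sketched in Python with simple decorators; vCorfu itself is implemented in Java with annotations, so the decorator and class names below are purely illustrative.

```python
# Python sketch of marking which methods read or mutate a stored object;
# the real vCorfu runtime uses Java annotations, so this is illustrative only.
def mutator(method):
    """Mark a method whose effects must be appended to the object's stream."""
    method._is_mutator = True
    return method


def accessor(method):
    """Mark a read-only method; the runtime syncs the view before calling it."""
    method._is_accessor = True
    return method


class CorfuMap:
    def __init__(self):
        self._data = {}

    @mutator
    def put(self, key, value):
        self._data[key] = value

    @accessor
    def get(self, key):
        return self._data.get(key)
```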

Consistency is a highly desirable property for distributed systems

As a result, CH4 production made up only 5% of net GHG emissions at Compost PAP and 48% of those from Compost CH. This finding suggests that aeration during the thermophilic phase of composting is critical in minimizing the GHG footprint of EcoSan systems. Waste treatment ponds produced anaerobic conditions that generate high levels of CH4 and very little CO2. The GHG contribution of waste stabilization ponds can be mitigated by the use of CH4 gas capture and electricity generation. Anaerobic digestion coupled to CH4 capture has been used to treat livestock manure and in some wastewater treatment plants for decades. However, many waste stabilization ponds throughout the world, including those sampled in this study, do not capture and reuse the CH4 generated during waste treatment. Market barriers – including the initial financial investment costs of CH4 capture technology and electricity generation facilities – regulatory challenges, and lack of access to technology severely limit its widespread adoption in regions of the world that currently lack basic sanitation needs. Further, the efficiency of pathogen removal in waste stabilization ponds is highly variable, thereby limiting the effectiveness of waste stabilization ponds in regions of the world with limited technological and capital resources. Nitrous oxide is produced during the microbially mediated processes of nitrification and denitrification, and can be produced in conditions with high to low levels of oxygen. Nitrification, the conversion of ammonium to nitrate through microbial oxidation, requires a source of ammonium and oxygen. During nitrification, N2O can be formed by the nitrate reductase enzyme under anaerobic conditions. Denitrification, the reduction of nitrate to dinitrogen through a series of intermediates, requires a source of nitrate, organic carbon, and limited oxygen.

Nitrous oxide can form as a result of incomplete denitrification to N2. Human waste contains organic carbon and a range of organic and inorganic forms of nitrogen. Therefore, the oxygen conditions of a particular waste treatment pathway are a major control on N2O fluxes. In the anaerobic waste stabilization ponds, N2O was undetectable. In municipal wastewater treatment plants, measurements of N2O vary widely and can be mitigated by technologies that remove total nitrogen. Grass fields where waste was illegally disposed exhibited high and spatially variable N2O and CH4 fluxes. We observed a trade-off between N2O and CH4 across sanitation pathways. Whereas waste stabilization ponds produced high levels of CH4 and no N2O, both EcoSan systems tended to have high fluxes of N2O. Nitrous oxide in compost piles can be produced by both nitrification and denitrification processes present along oxygen, moisture, and C:N gradients within the pile. Reducing occurrences of anaerobic microsites could further limit N2O production from EcoSan compost; however, N2O production could still result from nitrification conditions. Despite this pollution swapping, and taking into account the greater global warming potential of N2O, the largest contributor to GHG emissions from these systems is still CH4. Therefore, without systems in place to capture and oxidize CH4, the aerobic EcoSan system is a favorable system relative to waste stabilization ponds and illegal disposal on grass fields with respect to its impact on the climate. The management of aerobic biogeochemical conditions in compost piles plays a key role in minimizing CH4 and N2O losses. We observed large differences in CH4 emissions, and consequently in overall GHG emissions, across the two EcoSan systems in our study, implying opportunities for improved management. We tested this explicitly with a targeted comparison of GHG emissions above two piles, one with a permeable soil lining and one with an impermeable cement lining, at the Compost CH site, and with a second comparison of GHG emissions before and after turning pile material. We found that CH4 emissions from the cement-lined pile were approximately four times greater than from the soil-lined pile, despite no significant temperature or CO2 emission differences.
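Comparisons like these, and the pathway-level trade-off above, rest on weighting each gas by its global warming potential. The short sketch below illustrates the arithmetic, using IPCC AR5 100-year GWP values (CH4 = 28, N2O = 265) and invented fluxes rather than the measurements reported in this study.

```python
# Sketch of the CO2-equivalent weighting used to compare sanitation pathways;
# GWP100 values follow IPCC AR5 and the example fluxes are made up.
GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}


def co2_equivalent(fluxes_g_m2_d: dict) -> float:
    """Convert per-gas fluxes (g m-2 d-1) into a single CO2-eq flux."""
    return sum(GWP[gas] * flux for gas, flux in fluxes_g_m2_d.items())


compost = {"CO2": 50.0, "CH4": 0.2, "N2O": 0.05}  # hypothetical aerated pile
pond = {"CO2": 5.0, "CH4": 3.0, "N2O": 0.0}       # hypothetical anaerobic pond
print(co2_equivalent(compost), co2_equivalent(pond))
```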

The paired-pile result is evidence that higher CH4 emissions were driven by a larger methanogenic fraction, expressed as the amount of CH4 emitted per unit CO2, in the lined pile, indicating a greater prevalence of anaerobic conditions due to higher pile moisture. The cement-lined pile in the paired-pile experiment had no drainage mechanism and therefore likely represents a high endmember for wet pile conditions and high CH4 emissions. Notably, the standard design for cement-lined piles at the Compost CH site includes a lateral overflow PVC pipe, providing passive drainage, while at Compost PAP a soil lining is used without a PVC drain, and in both cases the CH4 emissions observed were much lower. The very high CH4 emissions from the undrained pile therefore likely reflect a very high moisture end-member for thermophilic composting. For future EcoSan implementations there are important trade-offs to consider in pile design. The advantages of a PVC drain and associated storage tank are that potentially pathogenic liquid is contained, can be recycled to maintain optimal pile moisture levels under drier conditions and, if sanitized, the nutrient content of the leachate can be recycled. In contrast, a soil floor costs less, but it is important to consider, and monitor for, the potential leaching of pathogens, nutrients that can cause algal blooms, and trace metals that could contaminate drinking water when using a permeable floor. Future studies should further explore the quantity, composition, and timing of pile leaching, and assess the efficacy of soil as a filter to avoid contamination of groundwater alongside lowering GHG emissions. Though use of a permeable soil floor and/or PVC overflow drain showed potential to reduce EcoSan composting GHG emissions, the effects of turning the pile, even once, were even greater. Emissions of CH4 dropped two orders of magnitude, approaching zero, within one day after turning and stayed comparably low through the third day. Piles in the EcoSan second stage are turned every 7 to 10 days; therefore it is likely that CH4 emissions remain low throughout this entire phase, as originally evidenced by the >3-month time points in the initial measurements at Compost CH and Compost PAP. From these results, it may appear to be beneficial from a climate forcing perspective to reduce the time spent in the first static phase; however, this must be balanced by the need to safely manage the pathogen burden at this early treatment stage, especially if piles are turned using manual labor.

Turning must only begin when pathogen abundance in the material has been reduced to a safe level, thus safeguarding the health of employees and the local environment. Furthermore, though not observed in this study, past work has also shown that pile turning can increase N losses. Significant spikes in ammonia and N2O emissions follow mechanical turning of composting manure. It is therefore possible that within EcoSan composting there may be a trade-off between N2O and CH4 emissions between the initial static and later turned stages, similar to our observations across different sanitation pathways. Our gridded sampling scheme also allowed us to test the hypothesis that aeration drives CH4 emissions within piles. The results confirmed the utility of our model-based sampling design, with mean CH4 emissions four to five times higher from pile centers than pile corners or edges, regardless of the general drainage characteristics of the pile. An alternative to early turning may be the use of additional engineering to further aerate the middle of large piles where, even under well-drained pile conditions, we observed steep increases in CH4 emissions. One solution may be the use of perforated PVC pipes for passive aeration of the pile at relatively low cost. Thermophilic composting is most effective under aerobic conditions. Understanding how management can best support aerobic conditions provides a win-win opportunity to increase the operational efficiency of composting for treating waste while reducing the associated GHG emissions. The preliminary comparisons in this study captured significant effects of pile lining permeability and pile turning on GHG emissions during thermophilic composting, and helped us interpret the longer-term dynamics of GHG emissions during composting. Although our targeted measurements identify two of the management controls of GHG differences, robust estimates of emission factors for EcoSan composting require a more comprehensive assessment of GHG dynamics, considering different management options, and with more extensive sampling throughout the composting operational stages. In sum, these results support the potential for EcoSan composting to further reduce CH4 and overall GHG emissions associated with waste containment and treatment if piles are carefully designed and effectively managed to support aerobic metabolism. Strong consistency guarantees simplify programming complex, asynchronous distributed systems by increasing the number of assumptions a programmer can make about how a system will behave.

For years, system designers focused on how to provide the strongest possible guarantees on top of unreliable and even malicious systems. The rise of the Internet and cloud-scale computing, however, shifted the focus of system designers towards scalability. In a rush to meet the needs of cloud-scale workloads, system designers realized that if they weakened the consistency guarantees they provided, they could greatly increase the scalability of their systems. As a result, designers simplified the guarantees provided by their systems and weaker consistency models such as eventual consistency emerged, greatly increasing the burden on developers. This movement towards weaker consistency and reduced features is known as NoSQL. NoSQL achieves scalability by partitioning or sharding data, spreading the load across multiple nodes. In order to maintain scalability, NoSQL designers ensured requests were not required to cross multiple partitions. As a result, they dropped traditional database features such as transactions in order to maintain scalability. While this worked for some applications, many developers with applications that needed this functionality were forced to choose between a database with all the functionality they needed, or adapting their applications to the new world of relaxed guarantees provided by NoSQL. Programmers found ways around the restrictions of weaker consistency by retrofitting transaction protocols on top of NoSQL, or by finding the minimum guarantees required by their application. Chapter 2 explores this pendulum away from and back towards consistency. This dissertation explores Corfu, a platform for scalable consistency. Corfu answers the question: "If we were to build a distributed system from scratch, taking into consideration both the desire for consistency and the need for scalability, what would it look like?". The answer lies in the Corfu distributed log. Chapter 3 introduces the Corfu distributed log. Corfu achieves strong consistency by presenting the abstraction of a log – clients may read from anywhere in the log but they may only append to the end of the log. The ordering of updates on the log is decided by a high-throughput sequencer, which we show can handle nearly a million requests per second. The log is scalable as every update to the log is replicated independently, and every append merely needs to acquire a token before beginning replication. This means that we can scale the log by merely adding additional replicas, and our only limit is the rate of requests the sequencer can handle. While Chapter 3 describes how to build a single distributed log, multiple applications may wish to share the same log. By sharing the same log, updates across multiple applications can be ordered with respect to one another, which forms the basic building block for advanced operations such as transactions. Chapter 4 details two designs for virtualizing the log: streaming, which divides the log into streams built using log entries which point to one another, and stream materialization, which virtualizes the log by radically changing how data is replicated in the shared log. Stream materialization greatly improves the performance of random reads, and allows applications to exploit locality by placing virtualized logs on a single replica. Efficiently virtualizing the log turns out to be important for implementing distributed objects in Corfu, a convenient and powerful abstraction for interacting with the Corfu distributed log introduced in Chapter 5.
Rather than reading and appending entries to a log, distributed objects enable programmers to interact with in-memory objects which resemble traditional data structures such as maps, trees and linked lists. Under the covers, the Corfu runtime, a library which client applications link to, translates accesses and modifications to in-memory objects into operations on the Corfu distributed log. The Corfu runtime provides rich support for objects.
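The paragraph above can be made concrete with a toy state-machine-replication sketch: mutators append updates to a shared log, and accessors replay the log to bring the in-memory object up to date. The in-process log and the class names below are invented for illustration; the real Corfu runtime replicates entries across storage servers and sequences appends through the sequencer.

```python
# Toy sketch of a log-backed object: writes go to the log, reads replay it.
import json


class SharedLog:
    def __init__(self):
        self.entries = []            # stand-in for the replicated, sequenced log

    def append(self, entry) -> int:
        self.entries.append(entry)   # real Corfu acquires a sequencer token first
        return len(self.entries) - 1

    def read_from(self, offset):
        return list(enumerate(self.entries))[offset:]


class LogBackedMap:
    def __init__(self, log: SharedLog):
        self.log, self.state, self.synced_to = log, {}, 0

    def put(self, key, value):       # mutator: append the update, do not apply it
        self.log.append(json.dumps({"op": "put", "k": key, "v": value}))

    def get(self, key):              # accessor: sync with the log, then read
        for offset, raw in self.log.read_from(self.synced_to):
            update = json.loads(raw)
            self.state[update["k"]] = update["v"]
            self.synced_to = offset + 1
        return self.state.get(key)
```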

Cross peaks in the CP-PDSD experiment represent rigid dipolar couplings

Examining the reorganization of the secondary plant cell wall polymers due to mechanical preprocessing is important for the development of the plant cell wall model and effective utilization of biomass without recalcitrance. Conversion from plant biomass to bio-product often necessitates mechanical preprocessing in deconstruction methods; milling times vary but can be as short as 2 min and can exceed 4 hours. Milling of stem tissue at 30 Hz for 2 min was selected to allow direct comparison to DMSO swelling studies employing the same milling time. Typically, plant cell wall samples are milled at 30 Hz for 2 minutes, followed by up to 24 hours of milling at 10 Hz depending on the amount of material. As a result, these experiments report on cell wall structure after the reorganization of the plant cell wall polymers that occurs during mechanical preprocessing. For example, solid-state NMR measurements on maize biomass after mechanical and solvent processing methods support lignin association with the surface of hemicellulose-coated cellulose fibers in the cell wall,61 a different result than those obtained from recent solid-state NMR measurements on less processed grass and other plant species biomass. CO2 labeling is challenging for large mature plants such as poplar trees, given that labeling chambers would need to adapt over the life cycle of the organism, and there are few commercial sources. In this current study, 13C solid-state NMR data collected at common laboratory milling times are compared for native and milled sorghum stems, the tissue with the highest amount of secondary plant cell wall, chosen to boost the sensitivity of solid-state NMR recalcitrance markers to mechanical processing.

Comparisons are limited to changes in signal intensity through the integrated comparison of peaks found in the control versus the milled samples for the CP-PDSD, CP-rINADEQUATE, and rINEPT experiments. Polymers are evaluated across the secondary plant cell wall structure by combining the CP-based experimental observations, which examine highly rigid components, with the rINEPT, which examines the highly dynamic components of the sample. This current work also highlights initial characterization of highly dynamic lignin and hemicellulose polymers of the plant cell wall matrix in the rINEPT for the first time. However, the sorghum stems milled for 15 minutes are compared to previous work by Ling et al. 2019, which shows that the crystallinity of cotton cellulose decreases by an average of 40% across 13 techniques. This is important because cotton is a naturally pure form of cellulose5,38 and the results of milling cellulose in the secondary plant cell wall can be observed in the sorghum stems milled for 15 minutes. For simplicity, the results are ordered by the polymer classes presented in Figure 2B: cellulose fibrils, structural hemicellulose, and lignin. FE-SEM images of the control and milled samples in Figure 6 show the morphological structure of the cellulose fibrils in their bundled cylindrical form as lines. Cellulose fibrils are typically on the order of 1-2 μm in diameter for a scale reference, which is on the order of some milling efforts of softwood "fines". In Figure 6A, the cellulose fibrils are shown in their bundled structure; to the right, in Figure 6B, the stem vasculature can be seen as the large main cylindrical shape with thin cellulose fibrils forming lines across it. After milling, the vascular structures comprising the stem are lost, which makes sense as the macroscopic stem structure is homogenized into a paste upon milling. However, the cellulose fibril structure remains after 2 minutes and 15 minutes of milling. Contrasting the control and stems milled for 2 minutes at 30 Hz, the individual cellulose fibers within the sample are largely similar, with a slightly rougher texture. After 15 minutes of milling stems at 30 Hz, there is noticeable fraying of the cellulose fibril bundles in Figure 6E; the thinner cylindrical lines appear to be consistent with microfibrils.

Very different results appear after milling cellulose fibrils for 15 minutes at 30 Hz in this study compared with previous work on milled cotton. Here, for sorghum, more intact fibril morphology is maintained when milling the cellulose fibrils within a secondary plant cell wall. In the Ling et al. 2019 study, cotton cellulose was fractured to the point that microfibril structures were obscured and fibril chunks remained. There was qualitatively less severe cracking on the surface of fibrils in Figure 6E in the plant cell wall sorghum sample in this study, and general fibril shapes are still apparent. In this current study, the initial morphology of the intact cellulose fibrils in the plant cell wall was approached with FE-SEM; other techniques may be considered for future evaluation of cellulose fibril structure within the heterogeneous plant cell wall during deconstruction of sorghum stems. Other techniques may yield further information related to recalcitrance due to lowered accessible cellulose fibril surfaces available for digestion. In the future, one class of attractive microscopies is vibrational microscopy for verifying cellulose, including crystalline and amorphous cellulose. However, implementing these techniques suffers from the complexity of an intact plant cell wall and cellulose fluorescence. The benefit of microscopy over spectroscopy for assessing cellulose in the plant cell wall is that the arrangement of the polymer in cellulose fibrils varies in orientation and direction, so techniques which can focus on one cellulose fibril at a time are favorable. Techniques such as confocal Raman spectroscopy with a 785 nm or 1064 nm lasing source or AFM-IR would both require optical arrangements suitable for detection of cellulose signals between 3000-400 cm-1 for informing on the cellulose fibril structure and on lignin or hemicellulose on cellulose fibril surfaces relevant for recalcitrance.

Vibrational microscopy would also have the advantage of confirming cellulose, as the cellulose fibril structure is obscured during deconstruction. However, within the scope of this current study, which includes FE-SEM, structures were confirmed using the literature98 and the assertion of cellulose predominance in the plant cell wall. For sorghum secondary plant cell walls subjected to vibratory milling, recalcitrance would be supported by a correlated decrease of crystalline cellulose and an increase in rigid amorphous cellulose. Such details can be extracted experimentally on the cellulose fibril structure using 2D solid-state NMR. Molecular insights specific to the constitution and state of the cellulose fibrils were assessed with CP-PDSD experiments with 1500 ms and 30 ms mixing times to assess crystallinity. Peaks in the CP-PDSD were assigned using previously characterized peaks for polymers in the sorghum secondary plant cell wall (Gao et al. 2020) and signals consistent with the CP-rINADEQUATE experiment for each dimension. The CP-based experiments filter for more rigid components of the secondary plant cell wall because the 1H-13C magnetization transfers are more efficient for rigid spins. The mixing time of the CP-PDSD scales with the 13C-13C through-space distance over which spins exchange magnetization. The downside of the experiment is the broader line shapes due to heterogeneous line broadening from spins in multiple orientations coupling at similar frequencies, but crucial information about the cellulose fibril morphologies can still be obtained. The 1500 ms mixing period of the 2D CP-PDSD experiment reports on the larger cellulose structure along fibrils and between fibrils. The 30 ms CP-PDSD experiment has a mixing time of 30 ms, where there is enough time for the magnetization transfers to pass between the carbons within the glucose sugar of the monomer of the cellulose fibrils in Figure 8. First examination of the cellulose fibril shows magnetization transfers between D-glucose monomers of cellulose polymers in the 1500 ms CP-PDSD experiment. Cellulose carbon 1 to carbon 2 transfers have the same chemical shifts across both amorphous and crystalline cellulose.

The reduction in overall signal intensity, considering sample load, is negligible after 2 minutes of milling and >88% after 15 minutes of milling. This makes sense, as cellulose fibrils appear to be broken down into smaller microparticle fragments. According to the FE-SEM images, there may be fewer dipolar coupling-based magnetization transfers available along and between cellulose polymers after 15 minutes of milling. The proportional intensity changes between amorphous and crystalline cellulose signals of carbon 1 to 4 and carbon 1 to 6 magnetization transfers were assessed in the CP-PDSD experiments to identify the conversion of crystalline cellulose to amorphous cellulose. The proportion of crystalline to amorphous cellulose for the glucose carbon 1 to carbon 4 transfer appears to decrease more for the 2-minute milling period, to 99.70 ± 1.59% and 82.17 ± 0.88%, respectively, of the relative peak intensity before milling. The signal intensities for crystalline cellulose and amorphous cellulose appear to be nearly equal for the glucose carbon 1 to carbon 4 transfer after 15 minutes of milling. The cellulose carbon 4 is of particular relevance, as amorphous cellulose has a chemical shift around 84 ppm and crystalline cellulose has a chemical shift around 89 ppm. The isolation of cellulose carbon 4 in solid-state NMR spectra makes it a more reliable marker for amorphous and crystalline cellulose because these peaks have less overlap than others. After 2 minutes of milling, the crystalline cellulose content appears higher than the amorphous cellulose content, and the trend appears to also hold true for the 15-minute period. The reduction of amorphous cellulose signal may be due to amorphous cellulose becoming more mobile, resulting in less efficient CP transfer necessary for the 1500 ms CP-PDSD experiment. However, Ling et al. 2019 found that even within the 1D CP experiments a conversion from crystalline to amorphous cellulose was observable as part of their prediction: crystalline cellulose within cellulose fibrils becomes amorphous upon milling. The conversion of crystalline to amorphous cellulose observed after milling was not consistently observed with sorghum. The ratio of crystalline to amorphous cellulose remains the same in the 1500 ms CP-PDSD experiments as larger fibril structures are broken down in the milling process. The proportional intensities of crystalline and amorphous cellulose signals for carbon 1 to 4 remained the same after milling for 2 minutes and for 15 minutes at 30 Hz. The hypothesis of cellulose increasing recalcitrance was not supported, as demonstrated by the unambiguous carbon 1 to 4 peaks for crystalline and amorphous cellulose. The 1500 ms CP-PDSD carbon 1 to carbon 6 transfers provide similar insight. The proportion of crystalline to amorphous cellulose for the glucose carbon 1 to carbon 6 transfer appears to decrease more for the 2-minute milling period, to 90.47 ± 0.90% and 87.38 ± 0.88%, respectively. The signal intensities for crystalline cellulose and amorphous cellulose appear to be nearly equal for the glucose carbon 1 to carbon 6 transfer after 15 minutes of milling. Both the carbon 4 and carbon 6 regions highlighted in Figure 7A-B appear to be low in signal intensity, and it is worth noting that the superposition of noise over weak, broad peaks could distort the integrations, so careful interpretation is necessary. Although the stems milled for 15 minutes show that rigid cellulose within the fibril has greater amorphous cellulose than crystalline cellulose, the low signal intensity makes this observation somewhat ambiguous. The lower overall signal intensity of the stems milled for 15 minutes means the noise is superimposed over the tops of the peaks, compounding the error in these results. This factor is particularly relevant for the sample milled for 15 minutes, given that the signal intensity decreases by at least 80% for all peaks. For this reason, over-interpretation may be a liability when assessing recalcitrance using carbon 6 signals of cellulose in the 2D CP-PDSD experiments, whereas cellulose carbon 4 chemical shift changes provide more information on recalcitrance due to morphology changes from crystalline to amorphous cellulose. Where the 1500 ms 2D CP-PDSD can give some insight into the larger cellulose structure, the 30 ms 2D CP-PDSD experiment reports on the D-glucose subunit of the cellulose polymer. Similarly, the 30 ms CP-PDSD experiment showed that the overall decrease in cellulose structure signal was negligible after 2 minutes and >86% after 15 minutes of milling. When signal intensity is severely reduced, the interpretation of integrations is less reliable due to noise superimposed over the tops of peaks. For the 2D PDSD experiments in this study, interpretations of carbon 1 to 4 peaks are more reliable than the carbon 1 to 6 signals. The proportion of crystalline to amorphous cellulose for the glucose carbon 1 to carbon 4 transfer appears to change more for the 2-minute milling period, to 101.53 ± 1.69% and 87.91 ± 1.00%, respectively. The signal intensities for crystalline cellulose and amorphous cellulose appear to be nearly equal for the glucose carbon 1 to carbon 4 transfer after 15 minutes of milling.
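The crystalline-to-amorphous comparison described above boils down to integrating the carbon-4 signals near 89 ppm (crystalline) and 84 ppm (amorphous) and tracking their ratio across milling times. The sketch below illustrates that arithmetic with hypothetical integration windows and placeholder data; it does not use the measured spectra from this study.

```python
# Sketch of a crystallinity ratio from carbon-4 peak integrals; the windows
# and any data passed in are illustrative, not the study's measured values.
import numpy as np


def integrate(ppm, intensity, lo, hi):
    """Sum intensity over a chemical-shift window (assumes a uniform ppm grid)."""
    mask = (ppm >= lo) & (ppm <= hi)
    return float(np.sum(intensity[mask]))


def crystallinity_ratio(ppm, intensity):
    crystalline = integrate(ppm, intensity, 87.0, 91.0)  # ~89 ppm C4
    amorphous = integrate(ppm, intensity, 82.0, 86.0)    # ~84 ppm C4
    return crystalline / amorphous

# In practice one would compare ratios for the control, 2 min, and 15 min
# spectra and propagate the integration noise before calling a change real.
```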

The goal was to identify crop varieties that have high salt tolerance

More generally, on the time scale of centuries, marine transgression may cause rapid salinization of entire aquifers. In Western Europe, Holocene transgressions of a few thousand years have brought salt water of corresponding age to a depth of over 200 m. Nevertheless, at many places all around the world fresh and brackish waters have been found on the continental shelves. Numerical modeling by Post and Simmons illustrates how low-permeability lenses protect fresh water from mixing with downward invading overlying saline ocean waters with higher density. Van Duijn et al. gave a general, modern stability analysis of such density-stratified flows below a ponded surface. Saltwater intrusion by tides in the mouths of rivers—The Zuiderzee Works and Delta Plan stopped salinization from tidal motion in the North. In the Southwestern Delta, tidal motion was only partly eliminated and no major freshwater reservoirs are available, like the Lakes IJssel and Marken for the northern provinces. Instead, fresh water supply in the southwest comes more directly from diversions of water from the major rivers. In the 20th century the quality of the Rhine water gradually deteriorated, until a series of international treaties brought improvement. The river water quality was further reduced by an inward directed flow of high-density saline water underneath the outward directed flow of lighter runoff water. Traditionally, the tides had free play and salinized the river water far inland, particularly in periods of low river flows. As a result of this salinization, in the 1970s the surface water in the important Westland greenhouse district between Rotterdam and The Hague was hardly suitable for use as irrigation water.

The growers themselves made it even worse by using drainage return flows, resulting from high leaching fractions combined with high application of fertilizers. The RAND Corporation did a policy analysis of water management for the Netherlands, balancing engineering ambitions and agricultural interests, specifically regarding the desired irrigation water quality for use in greenhouse horticulture. The Delta Works have provided some relief from saltwater intrusion in river mouths; however, conflicting agricultural and environmental interests continue to dominate the discussion about seawater blockage as related to the desire to maintain brackish aquatic ecosystems. Saltwater intrusion by inward flow of water to land below sea level—Fig. 32 shows the depth of the brackish-fresh interface in the coastal regions of the Netherlands. Similar maps are available for the coastal region of Belgium. Because fresh water is floating on top of saline groundwater in the dunes area along the west coast, saline intrusion is strongest in the North and Southwest, where coastal dunes are absent. At numerous locations in the dunes, fresh dune water is pumped as a source for preparing drinking water for the western part of the country, where the groundwater is too saline because of continued saltwater intrusion. For example, a dune area of 3400 ha along the western coast has supplied fresh drinking water to Amsterdam since 1853. To keep the floating bodies of fresh water in the dunes intact, the freshwater pumping is compensated for by excess rainfall and infiltration of river water, partly after having been stored in the Lakes IJssel and Marken. Fresh water floating on top of salt water in agricultural fields—Recently, fresh water lenses floating on top of saline groundwater have been fully recognized as being of great importance, not only in the dunes, but also in farmer fields along coastal regions where upward seepage of saline groundwater occurs.

These freshwater lenses can come from rain, melted snow, and increasingly also from irrigation of agricultural lands. Eeman et al. made a detailed analysis of the thickness of a freshwater lens and the transition zone between this lens and the upwelling saline water. Starting from a fully saline condition between drains or ditches and assuming constant rates of saltwater upwelling and freshwater infiltration, they showed that a freshwater lens will grow until it reaches a maximum size. Moreover, they concluded that the fresh/saline ratio of the drainage water will change from zero to the infiltration/upward seepage ratio. However, as shown by others, seasonal variations of infiltration and plant root withdrawal of fresh water will cause temporal fluctuations of the thickness of the lenses and the fresh-saline ratio of the drainage water. Salt tolerance in a generally humid and cool climate—Most salt tolerance data for field crops and flower species date from before 2000 and were reviewed by Van Bakel et al. and Stuyt et al. The latter compilation, in Dutch, is the most complete, providing salt tolerance thresholds for 35 individual crops or groups of crops. Salt tolerance data for greenhouse horticultural crops were brought together by Sonneveld and by Sonneveld and Voogt, and included interactions between plant nutrition and salinity. In the last decade, salt tolerance tests have been carried out at Salt Farm Texel. The 160 m2 experimental plots were irrigated using eight replications of seven different salt concentrations, obtained by mixing saline seawater with fresh water. Because of the high hydraulic conductivity of the soil, it was possible to maintain the desired concentration throughout the root zone, irrespective of the weather in the growing season. Salt tolerance was tested for six crops: potato, carrot, onion, lettuce, cabbage, and barley. The data were analyzed using the Maas and Hoffman and the Van Genuchten and Gupta models.
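The Maas and Hoffman analysis mentioned above fits a threshold-slope response: relative yield stays at 100% up to a crop-specific salinity threshold and then declines linearly with root-zone salinity. The sketch below illustrates the form of the model; the threshold and slope values are illustrative defaults, not the parameters fitted in the trials described here.

```python
# Sketch of the Maas and Hoffman threshold-slope salt tolerance model;
# the default threshold and slope are illustrative placeholder values.
def maas_hoffman_relative_yield(ece_ds_m: float,
                                threshold_ds_m: float = 1.7,
                                slope_pct_per_ds_m: float = 12.0) -> float:
    """Relative yield (0-100%) as a function of root-zone salinity ECe (dS/m)."""
    if ece_ds_m <= threshold_ds_m:
        return 100.0
    return max(0.0, 100.0 - slope_pct_per_ds_m * (ece_ds_m - threshold_ds_m))


print(maas_hoffman_relative_yield(4.0))  # ~72% at 4 dS/m with these example values
```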

An alternative model, based on the Dalton-Fiscus model for simultaneous uptake of water and solutes, was explored by Van Ieperen. Salinization in the countries around the North Sea—In principle, the lowland coastal regions of Belgium, Germany, the Netherlands, Sweden, and the United Kingdom face similar threats from salinity as in the Netherlands. For example, there was widespread flooding of farmland along the UK east coast during the Southern North Sea storm of December 5, 2013. Due to different economic and political priorities, the responses to such events have varied. The Netherlands was saved from potentially disastrous flooding in 2013, thanks to the Delta Plan response to the 1953 Storm Flood. Gould et al. analyzed the impact of coastal flooding on agriculture in Lincolnshire, UK. They noted that flood risk assessments typically emphasize the economic consequences of coastal flooding on urban areas and national infrastructure and tend to omit the long-term impact of salinization of agricultural land. Considering this long-term salinization, they calculated financial losses ranging from £1366/ha to £5526/ha per inundation, which would be reduced by between 35% and 85% by post-flood switching to more salt-tolerant crops. Egyptians have practiced irrigated agriculture for about 5000 years in the Nile River valley, using basin irrigation dependent on the rise and fall of flows in the Nile river. Since 3000 BCE, the Egyptians have constructed earthen banks to form flood basins of various sizes, filled with Nile water to saturate soils for crop production. Egyptian irrigated agriculture has been sustainable for thousands of years, in contrast to other civilizations in Mesopotamia. Reasons were provided by Hillel, pointing to the annual natural flooding that deposited nutrient-rich soil material, the annual cycles of rising and falling of the Nile river that created fluctuations of the groundwater table and yearly flushing of salts from its narrow irrigated flood plains, and the annual inundations that occurred in the late summer and early fall, after the spring growing season. With the construction of the Aswan High Dam, most of the land was converted to perennial irrigation and the irrigated area increased from 2.8 to 4.1 Mha. The year-round irrigation and lack of leaching by annual pulsing of the Nile river triggered soil salinization. More than 80% of Egypt's Nile water share is used in agriculture. Water saving in agriculture is a major challenge because annual per capita water availability in Egypt is expected to decrease to 560 m3 from a current level of 950 m3. The salts of the Nile basin are either of intrinsic origin, from sea water intrusion, or from irrigation with saline groundwater. Since the climate of Egypt is characterized as arid, with annual rainfall ranging from 5 to 200 mm compared to evaporation rates of 1500-2400 mm, crop production is not possible in most parts of Egypt without irrigation. Salinity problems in the irrigated areas are widespread and about 1 million ha are already affected. At present, only 5.4% of the land resources in Egypt are of excellent quality, while about 42% are relatively poor due to salinity and sodicity problems. Soils in the Nile valley and the Delta are Vertisols, characterized by substantial expansion on wetting and shrinkage on drying. In Egypt, productive lands are finite and irreplaceable and thus should be carefully managed and protected against all forms of degradation. Other countries of the Nile basin also have salinity problems. Kenya has about 5 Mha of salt-affected lands. In Tanzania, about 30% of the area is characterized by poor drainage and soil salinity problems. The soil salinity problems in countries such as DR Congo, Uganda, Burundi, and Rwanda are less prevalent; however, soils there are low in fertility. The salt-affected lands in South Sudan and Sudan are in the White Nile irrigation schemes. This area has hardly been utilized for agricultural production despite having great potential due to the availability of water from the Nile. In other parts of South Sudan, low soil fertility and lack of good-quality seeds for crops and forages are the major bottlenecks in the development of agriculture.

Ethiopia stands first in Africa in the extent of salt-affected soils, with an estimated 11 Mha of land exposed to salinity. This corresponds to 9% of the total land area and 13% of the irrigated area of the country. These soils are concentrated in the Rift Valley, the Wabi Shebelle River Basin, the Denakil Plains, and other lowlands and valleys of the country, where 9% of the population lives. Currently, soil salinity is recognized as the most critical problem in the lowlands of the country, resulting in reduced crop yields, low farm incomes, and increased poverty. Insufficient drainage facilities, poor-quality groundwater for irrigation, and inadequate on-farm water management practices are usually held responsible for the increasing salinity problems. Despite the widespread occurrence of salt-affected soils, Ethiopia does not have an accurate database on the extent, distribution, and causes of salinity development. Most of the saline soils are concentrated in the plain lands of the Rift Valley System, the Somali lowlands in the Wabi Shebelle River Basin, the Denakil Plains, and various other lowlands and valley bottoms throughout the country. The introduction of large-scale irrigation schemes without the installation of appropriate drainage systems has also resulted in the rapid expansion of soil salinity and sodicity problems in the lower Wabi Shebelle basin of Gode. The distribution of surface salinity in the four largest regions of Ethiopia is given in Table 5. Sudan has built four dams on the Nile during the last century to provide irrigation water to an additional 18,000 km2 of land. This has made Sudan the second most extensive user of Nile river water, after Egypt. Despite these arrangements, Sudan has not achieved full production potential due to lack of water infrastructure for equitable water distribution among farmers, lack of farm inputs, and low soil fertility conditions. In Egypt, about 85% of the available water resources are consumed by the agriculture sector. The completion of the Aswan dam increased the intensity of irrigation, which created waterlogging problems in many parts, contributing to the pollution of land and water resources. In Egypt, surface and subsurface drainage systems have been installed to control rising water tables and soil salinity. Besides this, crop-based management is used to combat soil salinization. Farmers were encouraged to use agricultural drainage water to irrigate crops, thereby reducing disposal problems. However, the unregulated application of drainage water for irrigation has reduced crop yields and polluted soil and water resources. In addition to agricultural chemical residues and salts, drainage waters include treated and untreated domestic wastewater. The use of organic amendments and the mixed application of farmyard manure and gypsum was useful in reducing soil salinity and sodicity. Recently, phytoremediation or plant-based reclamation has been introduced in Sudan, for example to reduce soil sodicity instead of using gypsum. In the absence of surface and subsurface drainage systems, farmers in Ethiopia continue to manage salt-affected soils by adopting traditional salt management solutions. These include: direct leaching of salts, planting salt-tolerant crops, domestication of native wild halophytes for agropastoral systems, phytoremediation, chemical amelioration, and the use of organic amendments such as animal compost. Farmers have also used various drainage designs, allowing salts to settle before the water is reused for irrigation. However, all such practices have failed to mitigate salinity problems in the long term. Hence crop yields continue to decline, resulting in reduced farm incomes, food shortages, and increased poverty. Many of the smallholder farmers are also working as daily laborers, causing unprecedented farmer migration to nearby urban areas and exacerbating prevalent problems of urban unemployment. With the increasing demand for food from the rising population in Egypt, the country is trying to expand its irrigated agricultural area.

There is general evidence of reduced P uptake in salt affected soils

As pointed out in this review of the role of microorganisms in mitigating abiotic plant stresses, their use can open new and emerging applications in agriculture and also provide excellent models for understanding stress tolerance, potentially to be engineered into crop plants to cope with abiotic stresses such as soil salinity. In another study, Marks et al. demonstrated that dramatic changes in the salinity of salt marsh soils, as caused by storm surges or freshwater diversions, can greatly affect denitrification rates, which is especially relevant for nutrient removal management of eutrophic waters such as those of the Mississippi delta. Rath et al. studied such dynamic conditions through the bacterial response to drying-rewetting in saline soils and concluded that increased soil salinity prolonged the time required by soil microbes to recover from drought, both in terms of their growth and respiration. Biochar is defined as organic matter that is carbonized by heating in an oxygen-limited environment. The properties of biochar vary widely, dependent on the feedstock and the conditions of production. Biochar is relatively resistant to decomposition compared with fresh organic matter or compost, and thus represents a long-term carbon store. Biochar stability is estimated to range from decades to thousands of years, but its stability decreases as ambient temperature increases. It has been shown that application of biochar to soil can improve soil chemical, physical, and biological attributes, enhancing productivity and resilience to climate change, while also delivering climate-change mitigation through carbon sequestration and reduction in GHG emissions.

Chaganti et al. evaluated the potential of using biochar to remediate saline-sodic soils in combination with various other organic amendments, using reclaimed water with moderate SAR. Results showed that leaching with moderate-SAR water was effective in reducing the soil salinity and sodicity of all investigated soils, irrespective of amendment application. However, it was shown that combined applications of gypsum with organic amendments were more effective in remediating saline-sodic soils, and therefore could have a supplementary benefit of accelerating the reclamation process. Akhtar et al. used a greenhouse experiment to show that biochar amendment at different soil salinity levels could alleviate the negative impacts of salt stress in a wheat crop through reduced plant sodium uptake due to its high adsorption capacity, decreased osmotic stress by enhancing soil moisture content, and the release of mineral nutrients into the soil solution. However, it was recommended that more detailed field studies be conducted to evaluate the long-term residual effects of biochar. The application of marginal waters to augment irrigation water supplies in particular has led to investigations to evaluate the plant nutrient uptake impact of saline-sodic soils. It has been shown that soil salinity can induce elemental nutrient deficiencies or imbalances in plants depending on the ionic composition of the soil solution, due to their effect on nutrient availability, competitive uptake, transport, and partitioning within the plant. Most obviously, soil salinity affects nutrient ion activities and produces extreme ion ratios in soil solution.
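For reference, the sodium adsorption ratio (SAR) mentioned above is computed from the sodium, calcium, and magnesium concentrations of the water or soil solution. The sketch below shows the standard formula with invented sample values.

```python
# Sketch of the sodium adsorption ratio (SAR) calculation; concentrations are
# in meq/L and the sample values are invented for illustration.
from math import sqrt


def sar(na_meq_l: float, ca_meq_l: float, mg_meq_l: float) -> float:
    """SAR = Na / sqrt((Ca + Mg) / 2), with all concentrations in meq/L."""
    return na_meq_l / sqrt((ca_meq_l + mg_meq_l) / 2.0)


print(round(sar(na_meq_l=12.0, ca_meq_l=4.0, mg_meq_l=2.0), 1))  # ~6.9
```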

As a result of such ionic imbalances, excess Na+ can, for example, cause sodium-induced Ca2+ or K+ deficiency in many crops. Nutrient uptake and accumulation by plants are often reduced under saline soil conditions because of competition between the nutrient in question and other major salt species, such as sodium-induced potassium deficiency in sodic soils. Soil salinity is expected to interact with nitrogen both through competition between NO3- and Cl- ions in uptake processes, as high chloride concentrations may reduce nitrate uptake and plant development, and indirectly through disruptions of symbiotic N2 fixation systems. Interactions with phosphorus vary with plant genotype and with external salinity and P concentrations in soil solution, which are highly dependent on soil surface properties. Calcium, magnesium, and sulfur, as well as micro-nutrients, all interact with soil salinity, Na, and one another. Imbalances of these elements cause various pathologies in plants, including susceptibility to biotic stresses. Among potential alternative land uses of saline soils is their economic potential for biomass production using forestry plantations, as many tree species are less susceptible to soil salinity and sodicity than agricultural crops. A thorough review of the economic potential of bioenergy from salt-affected soils has been presented by Wicke et al. Using the FAO soil salinity database, they estimated that the global economic potential of biosaline forestry is about 53 EJ per year when including agricultural land, and 39 EJ per year when excluding agricultural land.

Plantation forestry has been advocated to control dryland salinity conditions, with fast-growing, versatile Eucalyptus species used to lower shallow groundwater tables; however, salinity and sodicity stresses in the long term prohibit significant economic returns. Much will depend on regional production costs. Studies have shown that biosaline forestry may contribute significantly to energy supply in certain regions, such as sub-Saharan Africa and South Asia, and has additional benefits of improving soil quality and soil carbon sequestration, thus justifying investigating biosaline forestry in the near future. Economic losses of productive land by salinization are difficult to assess; however, various evaluations have reported annual costs of US $250-500/ha, suggesting a total annual economic loss of US $30 billion globally. As pointed out by Qadir et al., a large fraction of salt-affected land is farmed by smallholder farmers in Asia and SSA, necessitating off-farm supplemental income activities, with others leaving their land for work in cities. Given that much of the projected global population growth is in those regions, prioritization of research and infrastructure investments to mitigate agricultural production impacts there is extremely relevant. A thorough analysis of the production losses and costs of salt-induced land degradation was done by Qadir et al., based on crop yield losses; however, they point to the need to also consider additional losses such as those from unemployment, health effects, infrastructure deterioration, and environmental costs. Their calculations compared economic benefits using cost-benefit analysis of "no action" vs "action" for various case studies. A yield gap analysis by Orton et al. for wheat production in Australia showed that soil sodicity alone represented 8% of the total wheat yield gap, representing more than AUS $1 billion. In their sustainability assessment of the expanding irrigation in the western US, comparing real outcomes with those predicted by Reisner in his book Cadillac Desert, Sabo et al. included an economic analysis of agricultural revenue losses as a result of the increased soil salinity for the western US. Using the USDA NRCS soils database and available crop salt tolerance information, they estimated a total annual revenue loss from reduced crop yields of 2.8 billion US dollars. In all, land values of salinized lands depreciate significantly and incur huge economic impact, putting into question the sustainability of agricultural land practices that induce soil salinization. Australia is the world's driest inhabited continent, with an average annual rainfall of 420 mm and a high potential for the formation of salt-affected landscapes. Development of agricultural practices in Australia began after the European settlement and was widely adopted during the 20th century. Earlier, the indigenous population found their food by hunting and foraging. They indirectly depended on soils for plant food, but they did so without soil management. The European settlers were unaware of the soil characteristics they had to work with. Salt has been accumulating in the Australian landscape over thousands of years through small quantities blown in from the ocean by wind and rain. In addition to mineral weathering, salt accumulation is also associated with parna, a wind-blown dust coming from the west and the south-west of the continent.

Many soils of the arid to sub-humid regions of Australia contain significant amounts of water-soluble salts, dominantly as sodium chloride. Their dense sub-soils are frequently characterized by moderate to high amounts of exchangeable sodium and magnesium, and are generally named duplex soils. Discussing the genesis and distribution of saline and sodic soils in Australia, Isbell et al. concluded that salts from a variety of sources have probably contributed to the present saline and sodic soils. In the early part of the 20th century, the Australian government initiated a nation-wide soil survey with soil analysis. As early as the 1930s, soil surveys in the Salmon Gums district, Western Australia, found that salt accumulation in surface and subsoils occurred in more than 50% of the 0.25 million ha surveyed. These surveys also found that virgin areas had higher accumulations of salts in the upper meter than vegetation-cleared areas for the major soil types. In one of his earlier observations in the Mallee region of Southern Australia, Holmes found a salt bulge more than 4 m below the surface in a virgin heath community. Northcote and Skene, examining numerous data relating to the morphology, salinity, alkalinity, and sodicity of Australian soils, presented the areal distribution of saline and sodic soils in Australia, using the classification of salt-affected soils of Table 2. While 32.9% of the total area in Australia is salt-affected, sodic soils occupy 27.6% of this area. Hence, most of the research during the middle of the 20th century focused on sodic soils and their management. Northcote and Skene defined sodic soils as those having an ESP between 6 and 14, and strongly sodic soils as those having an ESP of 15 or more. The recent Australian soil classification defined "Sodosols" as soils with an ESP greater than 6. However, soils with ESP 25-30 were excluded from Sodosols because of their very different land-use properties. California's natural geology, hydrology, and geography create different forms of salinity problems across the state, ranging from seawater intrusion-induced salinity along the central coast to concentration of salts in closed basins such as the Tulare Lake basin in the Central Valley. In addition, some of the most productive soils in California, such as in the western San Joaquin Valley, originate from ocean sediments that are naturally high in salts. Irrigation water dissolves that salt and moves it downstream. The salinity of the Colorado river water used for irrigation in the Imperial Valley is higher than that of surface water from snowmelt. Although salinity problems can be found in various locations around California, as shown in Fig. 22, historically the major salinity issues are found in the western San Joaquin Valley and the Imperial Valley. A thorough review of the history of irrigation in California was presented by Oster and Wichelns. Today, California's interconnected water system irrigates over 3.4 Mha of farmland. The Imperial Valley in southern California has experienced salinity problems for many decades, since the Colorado river was tapped for irrigation in the early 1900s. By 1918, salinity had forced approximately 20,234 ha out of production and damaged thousands more hectares. The rapidly deteriorating agricultural lands from salinization forced the Imperial Irrigation District to construct open-ditch drainage channels. However, due to high salinity in the Colorado river water, heavy soils, and poor on-farm water management at the time, the drainage system
did not prevent continued salinization of the Imperial Valley.To address the problem, partnerships between the federal government, and the Imperial irrigation district were formed in the early 1940s that resulted in installation of underground concrete and tile drainage on thousands of hectares of farms.The subsurface drainage system and improved on-farm water management led to a reduction in the rate of soil salinization, resulting in flourishing agricultural production in the Imperial Valley.The water from the subsurface drainage tiles was routed to the Salton Sea.However, agricultural runoff and drainage flows with high salt content have affected the elevation of Salton Sea and increased its salinity threatening various wildlife species.On the positive side, the salinity load coming into the Imperial Valley as measured by salinity levels at the Imperial dam have not increased as previously projected.A report from the US Bureau of Reclamation reported a flow weighted salinity of 680 mg/L in 2011 at the imperial dam and had remained constant for past decades.Another major region in California significantly impacted by salinity is the western San Joaquin Valley , comprising the southern half of the Central Valley.From the second half of the 19th century to the early 1900s the SJV experienced rapid development of irrigated agriculture, along with it came drainage and salinity problems.The salinity problems on the West side of the valley can be attributed to high water tables near the valley trough caused by an expansion of irrigated agriculture upslope from the valley,soils on the West side are derived from alluvium originating from coastal mountains and other marine environments, and degradation of water quality in the San Joaquin river.In 1951, some of the fresh water in the San Joaquin river was diverted to irrigate agricultural lands on the east side north of Friant dam.The diverted water was replaced with saltier water from the Central Valley project.These changes coupled with agricultural return flows led to increased salinity downstream of the San Joaquin river, the main conduit draining the valley.

Micro-irrigation systems are largely preferred when irrigating with more saline waters

They have been successfully used in orchards, vineyards, and vegetable crops in many regions around the world with salinity problems, including Australia, Israel, California, Spain, and China. They are well suited because of their use of high-frequency irrigation, which prevents dry soil conditions so that soil solution salinities remain close to that of the irrigation water, especially in the vicinity of the emitters where root density is highest. The salt distribution that develops around a micro-irrigation system depends on the system type, but typically salts concentrate on the periphery of the wetted bulb for surface drip irrigation, whereas salt concentrations typically increase with soil depth for sprinkler systems. For subsurface drip, the upward capillary movement of water from the wetted soil near the emitter results in salt accumulation at the soil surface as water is lost through root water uptake and soil evaporation. For conditions where seasonal rainfall is inadequate to push those salts near the soil surface further down, options include preseason flood irrigation or sprinkling, moving drip lines every few years when they are replaced, or changing crop rows between seasons. However, anecdotal evidence from San Joaquin Valley orchards has shown that salinity around drip irrigation systems can limit the volume of the root zone, thereby limiting nutrient uptake, particularly of nitrogen. The residual nitrogen ends up being leached to groundwater, either by excess irrigation or winter recharge, degrading groundwater quality. The complex interactions between soil salinity stress and water and nitrate applications were discussed in a model simulation study by Vaughan and Letey.
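To make the role of leaching more concrete, the following is a minimal sketch of the classical steady-state salt-balance and leaching-requirement approximations (often attributed to Rhoades and used in FAO-29-style guidelines by Ayers and Westcot); these relations are not taken from this review, and the crop threshold and water salinities used are hypothetical example values.

```python
# Minimal sketch of steady-state root-zone salt balance and leaching requirement.
# Assumptions: steady state, no rainfall or salt precipitation/dissolution;
# values are hypothetical and only illustrate orders of magnitude.

def drainage_water_ec(ec_iw: float, leaching_fraction: float) -> float:
    """Steady-state salt balance: drainage water EC = irrigation water EC / leaching fraction."""
    return ec_iw / leaching_fraction

def leaching_requirement(ec_iw: float, ec_e_threshold: float) -> float:
    """Classical leaching requirement, LR = ECiw / (5 * ECe_threshold - ECiw),
    where ECe_threshold is the crop's saturation-extract salinity threshold (dS/m)."""
    return ec_iw / (5.0 * ec_e_threshold - ec_iw)

if __name__ == "__main__":
    ec_iw = 1.2            # irrigation water salinity, dS/m (hypothetical)
    ec_e_threshold = 2.5   # e.g. a moderately salt-sensitive crop such as processing tomato
    lr = leaching_requirement(ec_iw, ec_e_threshold)
    print(f"Leaching requirement: {lr:.2f}")   # fraction of applied water that must drain
    print(f"Drainage EC at LF=0.15: {drainage_water_ec(ec_iw, 0.15):.1f} dS/m")
```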

Libutti and Monteleone suggested that, since soil salinity management is bound to increase the leaching of N, best practices should balance the volume of water needed to reduce salinity against that required to avoid or minimize NO3 contamination of groundwater. They suggested "decoupling" irrigation and fertigation. Abating this salinity-N paradox with coupled nutrient-salt management will require site-specific considerations. Because of the potentially high control of irrigation amount and timing, Hanson et al. showed that subsurface drip directly below the plant row can effectively be used for irrigation under shallow water-table conditions as long as the groundwater salinity is low. They showed that converting from furrow or sprinkler to subsurface drip is economically attractive and can achieve adequate salinity control through localized leaching for moderately salt-sensitive crops such as processing tomatoes, eliminating the need for drainage water disposal where relevant. Controlled drainage: Whereas conventionally drains are installed in conjunction with irrigation systems in arid regions, controlled drainage systems originated in humid regions, where the field water table is controlled using shallower drainage laterals and control structures in the drainage ditches or sumps. In controlled drainage systems, irrigation and drainage are part of an integrated water management system in which the drainage system controls the flow and water-table depth in response to irrigation. Depending on the objectives of the CD system, it can reduce deep percolation and nitrate concentrations in drainage water, augment crop water needs through shallow groundwater contribution, and reduce drainage water volume and salt loads for disposal.
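To illustrate the salinity-N trade-off numerically, here is a simple, hypothetical mass-balance sketch: the drainage volume implied by a chosen leaching fraction is converted into a nitrate-N load using nothing more than unit conversions. The irrigation depth, leaching fractions, and NO3-N concentration are invented example values, and the assumption that all drainage water carries that concentration is a deliberate simplification, not a result from Libutti and Monteleone or Hanson et al.

```python
# Illustrative nitrate-leaching mass balance for a chosen leaching fraction.
# Hypothetical values; assumes all drainage water carries the stated NO3-N concentration.

def drainage_depth_mm(applied_irrigation_mm: float, leaching_fraction: float) -> float:
    """Depth of water draining below the root zone for a given leaching fraction."""
    return applied_irrigation_mm * leaching_fraction

def nitrate_load_kg_per_ha(drainage_mm: float, no3_n_mg_per_l: float) -> float:
    """Convert drainage depth (mm) and NO3-N concentration (mg/L) to kg N/ha.
    1 mm over 1 ha = 10,000 L, so load = depth_mm * conc_mg_per_L * 0.01 kg/ha."""
    return drainage_mm * no3_n_mg_per_l * 0.01

if __name__ == "__main__":
    applied = 800.0  # seasonal irrigation depth, mm (hypothetical)
    for lf in (0.10, 0.20, 0.30):
        drainage = drainage_depth_mm(applied, lf)
        load = nitrate_load_kg_per_ha(drainage, no3_n_mg_per_l=20.0)
        print(f"LF={lf:.2f}: drainage={drainage:.0f} mm, NO3-N leached = {load:.0f} kg/ha")
```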

Use of marginal waters: When freshwater resources are limited, salt-tolerant crops can be irrigated with more saline waters that are reused, for example treated wastewater or drainage water. Management options include applying irrigation water that is a mixture of saline and fresh water, cycling saline water with fresh water depending on growth stage, using crop rotations between salt-sensitive and salt-tolerant crops depending on when more saline water is available, or using sequential cropping as described in Ayars and Soppe. In addition to reducing freshwater requirements, reuse decreases the volume of drainage water requiring disposal or treatment. A series of articles presenting the use of marginal waters has been edited by Ragab. In general, the research results in that issue demonstrate that waters of much poorer quality than those usually classified as "suitable for irrigation" can, in fact, be used effectively for growing selected crops under a proper integrated management system, as long as there are opportunities for leaching to prevent detrimental effects such as sodicity. Studies have shown that drip irrigation gives the greatest advantages, whereas sprinkling may cause leaf burn. Cycling strategies are generally preferred, but beneficial effects decreased under DI. In addition, blending does not require added infrastructure for mixing the different water supplies in the desired proportions.
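As a rough illustration of the blending option, the salinity of a blended supply can be estimated from the flow-weighted average of the two source waters. The sketch below assumes EC mixes approximately linearly with the mixing fraction, a first-order approximation rather than an exact chemical relationship, and the ECs and target value are hypothetical.

```python
# Approximate EC of a blend of saline and fresh irrigation water.
# Assumes EC is (approximately) linear in the mixing fraction; hypothetical values.

def blended_ec(ec_saline: float, ec_fresh: float, fraction_saline: float) -> float:
    """Flow-weighted EC of the blended supply (dS/m)."""
    return fraction_saline * ec_saline + (1.0 - fraction_saline) * ec_fresh

def max_saline_fraction(ec_saline: float, ec_fresh: float, ec_target: float) -> float:
    """Largest saline-water fraction that keeps the blend at or below a target EC."""
    if ec_saline <= ec_target:
        return 1.0
    return max(0.0, (ec_target - ec_fresh) / (ec_saline - ec_fresh))

if __name__ == "__main__":
    ec_dw, ec_fw = 6.0, 0.6  # drainage water and fresh water EC, dS/m (hypothetical)
    print(f"50/50 blend EC: {blended_ec(ec_dw, ec_fw, 0.5):.1f} dS/m")
    print(f"Max drainage-water fraction for a 2.0 dS/m blend: "
          f"{max_saline_fraction(ec_dw, ec_fw, 2.0):.2f}")
```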

Precision agriculture is increasingly becoming an established farming practice that optimizes crop inputs by striving for maximum efficiencies of those inputs, thus increasing profitability while at the same time reducing the environmental footprint of those improved practices. While farming has always been about maximizing yield and optimizing profitability, precision farming allows differential application of crop inputs across the farmer's field, leading to more sustainable management. PA became possible through the broad availability of global positioning system (GPS) and geographical information system (GIS) technologies, together with satellite imagery, in the 1980s. It was focused on achieving maximum yields despite spatial variations in soil characteristics across agricultural fields. It enabled farmers to vary fertilizer rates across the field, guided by grid or zone sampling. Inherent to precision agriculture, therefore, is the use and refinement of the field soil map in combination with soil and/or plant sensors. Whereas early PA applications depended solely on the soil map and its refinement, more sophisticated approaches have been introduced because of the parallel development of on-the-go sensor technologies, which allow real-time soil and/or plant monitoring during the growing season and thus expand PA toward spatio-temporal applications. For a review of a broad range of such on-the-go sensors, including electrical/electromagnetic and electrochemical sensors for soil salinity and sodium concentration measurements, we refer to Adamchuk et al. Whereas specific-electrode sensors are available to measure Na concentration in soil solution, most of the EM sensors were developed either to indirectly measure soil moisture by correcting for salinity interference or to measure bulk soil electrical conductivity (ECb). The sole exception is the porous matrix sensor, originally designed by Richards and reviewed by Corwin, which directly measures the electrical conductivity of in-situ soil pore water through an electrical circuit with electrodes embedded in a small porous ceramic element inserted in the soil. The EC measurement is solely a function of the solution salinity because the air-entry value of the ceramic is such that it will not desaturate below 1 bar of suction. Corrections are required for temperature and for the response time for ions to diffuse from the soil solution into the ceramic.

In their synthesis of high-priority research issues in PA, McBratney et al. addressed the need to consider temporal variations, as yields typically vary from year to year. For irrigation applications, knowledge of within-season variations is critical for BMPs that minimize crop water and salinity stress. This has led to the term and application of precision irrigation (PI), adhering to the definition of PA but applied to irrigation practices. Whereas traditional irrigation management strives for uniform irrigation across the irrigated field, the goal of PI is to apply water differentially across the field to account for spatial variation of soil properties and crop needs, and thereby also to minimize adverse environmental impacts and maximize efficiencies. Moreover, PI allows temporal adjustment of irrigation during the growing season in response to changing weather conditions, including accounting for rainfall. PI can adjust water and fertilizer amounts to meet differential tree/crop needs by controlling both application rate and timing at the individual tree/crop level or for larger management units. PI uses a whole-systems approach, with the goal of applying irrigation water and fertilizers using the optimal combination of crop, water, and nutrient management practices. As defined by Smith and Baillie, precision irrigation meets multiple objectives of input use efficiency, reducing environmental impacts, and increasing farm profits and product quality.
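The temperature correction noted above is commonly handled by normalizing field EC readings to a reference temperature of 25 °C. The sketch below uses a simple linear coefficient of about 1.9% per °C, a widely used approximation rather than a sensor-specific calibration, and the example reading is hypothetical.

```python
# Normalize a field EC reading to the reference temperature of 25 degC.
# Uses a simple linear temperature coefficient (~1.9%/degC), a common approximation;
# sensor-specific calibrations may differ.

TEMP_COEFF = 0.019  # fractional EC change per degC (assumed)

def ec_at_25(ec_measured: float, temp_c: float, coeff: float = TEMP_COEFF) -> float:
    """Convert an EC reading (dS/m) taken at temp_c (degC) to its equivalent at 25 degC."""
    return ec_measured / (1.0 + coeff * (temp_c - 25.0))

if __name__ == "__main__":
    # Hypothetical porous-matrix sensor reading of soil solution EC at 18 degC.
    print(f"EC25 = {ec_at_25(2.4, 18.0):.2f} dS/m")
```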

It is an irrigation management approach that includes four essential steps: data acquisition, interpretation, automation/control, and evaluation (a minimal sketch of this loop is given at the end of this section). Typically, data acquisition is achieved by sensor technologies, while data interpretation occurs by evaluating simulation model outcomes, e.g. of crop response and salt leaching. Control is achieved by automatic controllers of the irrigation application system using information from both the sensors and the simulation models, whereas evaluation closes the loop by adjusting the PI system. In addition to electrochemical sensors such as specific electrodes, optical reflectance devices such as near- and mid-infrared spectroscopy methods have been developed to quantify specific soil ion concentrations, particularly soil nitrate content. Over the past 20 years or so, many new soil moisture and salinity sensors have come to market, most of which can be included in wireless data acquisition networks. Selected reviews and sensor comparisons include Robinson et al. and Sevostianova et al. Shahid et al. showed field results of a real-time automated soil salinity monitoring and data-logging system tested at the International Center for Biosaline Agriculture (ICBA) in Dubai. Recently there has been increased use of geophysical techniques for delineation of PI irrigation zones and for in-season irrigation and soil salinity management. For example, Foley et al. demonstrated the potential of ERT and EM38 geophysical methods for measuring soil water and soil salinity in clay soils, although they emphasized the need for calibration.

Whereas traditionally one would consider only drip or micro-sprinkler irrigation as a PI method, the broader definition can apply to most pressurized irrigation methods. Specifically, variable rate irrigation (VRI) is applied to center pivot, lateral move, and solid set systems, as reviewed recently by O'Shaughnessy et al. Many aspects of PI apply equally to such sprinkler systems; however, their inherent complexity has precluded the required development of user-friendly interfaces for decision support, which lags behind the engineering technology. Specifically, there is a need to fuse GIS, remote sensing, and other temporal information with the decision support system (DSS), allowing management zones to change over the growing season. Recent evaluations of the impacts of VRI on crop yield and water productivity were presented by Barker et al. and Kisekka et al., showing potential improvements when using VRI or MDI, but additional research is strongly advocated, especially because of the significantly increased investments required. Another limitation to adoption of PI to date is that large-scale VRI systems require many sensors, which can be cost-prohibitive, while determining their placement and the number of sensors needed is not straightforward. It is worth noting that PI can also be applied to surface irrigation systems, as described in Smith and Baillie. For example, automated gates coupled with SCADA systems and real-time data analytics can be used to optimize flow rates and advance times to ensure infiltration matches variable soil conditions.

The application of PI to maintain plant-tolerable soil salinity levels was introduced by Raine et al., who identified research priorities that would allow PI to be effective and pointed out that the level of precision, water application uniformity, and efficiency of most irrigation practices is sub-optimal. Among the identified knowledge gaps was the lack of agreement between field and model-simulated data,
especially for multidimensional model applications such as those required for drip irrigation and for spatially variable salt and water distributions at the individual plant root-zone scale. This calls into question the usefulness of computer modeling for soil salinity management purposes, especially given the general absence of soil salinity measurements to validate model simulations. Another limitation to successful PI is the lack of information on crop root response to salinity when the whole rooting zone is considered in multiple dimensions and as a function of crop growth stage. A central component of a road map toward precision irrigation is moving from a single management point within an agricultural field toward defining management zones across the field, and eventually toward close to plant-by-plant resolution where appropriate. It requires cost-effective sensors, wireless sensing and control networks, automatic valve control hardware and software, real-time data analytics and simulation modeling, and a user-friendly, visual decision support system. Many sensor types and technologies are being developed and introduced for soil moisture sensing; however, few applications include soil salinity sensing in concert with soil moisture monitoring. For PI to advance further, there is a great need for much improved and cost-effective multi-sensor platforms that combine measurements of soil salinity with soil moisture and nitrate concentration. For a recent review of contemporary wireless networks and data transfer methods, including the use of cloud-based databases with smartphone apps and web pages, we refer to Ekanayake and Hedley.
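As referenced earlier in this section, the following is a minimal, schematic sketch of the four-step PI loop of acquisition, interpretation, control, and evaluation. Every class and function name here is a hypothetical placeholder for whatever sensors, simulation model, and valve hardware a particular system uses; the sketch shows only how the steps connect, not any specific vendor's API.

```python
# Schematic precision-irrigation control loop: acquire -> interpret -> control -> evaluate.
# All components are hypothetical placeholders; a real system would wrap actual
# sensor drivers, a crop/salinity simulation model, and valve controllers.

from dataclasses import dataclass

@dataclass
class ZoneReading:
    zone_id: int
    soil_moisture: float   # volumetric water content, m3/m3
    soil_ec: float         # soil solution EC, dS/m

def acquire(zones: list[int]) -> list[ZoneReading]:
    """Step 1: poll (placeholder) sensors in each management zone."""
    return [ZoneReading(z, soil_moisture=0.18 + 0.01 * z, soil_ec=1.5 + 0.3 * z) for z in zones]

def interpret(reading: ZoneReading, moisture_target: float, ec_threshold: float) -> float:
    """Step 2: decide an irrigation depth (mm). A real system would run a simulation
    model here; this stand-in adds extra water for leaching when EC exceeds a threshold."""
    deficit_mm = max(0.0, (moisture_target - reading.soil_moisture) * 300.0)  # assume 300 mm root zone
    leaching_mm = 5.0 if reading.soil_ec > ec_threshold else 0.0
    return deficit_mm + leaching_mm

def control(zone_id: int, depth_mm: float) -> None:
    """Step 3: send the setpoint to the (placeholder) valve controller."""
    print(f"zone {zone_id}: apply {depth_mm:.1f} mm")

def evaluate(before: ZoneReading, after: ZoneReading) -> None:
    """Step 4: close the loop by comparing pre- and post-irrigation readings."""
    print(f"zone {before.zone_id}: EC {before.soil_ec:.1f} -> {after.soil_ec:.1f} dS/m")

if __name__ == "__main__":
    readings = acquire(zones=[1, 2, 3])
    for r in readings:
        control(r.zone_id, interpret(r, moisture_target=0.25, ec_threshold=2.0))
    # A second acquisition pass after irrigation would feed evaluate(); omitted here.
```

In a real deployment, the interpret step would be replaced by the simulation models discussed above (e.g. of crop response and salt leaching), and the evaluation step would feed adjusted setpoints back into the next acquisition cycle.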