A large fraction of the QTL were identified for differential traits

A similar pattern was observed in Feldman et al. 2017, in which the authors identified QTL on chromosome 2 at 96 cM, 5 at 109 cM, 7 at 99 cM, and 9 at 127 cM associated with water use efficiency using the same A10 x B100 RIL population. Though the processes of water and ion uptake are independent, there is a strong relationship between the two; ion homeostasis depends upon transpiration rate, active transport, and membrane permeability, all of which are affected by the water status of the plant. As drought and planting density are both variables that affect the available water supply, it appears that alteration in the water status of Setaria substantially perturbs the ionome. This additionally suggests that, while the ionome can be interrogated with individual ions, a multi-elemental approach is more likely to identify regions of the genome with weak signal, or those that evince pleiotropy. Rb is a striking example of this, with 13 of its 15 QTL identified via differential analysis. Combined with this trait's high heritability within each experiment and low repeatability across experiments, these data suggest that there is a strong genotype-by-treatment component in Rb content in this RIL population. This genotype-by-treatment effect was apparent in many of the elements assayed, with an average of 74% of the ion-specific QTL identified in the differential traits. The preponderance of differential QTL was not universal, as within-treatment mapping allowed for the identification of 50% or more of the QTL found for Mo and Sr, suggesting that the homeostasis of these elements experiences a smaller degree of environmental perturbation. Several QTL identified in this study overlie genes known to be associated with the control of elemental concentration. One example is Sevir.G251200, the ortholog of MOT2 in A. thaliana. MOT1, a MOT2 paralog, is responsible for a large fraction of the variation in Mo content in A. thaliana. Although MOT1 has an ortholog in S. viridis on chromosome 9, it was not identified by any of the QTL in this study. This finding suggests either that 1) the RIL population contains allelic diversity in MOT2 that is not present at the chromosome 9 locus or 2) the MOT2 locus in Setaria is responsible for more of the variation in Mo content in this species. Additionally, Sevir.5G106900, the predicted ortholog of the A. thaliana gene ESB1, underlay 10 QTL, the majority of which were identified in the PC QTL mapping. ESB1 is involved in the production of the Casparian strip, with mutants in A. thaliana developing increased suberin levels and disordered Casparian strips, as well as altered levels of many ions. The identification of QTL in this region in both the DN13 and DR14 experiments and for several different treatments, as well as in the first principal component for both DN13 and DR14, suggests that the Setaria ESB1 ortholog plays a role in a variety of conditions related to water status. The B100 haplotype for this region produces a decrease in Mo as compared to the A10 allele, which is consistent with the relationship between the WT and the esb1 allele seen in A. thaliana. Given the central role played by the Casparian strip in water homeostasis, this gene is a good candidate for explaining the coincidence of WUE and ionomic QTL.

The translocation of the short arm of rye chromosome 1 from the cultivar Petkus into the long arm of wheat chromosome 1B confers improved tolerance to several abiotic and biotic stresses. Although several genes for resistance to biotic stresses are no longer effective, the 1RS.1BL translocation is still widely used because of its beneficial effects on grain yield and improved abiotic stress tolerance.
We have previously shown that the presence of a short segment of wheat 1BS chromosome from cultivar Pavon in the distal region of the 1RS translocation was associated with reduced grain yield, biomass, and canopy water status relative to near-isogenic lines carrying the complete 1RS chromosome arm. Carbon isotope discrimination data showed that the lines with the complete 1RS chromosome arm achieve higher yields and better water status through increased access to water throughout the season, rather than through water conservation. A subsequent field study showed that the improved water status of the isogenic lines with the 1RS chromosome was associated with enhanced root density below 20 cm relative to the lines with the 1RSRW chromosome.

Changes in root architecture in the field were correlated with drastic changes in root development in hydroponic growth systems, where the 1RSRW line showed a regulated arrest of the seminal root apical meristem ∼2 wk after germination. Around the same time, the 1RSRW plants displayed altered gradients of reactive oxygen species in the root tips and emergence of lateral roots close to the RAM. In this study, we performed exome captures for 1RS, 1RSRW, and its parental lines T-9 and 1B+40. We show that, as a result of a distal inversion between the 1RS and 1BS chromosome arms, T-9 and 1B+40 have duplicated 1BS and 1RS orthologous regions in opposite orientations, and that a crossover between these chromosomes resulted in a duplicated 1RS region colinear to the inserted 1BS segment in 1RSRW. Using these genetic stocks, we demonstrate that the dosage of the genes in the duplicated region plays an important role in the regulation of seminal root growth. We also describe a radiation mutant with a deletion in the inserted 1BS segment and the adjacent 1RS region that restored long roots, confirming the importance of the dosage of the genes in this region for root development. Finally, we identified 38 genes within this deletion and used published RNA-sequencing data and annotation to discuss their potential as candidates for the genes regulating seminal root elongation in wheat.

Previous field studies demonstrated that cultivar Hahn carrying the standard 1RS.1BL translocation had longer roots, better access to water, and significantly higher grain yields than isogenic Hahn lines carrying the 1RSRW chromosome. Hydroponic studies confirmed that 2 wk after germination, the roots in Hahn-1RSRW showed a significant reduction in the elongation rate, altered gradients of reactive oxygen species, and the emergence of lateral roots close to the RAM.
This earlier reduction in root growth rates in 1RSRW relative to 1RS was also observed in this study, even in experiments that showed variable overall root growth responses. We initially assumed that the 4.8-Mb 1BS segment in the 1RSRW chromosome arm was the result of a homologous recombination event between the overlapping 1BS segments of lines T-9 and 1B+40, and that, therefore, the 1BS wheat genes had replaced the orthologous 1RS rye genes. Given the known positive effect of the 1RS translocation on drought tolerance in wheat, we hypothesized that the lost 1RS genes were the cause of the shorter roots in 1RSRW. However, the exome capture sequencing of 1RS and 1RSRW demonstrated that both the 1BS and its orthologous 1RS segment were still present in 1RSRW, disproving our original hypothesis.

Our second hypothesis was that wheat genes present in the 4.8-Mb 1BS segment inserted in 1RSRW could be responsible for the shorter roots. However, the characterization of the Hahn-T-18 line, which carries a 17-Mb distal 1BS segment and has no identifiable duplications, provided evidence against this hypothesis. The roots of T-18 were slightly longer than those of 1RS at the initiation of the measurements but showed no significant differences in their root growth rates after that day. When the 1BS segment was combined with the 1RS segment in Hahn-T-21 and Hahn-1B+40, the roots were significantly longer than the roots of 1RSRW and slightly, but not significantly, shorter than the roots of the control 1RS line. Taken together, these results provided conclusive evidence that the presence of the wheat genes in the 1BS segment alone was not responsible for the short roots in 1RSRW and disproved our second hypothesis. Our third, and still current, hypothesis is that the change in gene dosage generated by the duplications of the 1BS and 1RS colinear regions was responsible for the arrest in seminal root growth. The lack of differences in root growth rate between T-18 and 1RS between 9 and 28 d suggests that the genes in the 1BS segment are not responsible for the reduced growth rate in 1RSRW during the same period. The 1BS-1RS duplication in T-21 and 1B+40 resulted in only a minor decrease in growth rate relative to 1RS, and their final root lengths were significantly longer than in 1RSRW. As T-21 tended to be shorter than 1B+40 in both experiments, we cannot rule out the possibility that their different proximal regions may contribute to modulating the effect of the 2R+2B duplication on root length. These results suggest that adding duplicated 1BS genes has a smaller effect on seminal root growth than adding more copies of the 1RS orthologues.
The stronger effect of the 1RS segment was evident in plants heterozygous for the 1RSRW chromosome, which showed seminal root lengths intermediate to those of 1RS and 1RSRW. Based on this result, we hypothesize that the duplication of the 1RS region in 1RSRW is the main driver of the shorter roots in this line, but we do not entirely discard the idea that the genes in the 1BS segment may also contribute to the reduced root growth when combined with additional 1RS orthologues. The dosage hypothesis was reinforced by the hydroponic experiments with the radiation mutants 1RSWW-del-8 and 1RSWW-del-10 backcrossed independently to both Hahn-1RSRW and Hahn-1RS. In the hydroponic experiments using the backcross lines segregating for the deletions and 1RSRW, the roots of the deletion lines were significantly longer than those of the sister lines carrying at least one 1RSRW chromosome.

By contrast, in the lines segregating for the deletions and the 1RS chromosome, we observed no significant differences in root length between the homozygous deletions and their sister lines carrying at least one 1RS chromosome. The four consecutive backcrosses of 1RSWW-del-8 and 1RSWW-del-10 into 1RSRW and 1RS minimized the chances of a possibly confounding effect of independent deletions in other chromosomes of the radiation mutants. However, they did not rule out the possibility of a confounding effect of a linked deletion in 1RS. Using the exome capture, we did find a linked missing 1RS region corresponding to the orthologous rye region replaced by the proximal wheat segment in homozygotes for the 1RSWW chromosome. We have previously shown that the proximal wheat segment has no effect on root length and confirmed this result in the hydroponic experiments presented in this study. The exome capture data also allowed us to determine the length of the 1RS deleted segment in the deletion lines and to establish that the 1BS and 1RS deletions include mostly orthologous genes. Therefore, the homozygous 1RSWW-del-8 and 1RSWW-del-10 lines are expected to lose two gene copies in 1BS and two in 1RS, changing the gene dosage from 4R+2B to 2R. This hypothesis explains the identical seminal root size observed in the 1RS and the homozygous deletion lines. One limitation of the exome capture assays is that they are closed systems and some genes are not included, which resulted in annotated genes with no reads. We eliminated those genes from the analysis used to delimit the borders of the 1RS–1BS recombination events or of the duplicated 1RS region.
This likely resulted in a slight overestimate of the size of the candidate gene regions and the number of potential candidate genes. Once we established conservative borders of the 1BS and 1RS deleted regions in 1RSRW-del-8/10, we considered all the annotated genes in these regions as candidates regardless of their presence in the exome capture. The 1RSAK58 genome is very close to the 1RS present in our lines, so it probably provides a good representation of the rye candidate gene region. However, the CS RefSeq 1.1 used as the 1BS reference is not identical to the 1BS Pavon segment, and therefore, we cannot rule out the possibility of genes present in Pavon that are not present in the wheat reference.

A large number of NPs were found aggregated at the bottom left of the root

The poor translocation of the selected PPCP/EDCs from roots to leaves may be attributed to several factors. The compounds considered in this study have moderately high hydrophobicity, with log Kow from 3.35 to 4.48. Translocation of organic compounds within plants generally decreases with increasing hydrophobicity. Also, roots have higher lipid content than most other plant tissues, and neutral compounds have been shown to be preferentially distributed in tissues with high lipid content. In addition, the rapid conversion of 14C residue to the non-extractable form, as discussed above, may be another important factor in the negligible transfer from roots to other plant tissues. The use of 14C labeling, while giving unique information such as the total chemical accumulation in plant tissues, did not provide insights into the chemical composition of the accumulated residue. It is likely that some of the PPCP/EDCs were transformed in the nutrient solution before they were taken up by plants. The used nutrient solution from hydroponic cultivation was subjected to fractionation on HPLC to characterize the portions of 14C existing as parent compound and transformation products. It is evident that different PPCP/EDCs were transformed to different degrees in the nutrient solution and that the presence of plants generally enhanced the transformation. In the no-plant controls of DCL and NPX, the majority of 14C was in the form of the parent compound, while the percentage of 14C in the SPE aqueous filtrate or eluted on HPLC prior to the parent compound was very small. The presence of lettuce or collards did not increase the transformation of DCL or NPX, with the exception of the DCL-collards treatment, where 93.8 ± 6.2% of the recovered activity was detected in the SPE aqueous filtrate.

In contrast, BPA and NP were extensively transformed, even in the absence of plants, and transformation was accelerated in the presence of a plant. For example, 50.3 ± 24.3% of the recovered 14C was identified as the parent in the BPA no-plant control, but the collards and lettuce treatments had no detectable BPA. In the presence of a plant, 14C was detected in the aqueous filtrate and in HPLC eluent prior to the retention time for BPA. Extensive transformation of NP was also observed; all of the 14C from lettuce or collards cultivation was found in the aqueous phase of the extraction. The fraction of activity in aqueous phases may be attributed to transformation products that were not retained by the HLB cartridge or solvent phase during solvent extraction. Preliminary experiments showed that an average of 93.6% of 14C-BPA, 84.5% of 14C-DCL, and 92.0% of 14C-NPX were recovered from the HLB cartridges and 97.8% of the spiked 14C-NP was recovered in the solvent phase, while the activity in aqueous phases was below detection. Therefore, 14C in the SPE aqueous filtrate for BPA, DCL, and NPX, or in the aqueous phase for NP, was likely from polar transformation products containing the 14C label. The detection of transformation products in the used solution suggests that some of the 14C found in plant tissues may be from transformation products formed in the nutrient solution prior to plant uptake. The demonstrated accumulation of PPCP/EDCs in leafy vegetables suggests a potential risk to humans through dietary uptake. To assess whether the concentrations detected in plant tissues in this study may present a potential human health risk, an individual's annual exposure was estimated using values from the U.S. Environmental Protection Agency for average daily consumption of leafy vegetables. The annual exposure values ranged from 0.32 × 10−3 mg for BPA-lettuce to 2.14 × 10−2 mg for DCL-collards for an average 70-kg individual residing in the United States.

To place these amounts in context, the values were then converted to either medical dose or 17β-estradiol equivalents. Both DCL and NPX are commonly available non-steroidal anti-inflammatory pharmaceuticals. Based on typical doses and the observed plant concentrations, an average individual would consume the equivalent of much less than one dose of these medicines in a year due to consumption of leafy vegetables, representing a very minor exposure to these PPCPs. However, it should be noted that DCL has proven ecotoxicity and NPX has shown toxicity in mixture with other pharmaceuticals, so a simple estimation may not encompass all possible human health effects. Both BPA and NP are industrial products known to have endocrine-disrupting activity. Bonefeld-Jørgensen et al. calculated the Relative Potency of these compounds, as compared to 17β-estradiol, an endogenous estrogen hormone, at activating estrogenic receptors. In Table 4, the exposure values of BPA and NP were estimated as E2 equivalents by dividing by their Relative Potency. When the calculated E2 equivalents of BPA and NP are compared with the Lowest Observable Effect Concentration for E2, it is obvious that even the highest expected annual exposure to these compounds by consuming leafy vegetables would not reach the LOEC. This rough calculation suggests that consumption of vegetables would be unlikely to influence an individual's overall endocrine activity, though caution should be used when considering risk to susceptible population groups. Moreover, it must be noted that the use of hydroponic cultivation likely resulted in greater plant accumulation of these PPCP/EDCs, relative to soil cultivation, due to the absence of chemical sorption to soil organic matter and minerals.
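The unit conversion behind this comparison can be sketched in a few lines. The exposure bounds below are those quoted in the text; the relative-potency divisors are placeholder values for illustration, not the values reported by Bonefeld-Jørgensen et al.

```python
# Hypothetical sketch of converting annual dietary exposure to
# 17beta-estradiol (E2) equivalents, as described in the text.

annual_exposure_mg = {
    "BPA-lettuce": 0.32e-3,   # lowest annual exposure quoted in the text
    "NP-collards": 5.0e-3,    # placeholder value, for illustration only
}

# RP here is taken as the fold-difference in estrogenic potency relative
# to E2 (E2 being far more potent), so dividing exposure by RP yields an
# E2-equivalent mass. These RP values are illustrative placeholders.
relative_potency = {"BPA-lettuce": 1.0e4, "NP-collards": 1.0e4}

e2_equivalents_mg = {
    k: annual_exposure_mg[k] / relative_potency[k] for k in annual_exposure_mg
}

for compound, e2_mg in e2_equivalents_mg.items():
    print(f"{compound}: {e2_mg:.2e} mg E2-equivalents per year")
```

The resulting E2-equivalent masses would then be compared against the LOEC for E2, as done in Table 4.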

This likelihood, when coupled with the fact that most of the 14C in plant tissues was in the non-extractable form, implies that the actual accumulation of these PPCP/EDCs by leafy vegetables grown in uncontaminated fields irrigated with reclaimed water may be negligibly small. On the other hand, biosolids have been shown to contain some PPCP/EDCs at much higher concentrations than treated wastewater, and plant uptake from soil amended with biosolids may pose an enhanced human exposure risk. Also, given that many PPCP/EDCs may be preferentially distributed in plant roots as compared to above-ground tissues, the potential risk may be significantly greater for root vegetables such as carrots, radishes, and onions. The occurrence of these and other PPCP/EDCs in leafy and root vegetables should be evaluated in the field under typical cultivation and management conditions.

Engineered nanoparticles have attracted great interest in commercial applications due to their unique physical and chemical properties. Increased usage of ENPs has raised concerns about the likelihood of nanoparticle release into the environment and entry into the food chain. The potential health and environmental impacts of ENPs need to be understood. Plants are essential components of ecosystems; they not only provide organic molecules for energy but can also filter air and water, removing certain contaminants. Plants thus play a very important role in the uptake and transport of ENPs in the environment. Once ENPs are taken up by plants and translocated into food chains, they could accumulate in organisms and even cause toxicity and biomagnification. Nanoparticles are known to interact with plants, and some of those interactions have been studied to understand their potential health and environmental impact, including quantum dots, zinc oxide, cerium oxide, iron oxide, and carbon nanotubes, among others. The uptake of various ENPs by different plants is summarized in Table 1.
Nanoparticles are known to stimulate morphological and physiological changes in several edible plants. Hawthorne et al. noted that the mass of zucchini's male flowers was reduced by exposure to CeO2 NPs. Quah et al. observed browner roots and less healthy leaves in soybean treated with AgNPs, but smaller effects on wheat treated under the same conditions. Qi et al. reported that photosynthesis in tomato leaves could be improved by treatment with TiO2 NPs at an appropriate concentration. Yttrium oxide ENPs have been broadly used in optical, electrical, and biological applications due to their favorable thermal stability and mechanical and chemical durability. One of the most common commercial applications is as a phosphor imparting red color in TV picture tubes. The environmental effects of yttria ENPs have not been reported. Even though the effects of certain NPs have been studied in several plants, the uptake, translocation, and bio-accumulation of yttria NPs in edible cabbage had not been addressed until this study.

This plant species was chosen and tested as part of a closed hydroponic system designed to study nanoparticle movement and distribution in a substrate-plant-pest system as a model of a simple and controlled environment. The final test "substrate" used was plain distilled water, in which the tested NPs were mixed. To observe the translocation and distribution of ENPs in plants, transmission electron microscopy has been one of the most commonly used techniques to identify their localization at the cellular scale in two dimensions, because it can be used to observe all kinds of ENPs. On the other hand, ENPs with special properties, such as upconversion NPs and quantum dots with a particular band gap, can be studied with a confocal microscope with alternative excitation wavelengths to trace the ENPs. Several synchrotron radiation imaging techniques exploiting high-energy X-rays have become widely used in plant science and can measure both spatial and chemical information simultaneously, such as micro X-ray fluorescence and computed tomography. In this research, we use synchrotron X-ray microtomography (µ-XCT) with K-edge subtraction (KES) to investigate the interaction of yttria NPs with edible cabbage. By using the KES technique, µ-XCT can not only detect chemical and spatial information in 3D but also quantify the concentration of the target NPs. The uptake, accumulation, and distribution mapping of yttria NPs at the micro scale and in a relatively full view of cabbage roots and stem were investigated. We found that yttria NPs were absorbed and accumulated in the root but not readily transferred to the cabbage stem. In contrast to yttria NPs, other minerals were observed along the xylem in both cabbage roots and stem. To the best of our knowledge, few reports have studied the impact of yttria NPs on cabbage plants.
In addition, by using µ-XCT with the KES technique, the distribution and concentration mapping of nanoparticles in a full view of a plant root has not been previously reported. The µ-XCT was carried out at Beamline 8.3.2 of the Advanced Light Source, Lawrence Berkeley National Laboratory. Between scanning energies of 16.5 and 17.2 keV, below and above the yttrium K-edge, the X-ray attenuation coefficient of yttrium sharply increases by a factor of 5. Other elements decrease slightly in their attenuation coefficients over this energy range. The localization of yttria NPs can be identified by the subtraction between the two reconstructed image datasets, shown in Fig. 2. The slices collected above and below the K-edge were set with the same brightness and contrast settings to compare fairly with each other. The grayscale values of the reconstructed slices represent the absorption coefficient; therefore, the bright regions in the subtracted slice denote the localization of yttria NPs. Other elements appear dark in the subtracted slice, marked with a red "▲". These are inorganic elements that support the growth of cabbage. Some biological structures suffered radiation damage during scanning, resulting in a small amount of shrinkage. The bright regions circled in Fig. 2c were caused by such shrinkage, resulting in a registration mismatch between the images above and below the edge. To identify and map the distribution of yttria NPs, an image segmentation protocol was employed that could highlight regions with yttria without selecting the regions corresponding to sample shrinkage. The detailed segmentation process is given in the "Method" section. By using the K-edge subtraction technique with monochromatic X-ray tomography, the translocation and distribution of NPs in the cabbage root is clear. Figures 3a and b were constructed from the 17.2 keV and 16.5 keV reconstructed slice datasets, respectively.
Their color maps were based on the transverse slice pixel values/absorption coefficients over the range from 0.2 to 17.8 cm−1. An obvious difference between the 17.2 and 16.5 keV visualizations in the absorption coefficient of yttria NPs was observed. The distribution of yttria NPs in the root was segmented and colored in red. Since the yttria NPs were not water-soluble, the water that contained them was kept in constant movement with an air pump working 24/7. However, it seems that the dense roots formed a web-like structure that caused the suspended NPs to accumulate and aggregate among the roots.
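The K-edge subtraction logic amounts to differencing two co-registered reconstructions and thresholding the voxels whose attenuation rises sharply across the edge. A minimal synthetic sketch follows; the array sizes, attenuation values, and the threshold are illustrative assumptions, not the beamline's calibration.

```python
import numpy as np

# Toy K-edge subtraction (KES) segmentation on two synthetic co-registered
# slices of attenuation coefficients (cm^-1), one just below (16.5 keV)
# and one just above (17.2 keV) the yttrium K-edge.

rng = np.random.default_rng(0)
below_edge = rng.uniform(0.2, 1.0, size=(64, 64))   # 16.5 keV slice
above_edge = below_edge * 0.95                      # most elements dip slightly

# Yttria-bearing voxels jump sharply (~5x) across the K-edge.
yttria_mask_true = np.zeros((64, 64), dtype=bool)
yttria_mask_true[20:30, 20:30] = True
above_edge[yttria_mask_true] = below_edge[yttria_mask_true] * 5.0

# KES: keep voxels whose attenuation rises strongly across the edge.
difference = above_edge - below_edge
segmented = difference > 1.0 * below_edge   # threshold: >100% increase

print(f"segmented yttria voxels: {segmented.sum()}")
```

In practice the two datasets must first be registered, and shrinkage-induced mismatches (as in Fig. 2c) excluded before thresholding.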

The model’s applicability was demonstrated with a simple case study

The 2C model does not capture other observed dynamics, such as the down-regulation of the Agr system and its role in biofilm formation. These hypotheses predict very different carrier and response probabilities. This is because the former assumes that the adapted state is unaffected by resident microflora while the latter assumes that the adapted and un-adapted states are equally affected. The truth is likely in the middle, i.e., SA in the adapted state are affected by resident microflora to a lesser extent than SA in the un-adapted state. Such an approach was not pursued in the spirit of avoiding over-fitting given the limited data. We note that more data collected within the first day of inoculation would help judge the quality of the hypotheses presented in this study, which need to be evaluated on an absolute scale with a goodness-of-fit test. Confidence in the predictions of the model will improve as more data are gathered, either supporting or refuting the hypotheses. The mechanistic nature of the model enabled direct simulation of repeated exposures from the environment, without having to assume independence between exposure events. This paves the way for more involved modeling efforts, such as accounting for healthcare workers and other hospital surfaces that harbor MRSA. Such efforts can be challenging for two reasons: the availability of high-quality data to model behaviors and the computational effort of simulating stochastic systems. However, they can supplement our understanding of the environment as a source of MRSA and help devise the most effective control measures in hospitals and the community.
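The point about not assuming independence between exposure events can be illustrated with a toy simulation: because colonization state persists between contacts, the outcome of each exposure depends on history. All states and rates below are hypothetical placeholders, not the fitted 2C-model parameters.

```python
import random

# Toy stochastic simulation of repeated MRSA exposures from the environment.
# State (colonized or not) carries over between exposures, so exposure
# outcomes are not independent events. Rates are illustrative only.

random.seed(1)

P_COLONIZE_PER_EXPOSURE = 0.02   # chance one contact seeds colonization
P_CLEAR_PER_DAY = 0.05           # daily chance resident flora clear SA
EXPOSURES_PER_DAY = 10
DAYS = 90
TRIALS = 2000

def simulate_one():
    colonized = False
    for _ in range(DAYS):
        for _ in range(EXPOSURES_PER_DAY):
            if not colonized and random.random() < P_COLONIZE_PER_EXPOSURE:
                colonized = True
        if colonized and random.random() < P_CLEAR_PER_DAY:
            colonized = False
    return colonized

carrier_fraction = sum(simulate_one() for _ in range(TRIALS)) / TRIALS
print(f"simulated end-of-period carrier probability: {carrier_fraction:.2f}")
```

A model assuming independent exposures would instead multiply per-event probabilities, which misses the persistence captured here.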

New root phenotyping technologies are needed to overcome the limitations of traditional destructive root investigation methods, such as soil coring or "shovelomics". Mancuso and Atkinson et al. provide extensive reviews of the methodological advances in non-destructive root phenotyping, including Bioelectrical Impedance Analysis (BIA), planar optodes, geophysical methods, and vibrating probe techniques. These techniques aim to mitigate key limitations of traditional root phenotyping, especially addressing the need for a better and more convenient characterization of the finer roots and of root functioning. Advances in non-invasive and in-situ approaches for monitoring root growth and function over time are needed to gain insight into the mechanisms underlying root development and response to environmental stressors. Geophysical methods have been tested to non-destructively image roots in the field. Ground Penetrating Radar approaches have been used to detect coarse roots. Electrical Resistivity Tomography (ERT) and Electro Magnetic Induction approaches have been used to image and monitor soil resistivity changes associated with root water uptake (RWU). Recent studies explored the use of multi-frequency Electrical Impedance Tomography to take advantage of the polarizable nature of roots. Despite these advances, geophysical methods to date share common limitations regarding root characterization. Geophysical methods were developed to investigate geological media: in the case of roots, they measure the root response as part of the soil response (see Fig. 1a for the ERT acquisition). Because of the natural soil heterogeneity and variability, the resolution and signal characteristics of geophysical methods strongly depend on soil type and conditions. As such, interpretation of the root-soil system response is non-unique, hindering the differentiation between roots of nearby plants and the extraction of specific information about root physiology from the electrical signals.
Unlike geophysical methods, BIA for root investigation was developed to specifically target the impedance of plant tissues, limiting the influence of the growing medium.

A practical consequence is that BIA involves the application of sensors directly to the plant to enhance the method's sensitivity. BIA measures the electrical impedance response of roots at a single frequency or over a range of frequencies. The measured BIA responses have been used to estimate root characteristics, such as root absorbing area and root mass. Estimation of these root traits is based on assumptions about the electrical properties of roots. A key assumption is that current travels and distributes throughout the root system before exiting to the soil, with no leakage of current into the soil at the proximal root positions. It is only in the former case that the BIA signal would be sensitive to root physiology. Despite the physiological relevance of the BIA assumptions and the number of BIA studies, a suitable solution for the characterization of the current pathways in roots is missing. Thus far, only indirect information obtained from invasive and time-consuming experiments has been available to address this issue. Mary et al. and Mary et al. 2020 tested the combined use of ERT and Mise A La Masse methods for imaging grapevine and citrus roots in the field. An approach hereafter called inversion of Current Source Density (iCSD) was used to invert the acquired data. The objective of this inversion approach is to image the density and position of current passing from the plant to the soil. The current source introduced via the stem distributes into "excited" roots that act as a distributed network of current sources. Consequently, a spatial numerical inversion of these distributed electric sources provides direct information about the root current pathways and the position of the roots involved in the uptake of water and solutes. The numerical approach used to invert for the current source density is a key component of such an approach. Mary et al. used a nonlinear minimization algorithm for the inversion of the current source density.
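A minimal sketch of such a current-source-density inversion is given below, posed here as Tikhonov-regularized linear least squares rather than the gradient-based scheme used by Mary et al. The transfer matrix G (electrode potentials produced by a unit source at each candidate root position) is random purely for illustration; in practice it would come from a forward resistivity model.

```python
import numpy as np

# Toy current-source-density inversion: recover the fractions of injected
# current exiting at candidate root positions from electrode potentials.

rng = np.random.default_rng(42)
n_electrodes, n_sources = 30, 10

# Illustrative transfer matrix (forward model), random for this sketch.
G = rng.uniform(0.1, 1.0, size=(n_electrodes, n_sources))

# Synthetic "true" distribution: current exits from two root zones.
s_true = np.zeros(n_sources)
s_true[2], s_true[7] = 0.6, 0.4
v_obs = G @ s_true                 # noise-free simulated potentials

# Tikhonov-regularized normal equations: (G^T G + lam I) s = G^T v
lam = 1e-6
s_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_sources), G.T @ v_obs)

print("recovered sources:", np.round(s_hat, 3))
```

With noisy field data, the regularization weight and positivity constraints on the sources become the critical design choices.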

The algorithm consisted of gradient-based sequential quadratic programming iterative minimization of the objective functions described in Mary et al. The algorithm was implemented in MATLAB R2016b, using the fmincon method. Because no information about the investigated roots was available, the authors based their inversion assumptions and the interpretation of their results on the available literature data on grapevine root architecture. Consequently, Mary et al. highlighted the need for further iCSD advances and more controlled studies on the actual relationships between current flow and root architecture. In this study, we present the methodological formulation and evaluation of the iCSD method, and discuss its applications for in-situ characterization of current pathways in roots. We perform our studies using laboratory rhizotron experiments on crop roots. The main goals of this study were to: 1) develop and test an iCSD inversion code that does not rely on prior assumptions about root architecture and function; 2) design and conduct rhizotron experiments that enable an optimal combination of root visualization and iCSD investigation of the current pathways in roots, to provide direct insight into the root electrical behavior and validate the iCSD approach; and 3) perform experiments to evaluate the application of the iCSD method with different plant species and growing media that are common to BIA and other plant studies.

The relationship between hydraulic and electrical pathways has been the object of scientific debate because of its physiological relevance and methodological implications for BIA methods. A key and open question concerns the distribution of the current leakage. The distribution of the current leakage is controlled by 1) the electrical radial and longitudinal conductivities (σcr and σcl, respectively), and 2) the resistivity contrast between root and soil.
With regard to σcr and σcl: when σcl is significantly higher than σcr, the current will predominantly travel through the xylem to the distal “active” roots, which are mostly root hairs. Given the link between hydraulic and electrical pathways, this is consistent with a root water uptake process in which root hairs play the dominant role while the more insulated, suberized roots primarily function as conduits for both water and electric current. On the contrary, if σcr is similar to σcl, the electrical current does not tend to travel through the entire root system but rather starts leaking into the surrounding medium from the proximal root portions. The coexistence of proximal and distal current leakage is in line with studies that suggest the presence of a more diffuse zone of RWU, and a more complex and partial insulation effect of suberization, possibly resulting from the contribution of the cell-to-cell pathways.
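The effect of the σcl/σcr ratio on where the current exits the root can be illustrated with a toy ladder-network (leaky cable) model. This sketch is not from the source: the 50-segment discretization and all conductance values are arbitrary, chosen only to contrast an insulated root with a leaky one.

```python
import numpy as np

def leakage_profile(n_seg, g_long, g_rad, i_inject=1.0):
    """Current leaking from each segment of a root modeled as a ladder
    network: g_long couples adjacent segments (longitudinal/xylem path),
    g_rad couples each segment to the grounded soil (radial path)."""
    G = np.zeros((n_seg, n_seg))
    for k in range(n_seg):
        G[k, k] += g_rad                      # radial leak to soil
        if k + 1 < n_seg:                     # longitudinal link to next node
            G[k, k] += g_long
            G[k + 1, k + 1] += g_long
            G[k, k + 1] -= g_long
            G[k + 1, k] -= g_long
    rhs = np.zeros(n_seg)
    rhs[0] = i_inject                         # current fed in at the stem end
    v = np.linalg.solve(G, rhs)               # nodal voltages (soil = 0)
    return g_rad * v                          # leakage current per segment

# Insulated root (g_long >> g_rad): current reaches the distal segments
distal = leakage_profile(50, g_long=1000.0, g_rad=0.1)
# Leaky root (g_long ~ g_rad): current exits near the proximal end
proximal = leakage_profile(50, g_long=1.0, g_rad=1.0)
print(distal[-1] / distal[0], proximal[-1] / proximal[0])
```

By Kirchhoff's current law all injected current eventually leaks to the soil in both cases; what changes with the conductivity ratio is the position of the leakage, which is exactly what the iCSD aims to image.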

Soil resistivity can affect the distribution of the current leakage by influencing the minimum-resistance pathways, i.e., whether roots or soil provide the minimum resistance to the current flow. In addition, soil resistivity strongly relates to the soil water content, which, as discussed, affects root physiology. Therefore, information on the soil resistivity, such as ERT resistivity imaging, has the potential to support the interpretation of both BIA and iCSD results. Dalton proposed a model for the interpretation of plant root capacitance results in which the current distributes equally over the root system. Because of the elongated root geometry, this model is coherent with the hypothesis of a low-resistance xylem pathway. Numerous studies have applied Dalton’s model, documenting the predicted correlation between root capacitance and mass. In fact, recent studies with wheat, soy, and maize roots continue to support the capacitance method. Despite the accumulating studies supporting the capacitance method, the hydroponic laboratory results of Dietrich et al. and other studies have begun to uncover potential inconsistencies with Dalton’s assumptions. In their work, Dietrich et al. explored the effect of trimming submerged roots on the BIA response and found negligible variation of the root capacitance. Cao et al. reached similar conclusions regarding the measured electrical root resistance. Urban et al. discussed the BIA hypotheses and found that the current left the roots in their proximal portion in several of their experiments. Conclusions from the latter studies are consistent with the assumption that distal roots have a negligible contribution to root capacitance and resistance. Because of the complexity of the hydraulic and electrical pathways, their link has long been the object of scientific research and debate.
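The contrast between Dalton's parallel-element reading of root capacitance and the proximal-leakage reading suggested by the trimming experiments can be caricatured in a few lines. This is a conceptual sketch only: the segment masses, the proportionality constant, and the "two active proximal segments" cutoff are all hypothetical.

```python
# Under Dalton's model the root behaves as parallel capacitive elements,
# so total capacitance scales with root mass; under proximal current
# leakage, distal segments contribute nothing to the measurement.
def dalton_capacitance(segment_masses, c_per_gram=1.0):
    """Parallel elements: total C proportional to total mass."""
    return c_per_gram * sum(segment_masses)

def proximal_only_capacitance(segment_masses, c_per_gram=1.0, n_active=2):
    """Only the first n_active (proximal) segments couple to the medium."""
    return c_per_gram * sum(segment_masses[:n_active])

root = [0.5, 0.4, 0.3, 0.2, 0.1]    # proximal -> distal segment masses (g)
trimmed = root[:3]                   # remove the two most distal segments

# Dalton: trimming lowers the measured capacitance in proportion to mass
print(dalton_capacitance(root), dalton_capacitance(trimmed))
# Proximal-leakage view: trimming distal roots changes essentially nothing,
# consistent with the trimming observations of Dietrich et al.
print(proximal_only_capacitance(root), proximal_only_capacitance(trimmed))
```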
For recent reviews see Aroca and Mancuso; for previous detailed discussions of pathways in plant cells and tissues see Fensom, Knipfer and Fricke, and Findlay and Hope; see Johnson and Maherali et al. in regard to xylem pathways; and see Jackson et al. and Hacke and Sperry for water pathways in roots. Thus, the discrepancies described above in the link between electrical and hydraulic root properties can be, at least to some degree, attributed to differences among the plant species investigated and their growing conditions. Among herbaceous plants, maize has commonly been used to investigate root electrical properties. For instance, Ginsburg investigated the longitudinal and radial current conductivities of excised root segments and concluded that maize roots behave as leaking conductors. Similarly, Anderson and Higinbotham found that the σcr of maize cortical sleeves was comparable to the stele σcl. Recently, Rao et al. found that maize root conductivity decreases as the root cross-sectional area increases, and that primary roots were more conductive than brace roots. By contrast, BIA studies on woody plants have supported the hypothesis of a radial isolation effect of bark and/or suberized tissues. Plant growing conditions have been shown to affect both water uptake and solute absorption because of induced differences in root maturation and suberization. Redjala et al. observed that the cadmium uptake of maize roots grown in hydroponic conditions was higher than in those grown aeroponically. Tavakkoli et al. demonstrated that the salt tolerance of barley grown in hydroponic conditions differed from that of soil-grown barley. Zimmermann and Steudle documented how the development of Casparian bands significantly reduced the water flow in maize roots grown in mist conditions compared to those grown hydroponically.
During their investigation of the effect of hypoxia on maize, Enstone and Peterson reported differences in oxygen flow between plants grown hydroponically and plants grown in vermiculite. The results reported above and in other investigations are conducive to the hypothesis that root current pathways are affected by the growing conditions, as suggested in Urban et al. For example, the observations by Zimmermann and Steudle and by Enstone and Peterson may explain the negligible contributions to the BIA signals from distal roots under hydroponic conditions. At the same time, the more extensive suberization under natural soil and weather conditions could explain the good agreement between the rooting depth reported by Mary et al. based on the iCSD and the available literature data for grapevines in the field.

A potential cause for concern in the model fit is the wide intervals

To develop an adequate model for predicting viral transport in plant tissue, it is necessary to couple mathematical assumptions with an understanding of the underlying bio-geochemical processes governing virus removal, plant growth, growth conditions, and virus-plant interactions. For example, although a simple transport model without attachment-detachment (AD) could predict the viral load in the lettuce at harvest, it failed to capture the initial curvature in the viral load in the growth medium. An alternative to the AD hypothesis that could capture this curvature is the existence of two populations of viruses, as used in Petterson et al., one decaying more slowly than the other. However, a closer examination of the double exponential model revealed that it was not time invariant. This means that the time taken to decay from a concentration C1 to C2 is not unique and depends on the history of the events that occurred. Other viral models, such as the ones used in Peleg and Penchina, faced the same issue. The incorporation of AD made the model time invariant, always yielding the same decay time between two given concentrations. This model fitting experience showcases how mathematics can guide the understanding of biological mechanisms. The hypothesis of two different NoV populations is less plausible than that of viral attachment to and detachment from the hydroponic tank. While it appears that incorporating the AD mechanism does not significantly improve viral load prediction in the lettuce shoot at harvest, this is a consequence of force-fitting the model to data under the given conditions. Changing the conditions, for example by reducing the viral attachment rate to the tank wall, will underestimate the virus load in the lettuce shoot in the absence of AD.
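The time-invariance argument can be checked numerically: with a double-exponential (two-population) decay, the time needed for a given fold-reduction depends on where on the curve it starts. The pool fractions and rate constants below are illustrative, not fitted values from the study.

```python
import numpy as np
from scipy.optimize import brentq

# Double-exponential survival curve: two virus subpopulations with
# different first-order decay rates (illustrative parameters, per day).
A, B, k1, k2 = 0.5, 0.5, 0.1, 1.0
C = lambda t: A * np.exp(-k1 * t) + B * np.exp(-k2 * t)   # C(0) = 1

def time_to_halve(c_start):
    """Days needed to decay from c_start to c_start / 2."""
    t0 = brentq(lambda t: C(t) - c_start, 0, 500)       # when C == c_start
    t1 = brentq(lambda t: C(t) - c_start / 2, 0, 500)   # when C == c_start/2
    return t1 - t0

# The same two-fold drop takes different times depending on where on the
# curve it begins, so the model is not time invariant:
early, late = time_to_halve(0.9), time_to_halve(0.05)
print(early, late)
```

By contrast, a single first-order decay always takes ln(2)/k for a two-fold drop regardless of the starting concentration; here the late-time half-time converges to ln(2)/k1 because only the slow pool remains.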

Through this model fitting exercise, we also acknowledge that the model can be significantly improved with new insights on virus-plant interactions and more data on viral transport through plants. However, there is significant uncertainty in the data as well, suggesting that the transport process itself is noise prone. Moreover, from the perspective of risk assessment, the variability between dose-response models is higher than the within dose-response model variability. Since within dose-response model variability stems from uncertainty in viral loads at harvest, among other factors, the wide intervals do not exert a bigger effect than the discordance between different dose-response models. Some parameters are identifiable to a good degree through model fitting, but there is a large degree of uncertainty in the viral transport efficiencies and the AD kinetic parameters. While this could be a consequence of fitting a limited number of data points with several parameters, the viral load at harvest and the risk estimates were well constrained. This combination of large variation in parameters and ‘usefully tight quantitative predictions’ is termed the sloppiness of parameter sensitivities, and has been observed in physics and systems biology. Well-designed experiments may simultaneously reduce uncertainty in the parameters as well as in the predictions, thereby increasing confidence in the predictions. One possible experiment to reduce parameter uncertainty is recording the transpiration and growth rate to fit Eq. 6 independently and so acquire at and bt. An interesting outcome of our analysis is the strong association of risk with plant growth conditions. The health risks from consuming lettuce irrigated with recycled wastewater are highest in hydroponically grown lettuce, followed by soil-grown lettuce under Sc2, and lowest in soil-grown lettuce under Sc1. This difference in risk estimates stems to a large degree from the difference in AD kinetic constants.
Increasing katt,s will decrease risk, as more viruses become attached to the growth medium, while increasing kdet,s will have the opposite effect, as more detached viruses are available for uptake by the plant.
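The qualitative effect of the AD kinetic constants on exposure can be sketched with a minimal three-compartment first-order system (free, sorbed, and plant-internalized virus). This is not the fitted model from the study; all rate constants are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Free viruses C attach to the growth medium at rate k_att, detach at
# k_det, and only free viruses are taken up by the plant at rate k_up.
# Rates are illustrative (per day); initial free load normalized to 1.
def plant_load(k_att, k_det, k_up=0.2, days=30.0):
    def rhs(t, y):
        c, s, p = y                      # free, sorbed, plant-internalized
        return [-k_att * c + k_det * s - k_up * c,
                 k_att * c - k_det * s,
                 k_up * c]
    sol = solve_ivp(rhs, (0, days), [1.0, 0.0, 0.0], rtol=1e-8)
    return sol.y[2, -1]                  # viral load in the plant at harvest

low_att  = plant_load(k_att=0.1, k_det=0.5)   # weak attachment to medium
high_att = plant_load(k_att=2.0, k_det=0.5)   # strong attachment
high_det = plant_load(k_att=2.0, k_det=5.0)   # strong attachment, fast release
print(low_att, high_att, high_det)
```

Raising the attachment rate lowers the load reaching the plant over the growth period, while raising the detachment rate restores it, matching the sensitivity described above.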

The combined effect of the AD parameters depends on their magnitudes and is portrayed in Supplementary Fig. S5. This result indicates that a better understanding of the virus's interaction with the growth environment can lead to an improved understanding of risk. More importantly, this outcome indicates that soil plays an important role in the removal of viruses from irrigation water through adsorption of viral particles. An investigation focused on understanding the influence of soil composition on viral attachment would help refine the transport model. The risk predicted by this dynamic transport model is greater than the EPA annual infection risk as well as the WHO annual disease burden benchmark. The reasons for this outcome are many. First, there is significant variability in the reported internalization of viruses in plants. In searching for data for modeling NoV transport in plants, we filtered the existing data using the following criteria: 1) human NoV was used as the seed agent, and 2) quantitative viral results were reported in the growth medium and at different locations in the plant. Based on these criteria, the data from DiCaprio et al. represent the best available data on viral internalization and transport in lettuce. However, it is also important to note that a similar study by Urbanucci et al. did not observe human NoV internalization in lettuce. This discrepancy could be due to the specific subspecies of the plant and the growth conditions used in the studies. In addition, minor changes such as damage to roots or a decrease in the humidity of the growing environment can promote pathogen internalization. Alternatively, tracking viral transport through the growth medium and plant is challenging and may yield false results due to reaction inhibition in genome amplification and poor detection limits. The risk outcome of this study is conservative because it assumes an individual consumes the wastewater-irrigated lettuce daily for an entire year.
This assumption and the corresponding higher risk estimates are only applicable to a small portion of consumers, while most consumers in the U.S. are likely to have a more diverse diet.
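The daily-consumption assumption compounds per-serving risk into annual risk in the standard way, which is why it is conservative. The per-serving probability below is purely illustrative, not a result from the study.

```python
# Annual probability of at least one infection from repeated, independent
# daily exposures, as used against benchmarks such as the EPA annual
# infection target. p_daily here is a hypothetical per-serving risk.
def annual_risk(p_daily, servings_per_year=365):
    """P(at least one infection) assuming independent exposures."""
    return 1.0 - (1.0 - p_daily) ** servings_per_year

p_daily = 1e-4
print(annual_risk(p_daily))                        # daily consumption
print(annual_risk(p_daily, servings_per_year=52))  # weekly consumption
```

A consumer with a more diverse diet (say, one serving per week) faces a several-fold lower annual risk for the same per-serving probability, illustrating why the daily-consumption estimate applies only to a small portion of consumers.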

While the model outcomes presented here represent the best attempt given the available data, it is also possible that the internalization observed by DiCaprio et al. is an extreme case and typical internalization is lower. As previously discussed by others, risk estimates by different NoV dose-response models differed by orders of magnitude. This study primarily aims to introduce a viral transport model without advocating any one dose-response model. We hope that future refinement of pathogen dose-response models will reduce variability in risk estimates. The risk of consuming lettuce grown in soil as predicted by Sales-Ortells et al. is higher than our predictions, although the results of DiCaprio et al. were used in both studies. This is a consequence of considering the greater adsorption capability of soil, which is not reflected when assuming a simple input:output ratio. Using different inoculating concentrations of NoV and different body weight and consumption rate distributions also contributed to the difference in the outcomes, but to a lesser extent. Parameters for crisp head lettuce were obtained from several different sources, each possibly using a different sub-variety of crisp head. Yet global sensitivity analysis showed insensitivity of the risk estimates to several assumed and fitted parameters, lending confidence to the approaches taken to parametrize the model. The importance of accounting for the dynamics of viral transport is underscored by the sensitivity to tli,h in the hydroponic case and tht,s in the soil cases. This suggests that, given no change in lettuce consumption, changes in the irrigation schedule can affect the risk outcome. Such arguments were not possible with the approach of Sales-Ortells et al.
In soil-grown lettuce, the high sensitivity to kp indicates the role of plant-specific processes in mediating the risk outcome. In addition to a transport model predicting the NoV load in lettuce, we explore strategies to reduce the risk of NoV gastroenteritis by increasing the holding time of the produce after harvesting or by using larger hydroponic culture volumes. Although neither strategy could significantly alleviate the risks, the process highlights two strengths of modeling: 1) it provides mathematical support for arguments that would otherwise be less convincing; 2) it predicts the outcomes of experiments without the physical resources required to perform them. For instance, the model can be used to explore alternative irrigation schedules to reduce the NoV internalization risk. Modeling also helps encapsulate our understanding of the system and generate hypotheses. For example, simple first-order decay did not reproduce the trend observed in the water, which suggests that additional mechanisms are at play. We postulate the attachment of virus particles to the walls of the hydroponic system as one possible mechanism and examined the fit of the model. Although viral attachment to glass and other materials has been observed before, here it stands as a hypothesis that can be tested. In addition to generating and testing hypotheses, some of our model assumptions raise broader questions for future research. For example, it was assumed that viruses are transported at the rate of transpiration from the growth medium to the roots, yet not much is known regarding the role of roots in the internalization of viruses. Investigating the defense mechanisms of plant roots against passive viral transport, e.g. through rhizosphere microbiome interactions, may shed light on the broader understanding of plant-microbe interactions. The question of extending this model to other pathogen and plant systems draws attention to the dearth of data enabling such efforts.
While modeling another virus may not require changes to the model, understanding transport in other plants can be challenging.

The data required include models for growth rate and transpiration, plant growth characteristics including density and water content, as well as internalization studies to determine transport efficiencies. However, from the perspective of risk management, lettuce may be used as the worst-case estimate of risk in water reuse, owing to its high consumption with minimal pathogen inactivation by cooking. This worst-case scenario can be used to set water quality standards for irrigation water used in the production of fresh produce eaten raw. The models can also be extended to include pathogen transport to the plant tissue from manure/biosolids that are used as organic fertilizer.

It is impossible to separate the management of nitrogen fertilizer from that of irrigation water in irrigated agriculture. Methods of application, timing, and amounts applied are key concerns both for fertilization and for irrigation water application. While many experiments have characterized crop and soil responses to one variable, relatively few have endeavored to study the interaction of fertilizer management systems with irrigation management systems. The approach taken in this project was to examine interactions between these two centrally important components of agricultural production, with the ultimate objective of improving recommendations for the use of water and fertilizers in irrigated agriculture. Both greenhouse and field trials were established, at the Agricultural Field Station at the University of California, Riverside, and at the South Coast Field Station, Santa Ana. Greenhouse trials were undertaken to assess basic relationships between water and nitrogen supply and uptake, while field trials emphasized the use of current agricultural production technology to test relationships in the field. Most of the research has been published, either as graduate theses or in scientific journals.
These publications will be referenced throughout this report, and complete details of the research can be found within them. Greenhouse trials were first conducted to determine the relationship between the minimum NO3-N concentration and N and water accumulation by tomatoes and lettuce. Over a wide range of solution NO3-N concentrations, the ratio of N uptake to water uptake was constant, at approximately 100 µg N/L. This suggested that a constant, continuous supply of N in the irrigation water could supply the necessary nutrient without providing an excess. Experiments in soil columns with Romaine lettuce and Swiss chard demonstrated that chard could very efficiently decrease the solution N concentrations to near zero before water passed out the bottom of the column; lettuce was much less efficient. Tomatoes were grown in soil columns with sealed head spaces through which acid-scrubbed air was passed. Columns were irrigated frequently with water containing 0, 50, 100, or 200 µg N/L. Half of the treatments received irrigation water dripped on the surface at the base of the plant, and the other half received it 2.5 cm below the soil surface. Urea-ammonium-nitrate was the source of all N. Less than 0.1% of the applied N was trapped as volatilized NH3, even in the “most likely” treatment.
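The "supply without excess" logic of the constant uptake ratio can be made concrete with a one-line mass balance. The seasonal water-uptake figure below is hypothetical, used only to show that matching the irrigation concentration to the uptake ratio leaves no surplus N to leach.

```python
# If the crop removes N and water in a fixed ratio R (reported as roughly
# 100 ug N per litre of water in the greenhouse trials), then irrigation
# water supplied at that same concentration meets demand exactly.
R_UG_PER_L = 100.0            # N uptake per unit water uptake (from trials)
water_uptake_L = 250.0        # hypothetical seasonal water uptake per plant

n_demand_ug = R_UG_PER_L * water_uptake_L       # N the plant accumulates
n_supplied_ug = 100.0 * water_uptake_L          # irrigating at 100 ug N/L
surplus_ug = n_supplied_ug - n_demand_ug        # excess available to leach
print(n_demand_ug, surplus_ug)
```

Supplying above the ratio leaves a surplus in the drainage water; supplying below it starves the crop, which is why the constant ratio is agronomically useful.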

The implementation of an ecosystem approach would not be perfect

This is a standards-based approach, similar to that used by CARB to determine additionality for its current offset protocols, because it sets a single, uniform threshold that must be met for an offset credit to be issued. As long as the average rate of sequestration is accurate, variability between different projects may not matter, because the effects of carbon dioxide are largely not localized, so a reduction in one location is just as good as a reduction in another. Therefore, it may be administratively favorable, and just as environmentally effective, to calculate the average acreage required to sequester one metric ton of carbon dioxide, then calculate the offset credits to be issued to each offset project based on the acreage being offered for the project. Different averages could be calculated for different regions, as soil type varies greatly by region. However, this approach would prove misleading if only plots with lower-than-average sequestration ability engaged in the program. This may occur if soil with higher sequestration ability can also support higher-value uses than offset credit generation. If this is the case, the actual carbon sequestered under the program would fall below the program’s projected average. This may be indeterminable until the value of the offset credits is established in a market, although prices within a market can always fluctuate and thus may not provide the desired stability that alternative uses may offer. Many of these same problems exist in other types of offset projects which have been approved and are currently in use despite their potential accounting inconsistencies. Additionally, by using this standards-based approach, it is very likely that some projects’ carbon sequestration abilities would be lower than the offset program’s average, and those projects would be issued more offset credits than if their actual carbon sequestration had been measured.
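The standards-based crediting arithmetic, and the over-crediting it produces for below-average plots, can be sketched as follows. The program-wide average rate and the plot values are hypothetical.

```python
# Under a uniform standard, enrolled acreage converts to credits at a
# single program-wide average rate, regardless of each plot's true rate.
AVG_TONS_CO2_PER_ACRE = 0.5        # hypothetical program-wide average

def credits_issued(acres):
    """Offset credits (1 credit = 1 metric ton CO2) under the standard."""
    return acres * AVG_TONS_CO2_PER_ACRE

def true_sequestration(acres, actual_rate):
    """What the plot actually stores, given its site-specific rate."""
    return acres * actual_rate

acres = 100.0
issued = credits_issued(acres)
actual = true_sequestration(acres, actual_rate=0.3)   # below-average plot
over_credit = issued - actual       # tons credited but never stored
print(issued, actual, over_credit)
```

If adverse selection draws mostly below-average plots into the program, the over-credited tonnage accumulates program-wide, which is exactly the misleading outcome described above.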
This will always be an issue with a standards-based approach, as seen in the Citizens Climate Lobby litigation. Another option is to require a variable amount of acreage per offset credit under an offset project, based on how much carbon that particular land is actually estimated to sequester given the factors that affect its sequestration abilities.

This project-by-project option would theoretically give a more accurate estimate of the amount of carbon sequestered by each project, and would thus give managers a better idea of whether a project has actually offset a whole carbon credit. On the other hand, it would also take much more administrative resources and time to administer, owing to the variability of different land and the measurements required to quantify that variability, which could hinder the offset program’s implementation. Additionally, project-by-project approaches do not always ensure accuracy. The Kyoto Protocol’s Clean Development Mechanism utilized a project-by-project approach to determine additionality, and the results were reportedly rife with error and with under- or over-statement when it was convenient. Experience has shown that not even a project-by-project approach will yield perfect results. However, measures can be taken to avoid some of the shortcomings of a project-by-project approach. For example, some have suggested replacing opportunities for subjective determinations by the project proponent or host in the project approval or crediting process with objective criteria. This would decrease the opportunities for the project proponent or host to control the outcome of the project’s approval process. Some of the measurements and logistical work could be contracted out to independent third parties at the project proponent’s expense. This practice is used by certification programs, such as the Forest Stewardship Council and the Sustainable Forestry Initiative, in order to save the certification program the resources that would be required to perform the measurements and verifications themselves.
Likewise, establishing similar requirements for an offset program would decrease the resources required from CARB and should be acceptable to the project host and proponent as long as the cost is not prohibitive. Whereas a standards-based approach may make sense for determining an offset project’s additionality under the Livestock Protocol, due to that protocol’s relative simplicity, its measurability, and the uncommon use of BCSs without financial incentives, these factors are not as clearly present for a potential agricultural soil carbon sequestration offset program. The amount of methane captured by a BCS digester and subsequently destroyed under the Livestock Protocol is measured by a site-specific meter and thus does not present the same difficulties and variables that exist when measuring soil carbon sequestration. The court in Citizens Climate Lobby indicated that a standards-based approach to determining additionality made sense for the Livestock Protocol because the technology was so infrequently used without the financial incentives from the offset protocol.

This line of thinking may not so clearly carry over to a possible agricultural soil carbon sequestration offset program, due to the existing prevalence of cropland conservation practices. To determine whether this is true of whatever region would be included in the offset protocol, CARB could commission an outside group to analyze current prevalence, as it did when formulating the Livestock Protocol. Even if it were discovered that these conservation practices are generally uncommon, as with BCSs, so that additionality could be satisfied by a standards-based approach, the complications associated with other issues may be so complex and variable that a case-by-case measurement process using an ecosystem approach may still be preferable for determining whether a project’s emission reductions are legitimate and lacking in egregious incidental effects. Increased herbicide use as an incidental effect of agricultural soil carbon sequestration offset projects is unique to this type of offset program, but may also be resolved if approached from an ecosystem approach on a case-by-case basis. First, it would need to be determined whether increased herbicide use is actually a threat for the type of land participating in the offset program. If so, the effects of herbicide on local resources and the increased nitrous oxide emissions should be accounted for in the project’s approval process. Actively finding and implementing alternatives to herbicide use that make sense for the particular project host would alleviate the effects of increased herbicide use. One option is to replace the increased herbicide use that accompanies no-till and conservation-till practices with cover crops in combination with other agricultural practices.
At least one study claims that cover crops can greatly reduce the need for herbicide. Unfortunately, it seems that it is difficult to naturally replace the benefits of herbicide, as higher crop yields are reported when using herbicide instead of cover crops. Because cover crops and other agricultural practices do not seem to replicate the effects of herbicide, a voluntary decrease in herbicide use would be unlikely. If herbicide use were prohibited or limited under a future offset program and no-till or conservation tillage were a major part of the program, farmers would likely not be interested in participating, due to the difficulty or impossibility of balancing these two requirements. A better alternative may be to implement a pesticide management program within the agricultural soil carbon sequestration offset project.

The pesticide management program would differ by project, as different projects would likely have different crops with different surrounding environments and site-specific needs. The pesticide management program utilized by the USDA in the Missouri River Basin study observed a decrease in herbicide use when cropland conservation practices were implemented. These practices included prevention, avoidance, monitoring, and suppression strategies to reduce pesticide use. Prevention includes measures such as using seeds and transplants that are free of pests, preventing weeds from reproducing, eliminating hosts for pests and disease organisms, and scheduling irrigation to prevent disease development. Avoidance practices include crop rotation to avoid the pest or disease, planting seeds with genetic resistance to pests, choosing crops that will mature and be harvested before pests or disease develop, and not planting in parts of the field that are prone to crop failure from pests and disease. Monitoring includes testing to determine crop rotation selection and when suppression activities are required. Suppression includes cultivating and temperature management for weed control, traps and exclusion devices for pest control, biological control by disrupting mating, and more deliberate and informed use of pesticides as a last resort. A well-functioning ecosystem approach to management requires research and consultation with experts from many different disciplines to construct the program and to evaluate each project on a case-by-case basis. This approach requires resources and time above and beyond what would be required for a standards-based approach, the approach currently favored by CARB in its offset protocols.
Even once the experts are secured, scientists may remain too narrowly focused on their specific disciplines to do a full or fair assessment for purposes of an ecosystem approach. An ecosystem approach will identify trade-offs, which can create a whole separate discussion of priorities and values that may require an extended time frame for considering any program or project evaluated under the approach. Additionally, the court in Citizens Climate Lobby identified problems with offset programs that utilize the project-by-project analyses implicated in an ecosystem approach. Aside from being considered expensive and slow, the case-by-case analysis utilized under the Kyoto Protocol’s Clean Development Mechanism is often criticized as inaccurate due to excessively narrow or broad framing of the answers to questions that are supposed to determine whether a project is actually additional to a business-as-usual scenario. Despite these drawbacks, following an ecosystem approach when considering an agricultural carbon sequestration offset program and its subsequent projects would be more meaningful and accurate. It would force decision makers to discuss whether a proposed offset program or project would cause more overall harm than would occur without it, and what sort of trade-offs would be made.

Currently, harmful externalities of existing agricultural soil carbon sequestration offset programs seem to be ignored in at least some of the programs that implement the offset projects. For example, monitors for one soil carbon sequestration offset project noted that although herbicides are applied without considering the environmental consequences, “these activities are not part of the project under discussion.” An ecosystem approach would ensure that herbicide use and other possibly harmful externalities are included in discussions concerning the costs and benefits of offset programs and projects. This is especially important for offset programs and projects, which are in theory neutral—trading one ton of carbon in one location for one ton of carbon or carbon equivalent in another location—and help to implement the environmental purpose of the carbon market. Certain offset programs would garner less support if it were clear that their overall effect on the environment was a net negative. Thus, the ecosystem approach can help decision makers understand and create an agricultural soil carbon sequestration offset program with acceptable trade-offs and incidental effects.

Naturally occurring and fertilization-induced soil salinity and low nutrient use efficiency are significant constraints in modern agricultural production. At the cellular level, the extrusion of Na+ ions at the cell plasma membrane and the compartmentation of Na+ into vacuoles are efficient mechanisms to avoid excessive cytosolic Na+ concentration and maintain an adequate cytosolic K+/Na+ ratio. The plasma membrane electrical potential difference of root cells is usually maintained around −120 mV, while the tonoplast potential is positive and around 20–40 mV. These potentials allow the root to acquire sufficient K+ via high-affinity transporter systems in K+-limited soils or via low-affinity transporter systems at normal external K+ supplies.
The presence of high external Na+ concentrations suppresses the K+ conductance through LATS and competes with K+ uptake through HATS, causing a decrease in intracellular K+ with a concomitant [K+]/[Na+] imbalance. The plant vacuolar Na+/H+ antiporters were shown originally to mediate electroneutral Na+/H+ exchange, driving the excess cytosolic Na+ into the vacuole. The NHX proteins belong to the large monovalent cation/proton transporter family, which shows three distinct functional clades. In Arabidopsis, in addition to the plasma membrane-located NHX7 and NHX8, also known as SOS, intracellular NHXs sharing high sequence similarity are further divided into type-I and type-II, based on their subcellular location. Type-I NHXs are vacuolar-located, while type-II NHXs are found at the endosome, trans-Golgi network/Golgi, and prevacuolar compartments.
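The quantitative consequence of these membrane potentials for K+ acquisition can be sketched with the Nernst relation, a standard electrophysiology result that is not taken from the source:

```latex
% Nernst equilibrium potential for K+ (z = 1; RT/F ~ 25.7 mV at 25 C):
E_{\mathrm{K}} \;=\; \frac{RT}{zF}\,
  \ln\!\frac{[\mathrm{K}^+]_{\mathrm{out}}}{[\mathrm{K}^+]_{\mathrm{in}}}
\;\approx\; 25.7\,\mathrm{mV}\times
  \ln\frac{[\mathrm{K}^+]_{\mathrm{out}}}{[\mathrm{K}^+]_{\mathrm{in}}}
```

At a plasma membrane potential of about −120 mV, K+ is at electrochemical equilibrium when ln([K+]in/[K+]out) ≈ 120/25.7, i.e. [K+]in/[K+]out ≈ 100. Roughly a hundred-fold passive accumulation is therefore possible through channels (the low-affinity systems), and beyond that ratio, or at very low external K+, thermodynamics requires the active high-affinity transport systems mentioned above.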

Climate projections are not included here, but could be included in future analyses

Fragmentation and loss of farmland cause farmers to lose benefits associated with being part of a large farming community, such as sourcing inputs, accessing information, sharing equipment, and supporting processing and shipping operations. This is further exacerbated by loss of agricultural land near the Sacramento River, whether due to future flooding or to mitigation of habitat for wild species. Also, by fragmenting the landscape and consuming more land area in the floodplain, urbanization in the A2 scenario could work against the provision of ecosystem services related to water quality, biodiversity conservation, and open space with its aesthetic and recreational value. Strengthening the urban community’s interest in and support of farmland preservation is a key challenge for mitigation of GHG emissions and for the long‐term viability of agriculture in Yolo County. Historically, urban and suburban development has covered many regions within California that were formerly leading agricultural producers, including the Los Angeles Basin and Orange County, much of the San Francisco Bay Area, and areas of the Central Valley near Fresno, Modesto, Merced, Sacramento, and Stockton. Between now and the year 2050, much additional urbanization is likely near these metropolitan areas, as well as in locations at a considerable distance from existing major cities, such as the Salinas Valley and Ventura County. Strategies to preserve agricultural land from urbanization are likely to dovetail with strategies to adapt to climate change and mitigate greenhouse gas emissions, reducing the state’s overall vulnerability to climate change. For example, maintaining a strong greenbelt of agricultural land around existing urban areas and adopting compact urban development policies can greatly reduce GHG emissions, while preserving agricultural production and potentially enhancing ecosystem services.
This section considers urbanization implications related to agriculture and climate change, based on statewide modeling of 2050 urban growth scenarios, using existing datasets regarding agricultural production, land use, and soils.

The actual complexities of urban‐agriculture interactions require a great deal of monitoring and interdisciplinary synthesis that is beyond our scope; e.g., urban heat island or ozone effects may lead to additional vulnerabilities for agriculture with climate change. Our aim here is instead to present an initial overview of potential agricultural adaptation and vulnerability effects related to urbanization, and to suggest directions for further research. The strong policy framework in California for GHG mitigation under AB 32, the Global Warming Solutions Act of 2006, has drawn attention to the fact that California’s urban planning framework is in a state of uncertainty and potential transition. SB 375, the Sustainable Communities and Climate Protection Act of 2008, requires Metropolitan Planning Organizations (MPOs) within the state to prepare “sustainable communities strategies” that show how each region will meet GHG‐reduction targets through integrated land use, housing, and transportation planning. As of 2011, MPOs are just beginning to develop such plans. SB 375 is widely seen as having the potential to usher in a new era of land use planning in California, in which regional “blueprints” will be adopted to manage and reduce urban and suburban expansion. However, it is by no means clear how the California Air Resources Board or the legislature will react to ensure that such potential is in fact met. In addition, as of 2010 every county and municipality in the state must now consider GHG emissions within their General Plans and associated Environmental Impact Reports. Since 2007, the state Attorney General’s office has frequently threatened legal action against those jurisdictions that do not include planning alternatives to reduce GHG emissions. The California Air Resources Board is also strongly encouraging local governments and large institutions to prepare Climate Action Plans and GHG emissions inventories, and many have already done so.

These actions mean that local governments are now more actively exploring land use planning alternatives to mitigate GHG emissions and adapt to climate change. Although political resistance to growth management will certainly continue, such trends mean that in the future the state’s local governments are more likely to consider growth management scenarios that respond to the twin goals of preserving agricultural land and responding to climate change. This institutional and political environment affects our analysis below, and will be referred to when appropriate.

To analyze the impact of future urbanization scenarios on agricultural landscapes within California in the context of climate change, we relied on modeling done by the UC Davis Information Center for the Environment (ICE) using UPlan software under a separate portion of this Climate Change Vulnerability and Adaptation Study for California. We then performed additional analyses on the UPlan projections for 2050, using statewide data on agriculture, land use, and soils. UPlan is a geographic information system (GIS)‐based land use allocation model developed by ICE and used for urban planning purposes by more than 20 counties in California, including a number of rural counties in the San Joaquin Valley. It is particularly useful for large‐scale urban growth scenarios in rural areas, and has been used in a research context to analyze urbanization effects on natural resources, urbanization effects on wildfire risk, and the effect of land use policies on natural land conversion. Using UPlan, researchers first develop a base of GIS information related to geographical features such as roads, rivers and streams, floodplains, parkland, and existing urban areas. They then supply demographic inputs within future urban growth scenarios. Researchers also specify geographical features that are likely to attract urban growth, discourage growth, or prevent growth, and assign weightings to each.
For example, freeway interchanges may attract development, since builders desire the locational advantages.
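This attract/discourage/prevent weighting, and the allocation it drives, can be sketched in miniature. The sketch below is an illustrative toy, not UPlan's actual algorithm: the feature names, weights, and demand figure are invented, and real UPlan operates on GIS layers of 50-meter grid cells rather than Python dicts.

```python
# Toy sketch of a UPlan-style scoring and allocation step (illustrative
# only; all feature names and weights below are invented).

def score_cell(cell, weights):
    """Net attractiveness of one grid cell; None means masked (never built)."""
    if cell.get("mask"):           # e.g. land acquired as public open space
        return None
    score = 0.0
    for feature, weight in weights.items():
        if cell.get(feature):
            score += weight        # attractors positive, discouragers negative
    return score

def allocate(cells, weights, demand_cells):
    """Assign projected growth to the highest-scoring unmasked cells."""
    candidates = [(score_cell(c, weights), c) for c in cells]
    candidates = [(s, c) for s, c in candidates if s is not None]
    candidates.sort(key=lambda sc: sc[0], reverse=True)
    return [c for _, c in candidates[:demand_cells]]

weights = {"near_interchange": 3.0,   # attracts development
           "prime_farmland": -2.0,    # discourages development
           "in_floodplain": -1.0}     # discourages development

cells = [{"id": 1, "near_interchange": True},
         {"id": 2, "prime_farmland": True},
         {"id": 3, "mask": True},     # parkland: urbanization prevented outright
         {"id": 4, "near_interchange": True, "in_floodplain": True}]

print([c["id"] for c in allocate(cells, weights, demand_cells=2)])  # [1, 4]
```

The masked cell never enters the ranking, and the floodplain penalty only lowers a cell's priority rather than excluding it, mirroring the distinction between "discourage" and "prevent" described above.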

Designation as prime farmland may discourage development, since local governments may take this factor into account within their zoning and growth management policy making, and farmers may participate in the Williamson Act or other programs designed to discourage urbanization. Acquisition of land as public open space will prevent urbanization altogether, thus making a “mask” designation appropriate within UPlan. Relying on the combined weightings for each 50‐meter grid cell, UPlan allocates the future population increase across four residential land use types and several nonresidential land use types. The result is a spatial projection of future urbanization with designations for each land use type. ICE staff developed two main UPlan scenarios for statewide mapping within this project. One is a base-case scenario of urban development. The other is a “smart growth” (SG) alternative that clusters development into nodes, specifies somewhat higher densities, and places more development within existing city borders. Such scenarios reflect growth management philosophies within the state during recent decades; many local and regional planning agencies have developed similar alternatives within their own planning processes. The ICE SG scenario is relatively conservative and does not assume any dramatic changes to current planning policies. In reality, over the past two decades, development within the state near large metropolitan areas has become increasingly compact and focused on infill sites rather than greenfield locations. The California agricultural areas most affected by urbanization between now and 2050 will not necessarily be those with the greatest overall amount of new urban and suburban development. Rather, other factors will come into play.
These include the amount of agricultural base remaining within the region, the extent to which urban development fragments agricultural landscapes, and the extent to which farmers benefit from increased access to urban markets. If there is relatively little agricultural base left, as is currently the case around some of the state’s large metropolitan areas, then it becomes more difficult for farmers to find suppliers, processors, and other agricultural support functions. This may affect farm operations on a crop‐by‐crop basis. For example, there is only one processor of apples left in Sonoma County, formerly home to extensive apple orchards, and if that facility closes, then production of classic varieties such as Gravensteins will become difficult. If urban development fragments agricultural land into isolated pockets separated by roads, subdivisions, office parks, and other urban facilities, then it becomes more difficult for farmers to move equipment from field to field, and conflicts may arise with new suburban residents over noise, odor, and potential spray drift associated with farming operations. Fragmentation may also reduce the benefits farmers receive from being part of a large farming community, such as sourcing inputs, accessing information, sharing equipment, and supporting processing and shipping operations. Impacts on agriculture from urbanization will then be disproportionate to the land area covered. On the other hand, urbanization can benefit agriculture if it increases access to markets. This factor is likely to benefit some types of agriculture more than others. Specialty production of fruits, vegetables, meats, and dairy products for use by restaurants, distribution through high‐end grocery stores, and sale at farmers’ markets and through community‐supported agriculture networks is likely to benefit.
Conversely, production of grains and lower‐value fruits and vegetables is not likely to see a boost from the presence of local markets, since farmers primarily sell these bulk commodities to large‐scale processing facilities for regional, national, or international distribution.

Addressing climate change is a priority issue for Californians and involves individuals, businesses, and government.

The Global Warming Solutions Act of 2006 seeks to reduce the emission of greenhouse gases to 1990 levels by 2020. This legislation goes into effect gradually, so that people will have time to implement the necessary actions to come into compliance by the 2020 deadline. Some businesses, however, are proactive on climate change mitigation, and are signing up through mechanisms such as the Climate Action Registry to become leaders and early adopters of GHG emission‐reduction programs. By making progress toward carbon neutrality ahead of deadlines, these companies may qualify for incentive programs and be recognized as environmental leaders. Among such leaders are a number of wine companies that manage their vineyard lands and adjoining forests in ways that maximize biomass on the landscape and balance the emissions generated in their production processes. This paper is a case study of one such company, Fetzer/Bonterra Vineyards, which has set objectives to reduce its GHG emissions and to use renewable sources to meet much of its energy demand. As an environmentally conscious business, and a major grower and producer of wines, Fetzer/Bonterra attempts to achieve a balance between habitat conservation, ecologically based organic production, production goals, and financial profit. When the company purchased ranches for growing grapes in Mendocino County, a decision was made to maintain a large fraction of that land in natural habitat without livestock grazing. This was based on an environmental ethic to combine wine production with conservation of the landscape’s natural integrity. This approach also included a series of sustainability measures. To learn more about the carbon storage and dynamics on its land, Fetzer/Bonterra collaborated with researchers at the University of California, Davis to conduct an assessment of the distribution and magnitude of carbon stored across the vineyard‐woodland landscape.
The main goal was to find a way to assess carbon stocks to determine the absolute and relative amounts of carbon stored in different vegetation and land use types. Fetzer/Bonterra’s rationale behind the assessment was to identify the relative value of the different vegetation types on their land in terms of contributing to the positive, or offset, side of their carbon budget. Because the study also collected data on the different woody plant species, information on the diversity of plant communities was obtained. The species and community diversity data make it possible to assess the relationship between carbon stocks and biodiversity, and to show how habitat type affects the magnitude of C stocks. This approach will allow vineyard managers to prioritize non‐vineyard land for carbon storage, biodiversity and habitat conservation, and eventually other types of ecosystem services, such as keeping steep slopes and stream corridors forested to protect against erosion and sediment loading in waterways. Greater carbon stocks in forests are to be expected, but it is significant to recognize that Fetzer/Bonterra uses a management approach for a combination of perennial woody crops and conserved habitat that maximizes the contribution of the heterogeneous landscape to total carbon stocks. Using this Fetzer/Bonterra case study experience as an example, this paper showcases the important role that California agricultural landscapes can play in climate change adaptation and mitigation strategies.

All biocuration is time consuming and requires assistance from expert biologists

Analyses of single census years provide wildly varying estimates of the effect of landscape simplification on insecticide use. It is evident that the relationship between landscape simplification and insecticide use is spatially and temporally context-dependent, and that there are a number of ways that context could be determined. Although it remains unclear what underlying mechanisms are providing the context, it is abundantly clear that the relationship between landscape simplification and insecticide use observed in 2007 does not hold for previous census years. It is time to move beyond simply asking whether landscape simplification drives insecticide use and instead focus on what factors may explain the variability in this relationship over time and space.

We are in an exciting time in Biology. Genomic discovery on a large scale is cheaper, easier and faster than ever. Picture a world where every piece of biological data is available to researchers from easy-to-find and well-organized resources; the data are accurately described and available in accessible and standard formats; the experimental procedures, samples and time points are all completely documented; and researchers can find answers to any question about the data that they have. Imagine that, with just a few mouse clicks, you could determine the expression level of any gene under every condition and developmental stage that has ever been tested. You could explore genetic diversity in any gene to find mutations with consequences. Imagine seamless and valid comparisons between experiments from different groups. Picture a research environment where complete documentation of every experimental process is available, and data are always submitted to permanent public repositories, where they can be easily found and examined.

We ‘can’ imagine that world, and feel strongly that all outcomes of publicly funded research can and should contribute to such a system. It is simply too wasteful to ‘not’ achieve this goal. Proper data management is a critical aspect of research and publication. Scientists working on federally funded research projects are expected to make research findings publicly available. Data are the lifeblood of research, and their value often does not end with the original study, as they can be reused for further investigation if properly handled. Data become much more valuable when integrated with other data and information. For example, traits, images, seed/sample sources, sequencing data and high-throughput phenotyping results become much more informative when integrated with germplasm accessions and pedigree data. Access to low-cost, high-throughput sequencing, large-scale phenotyping and advanced computational algorithms, combined with significant funding by the National Science Foundation, the US Department of Agriculture and the US Department of Energy for cyberinfrastructure and agricultural-related research, has fueled the growth of databases to manage, store, integrate, analyse and serve these data and tools to scientists and other stakeholders. To describe agricultural-related databases, we use the term ‘GGB database’. GGB databases include any online resource that holds genomic, genetic, phenotypic and/or breeding-related information and that is organized via a database schema and contained within a database management system, or within non-relational storage systems. GGB databases play a central role in the communities they serve by curating and distributing published data, by facilitating collaborations between scientists and by promoting awareness of what research is being done and by whom in the community. GGB databases prevent duplicated research efforts and foster communication and collaboration between laboratories.

As more and more organisms are sequenced, cross-species investigations become increasingly informative, requiring researchers to use multiple GGB databases and requiring that GGB databases share data and use compatible software tools. Use of common data standards, vocabularies, ontologies and tools will make curation more effective, promote data sharing and facilitate comparative studies. The AgBioData consortium was formed in 2015 in response to the need for GGB personnel to work together to come up with better, more efficient database solutions. The mission of the consortium, comprised of members responsible for over 25 GGB databases and allied resources, is to work together to identify ways to consolidate and standardize common GGB database operations to create database products with more interoperability. The AgBioData consortium joins the larger scientific community in embracing the Findable, Accessible, Interoperable and Reusable (FAIR) data principles, established by stakeholders from the scientific, publishing and library communities. FAIR principles have rapidly become standard guidelines for proper data management, as they outline a road map to maximize data reuse across repositories. However, more specific guidelines on how to implement FAIR principles for agricultural GGB data are needed to assist and streamline implementation across GGB databases. Members of the AgBioData consortium convened in Salt Lake City, UT on 18 & 19 April 2017 to describe challenges and recommendations for seven topics relevant to GGB databases—Biocuration, Ontologies, Metadata and persistence, GGB database platforms, Programmatic access to data, Communication and Sustainability. Preceding this workshop, a survey was sent to all AgBioData members regarding the seven topics, in order to identify the concerns and challenges of AgBioData members. The results were used to focus and foster the workshop discussions.
Here we present the current challenges facing GGBs in each of these seven areas and recommendations for best practices, incorporating discussions from the Salt Lake City meeting and results of the survey.

The purpose of this paper is 3-fold: first, to document the current challenges and opportunities of GGB databases and online resources regarding the collection, integration and provision of data in a standardized way; second, to outline a set of standards and best practices for GGB databases and their curators; and third, to inform policy and decision makers in the federal government, funding agencies, scientific publishers and academic institutions about the growing importance of scientific data curation and management to the research community. The paper is organized by the seven topics discussed at the Salt Lake City workshop. For each topic, we provide an overview, challenges and opportunities and recommendations. The acronym ‘API’ appears frequently in this paper, referring to the means by which software components communicate with each other: i.e. a set of instructions and data transfer protocols. We envision this paper will be helpful to scientists in the GGB database community, publishers, funders and policy makers and agricultural scientists who want to broaden their understanding of FAIR data practices.

Biocurators strive to present an accessible, accurate and comprehensive representation of biological knowledge. Biocuration is the process of selecting and integrating biological knowledge, data and metadata within a structured database so that they can be accessible, understandable and reusable by the research community. Data and metadata are taken from peer-reviewed publications and other sources and integrated with other data to deliver a value-added product to the public for further research. Biocuration is a multidisciplinary effort that involves subject area experts, software developers, bioinformaticians and researchers. The curation process usually includes a mixture of manual, semi-automated and fully automated workflows.
Manual biocuration is the process of an expert reading one or several related publications, assessing and/or validating the quality of the data and entering the data manually into a database using curation tools, or by providing spreadsheets to the database manager. It also encompasses the curation of facts or knowledge, in addition to raw data; for example, the role a gene plays in a particular pathway. These data include information on genes, proteins, DNA or RNA sequences, pathways, mutant and nonmutant phenotypes, mutant interactions, qualitative and quantitative traits, genetic variation, diversity and population data, genetic stocks, genetic maps, chromosomal information, genetic markers and any other information from the publication that the curator deems valuable to the database consumers. Manual curation includes determining and attaching appropriate ontology and metadata annotations to data. This sometimes requires interaction with authors to ensure data are represented correctly and completely, and indeed to ask where the data reside if they are not linked to a publication. In well-funded large GGB databases, manually curated data may be reviewed by one, two or even three additional curators. Manual biocuration is perhaps the best way to curate data, but no GGB database has enough resources to curate all data manually. Moreover, the number of papers produced by each research community continues to grow rapidly. Thus, semi-automated and fully automated workflows are also used by most databases. For example, a species-specific database may want to retrieve all Gene Ontology (GO) annotations for genes and proteins for their species from a multi-species database like UniProt. In this case, a script might be written and used to retrieve those data ‘en masse’. Prediction of gene homologs, orthologs and function can also be automated.
Some of these standard automated processes require intervention at defined points from expert scientists to choose appropriate references and cut-off values, perform verifications and do quality checks. All biocuration aims to add value to data. Harvesting biological data from published literature, linking it to existing data and adding it to a database enables researchers to access the integrated data and use it to advance scientific knowledge. The manual biocuration of genes, proteins and pathways in one or more species often leads to the development of algorithms and software tools that have wider applications and contribute to automated curation processes.
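The kind of ‘en masse’ retrieval script mentioned above can be sketched against UniProt's REST search endpoint. This is a hedged sketch: the endpoint URL and the `accession`/`go_id` field names reflect UniProt's current REST API as we understand it and should be verified against its documentation; the sample TSV reply is invented for illustration.

```python
import urllib.parse
import urllib.request

# UniProt REST search endpoint (verify against UniProt's API docs).
UNIPROT_SEARCH = "https://rest.uniprot.org/uniprotkb/search"

def go_query_url(taxon_id, size=500):
    """Build a search URL returning accession + GO IDs as TSV for one taxon."""
    params = urllib.parse.urlencode({
        "query": f"organism_id:{taxon_id} AND reviewed:true",
        "fields": "accession,go_id",
        "format": "tsv",
        "size": size,
    })
    return f"{UNIPROT_SEARCH}?{params}"

def parse_go_tsv(tsv_text):
    """Map each accession to its list of GO IDs from a UniProt TSV reply."""
    rows = tsv_text.strip().splitlines()[1:]            # skip header row
    annotations = {}
    for row in rows:
        accession, go_ids = (row.split("\t") + [""])[:2]
        annotations[accession] = [g.strip() for g in go_ids.split(";") if g.strip()]
    return annotations

# Network call commented out; parse a canned reply instead:
# tsv = urllib.request.urlopen(go_query_url(3702)).read().decode()
sample = "Entry\tGene Ontology IDs\nP12345\tGO:0005634; GO:0003700\n"
print(parse_go_tsv(sample))  # {'P12345': ['GO:0005634', 'GO:0003700']}
```

Real pipelines would also follow UniProt's pagination headers to page through results larger than one batch.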

For example, The Arabidopsis Information Resource has been manually adding GO annotations to thousands of Arabidopsis genes from the literature since 1999. This manual GO annotation is now the gold-standard reference set for all other plant GO annotations and is used for inferring gene function of related sequences in all other plant species. Another example is the manually curated metabolic pathways in EcoCyc, MetaCyc and PlantCyc, which have been used to predict genome-scale metabolic networks for several species based on gene sequence similarity. The recently developed Plant Reactome database has further streamlined the process of orthology-based projections of plant pathways by creating simultaneous projections for 74 species. These projections are routinely updated along with the curated pathways from the Reactome reference species Oryza sativa. Without manual biocuration of experimental data from Arabidopsis, rice and other model organisms, the plant community would not have the powerful gene function prediction workflows we have today, nor would the development of the wide array of existing genomic resources and automated protocols have been possible. Biocurators continue to provide feedback to improve automated pipelines for prediction workflows and help to streamline data sets for their communities and/or add value to the primary data.

Current efforts in machine learning and automated text mining to pull data or to rank journal articles for curation work to some extent, but so far these approaches are not able to synthesize a clear narrative and thus cannot yet replace biocurators. The manual curation of literature, genes, proteins, pathways etc. by expert biologists remains the gold standard used for developing and testing text-mining tools and other automated workflows.
We expect that although text-mining tools will help biocurators achieve higher efficiency, biocurators will remain indispensable to ensure the accuracy and relevance of biological data. Well-curated GGB databases play an important role in the data lifecycle by facilitating dissemination and reuse. GGB databases can increase researchers’ efficiency, increase the return on research funding investment by maximizing reuse and provide use metrics for those who desire to quantify research impact. We anticipate that the demand for biocurators will increase as the tsunami of ‘big data’ continues. Despite the fact that the actual cost of data curation is estimated to be less than 0.1% of the cost of the research that generated the primary data, data curation remains underfunded.

Databases are focused on serving the varied needs of their stakeholders. Because of this, different GGB databases may curate different data types or curate similar data types to varying depths, and are likely to be duplicating efforts to streamline curation. In addition, limited resources for most GGB databases often prevent timely curation of the rapidly growing data in publications. The size and the complexity of biological data resulting from recent technological advances require the data to be stored in computable or standardized form for efficient integration and retrieval. Use of ontologies to annotate data is important for integrating disparate data sets. Ontologies are structured, controlled vocabularies that represent specific knowledge domains.

A rich range of secondary metabolites is predicted for the genomic bins

We next examined the genomic bins using antiSMASH 2.0. The majority of the clusters overall were uncategorized, followed by saccharides and fatty acids. Non-ribosomal peptide synthases, bacteriocins, terpenes, and polyketide synthases were also common. Arylpolyenes, lasso- and lantipeptides also were predicted, as was one instance each of a siderophore and a butyrolactone. MW5 had 229 clusters in 33 bins. MW6 had 371 clusters in 22 bins. DOM had 10 clusters in 158 bins. Notably, the CPR genomes that dominate the water samples have few predicted secondary metabolites on average. Because MW5 was dominated by these genomes, its density of clusters is correspondingly lower. However, some of the individual CPR bins are dense with biosynthetic clusters. Thus, while poor representation of CPR in existing databases may reduce the utility of this approach, some of the genomes certainly have detectable clusters. Grouping the genomes phylogenetically, the most clusters occur in the Planctomycetes OM190. A range of cluster densities was apparent in the rest of the bins. Notably, ladderane biosynthesis, a hallmark of the Planctomycetes, was detected by antiSMASH in all eight of the Planctomycete assemblies, confirming that these are all true Planctomycete genomes. AntiSMASH results show a rich diversity of secondary metabolites in the anammox genomes. Specifically enriched are fatty acids, saccharides, bacteriocins, and terpenes. The OM190 genome was additionally enriched in non-ribosomal peptide synthases, and anatoxin production was predicted. While anatoxin is known to come from cyanobacteria and not from Planctomycetes, its known biosynthetic pathway involves polyketide synthases, of which 18 are predicted by antiSMASH in this genome. Thus, while this cluster does not likely encode a cyanotoxin, the biosynthetic potential of this genome could certainly produce toxic secondary metabolites.
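The cluster-per-bin densities implied by the counts above make the CPR effect concrete; the numbers are taken directly from the text, and the calculation shows why CPR-dominated MW5 sits well below MW6:

```python
# antiSMASH biosynthetic cluster counts from the text: (clusters, bins).
counts = {"MW5": (229, 33), "MW6": (371, 22), "DOM": (10, 158)}

# Clusters per genomic bin for each sample.
density = {sample: clusters / bins for sample, (clusters, bins) in counts.items()}
for sample in sorted(density, key=density.get, reverse=True):
    print(f"{sample}: {density[sample]:.2f} clusters/bin")
# MW6: 16.86 clusters/bin
# MW5: 6.94 clusters/bin
# DOM: 0.06 clusters/bin
```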

Indeed, a large number of the predicted secondary metabolites are biologically active molecules that may target other cells in the microbial community and could potentially have side effects on mammals. We saw evidence of rich secondary metabolite biosynthetic potential in several other genomes as well, including representatives of OP3, OP11, Acidobacteria, Bacteroidales, Chlorobi, Chloroflexi, Domibacillus, Entotheonella, Leptonema, Nitrospira, Sphingomonas, and Spirochaetes from DOM. Notably, we assembled an incomplete genome that appears to be related to cyanobacterial toxin producers. Its best RAPSEARCH hit was to a Planktothrix agardhii genome. The 500 kb fragment is rich in non-ribosomal peptide synthases, which are another toxin production system in the cyanobacteria and can poison humans. To assess whether this might be a toxin producer, we built a BLAST database of microcystin genes found on NCBI and compared it to the genome fragment using TBLASTX. We found numerous hits > 300 bp throughout the fragment, but the percent identity was roughly 40%, indicating that the sequences are diverged. Overall, antiSMASH predicts an enrichment in biosynthetic clusters with antimicrobial activity, including bacteriocins, non-ribosomal peptide synthases, polyketide synthases, and lassopeptides. While many antibiotic compounds may have broad targets or even non-antagonistic effects, bacteriocins usually have very specific antibiotic activity, often against closely related strains. The prevalence of predicted bacteriocins in the genomes suggests direct competition between genomes.
For example, the Brocadiaceae Planctomycete genomes that co-occur in MW6 are predicted to have on average one bacteriocin per genome, which could be used to compete with the related strains.

Overall, we find that the metagenomic communities present in groundwater reflect the measured chemical conditions: we measured high nitrogen and DOC, as well as a microbial community largely dominated by nitrifier, denitrifier, and anammox bacteria.
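The microcystin screen described above amounts to filtering BLAST hits by aligned span and inspecting their percent identity. A hedged sketch follows: the helper name and sample rows are ours, and we assume TBLASTX was run with standard tabular output (`-outfmt 6`), whose column order is fixed by BLAST+.

```python
# Parse BLAST/TBLASTX -outfmt 6 rows (qseqid sseqid pident length mismatch
# gapopen qstart qend sstart send evalue bitscore) and keep hits spanning
# > 300 bp of the query, mirroring the threshold used in the text.

def screen_hits(outfmt6_lines, min_span_bp=300):
    hits = []
    for line in outfmt6_lines:
        cols = line.rstrip("\n").split("\t")
        qseqid, sseqid, pident = cols[0], cols[1], float(cols[2])
        qstart, qend = int(cols[6]), int(cols[7])
        span_bp = abs(qend - qstart) + 1     # query span in nucleotides
        if span_bp > min_span_bp:
            hits.append((qseqid, sseqid, pident, span_bp))
    return hits

raw = [  # invented example rows, not data from the study
    "frag1\tmcyB\t41.2\t150\t80\t3\t1000\t1450\t10\t160\t1e-20\t95",
    "frag1\tmcyD\t39.5\t60\t30\t1\t2000\t2150\t5\t55\t1e-5\t40",
]
for q, s, pid, span in screen_hits(raw):
    print(q, s, pid, span)   # only the 451 bp hit at ~41% identity passes
```

Hits that pass the span filter but sit near 40% identity, as in the study, point to diverged relatives of the microcystin genes rather than close homologs.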

Our analysis revealed strain-level variation within key members of this community as well as the potential for rich biosynthetic capacity. We also found evidence for niche specialization based on analysis of the genetic pathways present. Such niche specialization between species in an anammox community was recently reported for a partial nitritation anammox reactor in a wastewater treatment plant. We find evidence that a similar microbial community is present in shallow, nitrate-rich groundwater, and that there are multiple anammox strains within a single well. The prevalence of the anammox genomes at over 10% abundance suggests that these bacteria are major drivers of the natural geochemistry of this environment. An implicit consequence is the conversion of ammonium and nitrate into nitrite and N2 gas. Additionally, nitrite-dependent anaerobic oxidation of methane may be coupled to anammox in this community, reducing potential greenhouse gas emissions.

An important aspect of the present study is that the source of the nitrate is cow manure, which also carries a considerable carbon load that supports microbial metabolism. Nitrates derived from synthetic fertilizers do not carry a carbon source and thus may be associated with a considerably different microbial community. Thus, different sources of nitrate could have different potential for bioremediation. Furthermore, we must consider the source of the microbial community in the environment. The Central Valley of California was once an extensive wetland, and wetland-associated microbial communities perform nitrifier, denitrifier, n-damo, and anammox reactions. If the source of the community were different, we might expect to see a different set of metabolic processes with different implications for water quality and greenhouse gas emissions.
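The nitrogen conversions implied here follow the well-established two-step couple: partial reduction of nitrate supplies nitrite, which anammox bacteria then combine with ammonium to yield dinitrogen gas. In summary form:

```latex
% Nitrate reduction supplies nitrite; anammox consumes it with ammonium.
\begin{align}
\mathrm{NO_3^-} + 2\,\mathrm{H^+} + 2\,e^- &\rightarrow \mathrm{NO_2^-} + \mathrm{H_2O}\\
\mathrm{NH_4^+} + \mathrm{NO_2^-} &\rightarrow \mathrm{N_2} + 2\,\mathrm{H_2O}
\end{align}
```

The second reaction is the net anammox stoichiometry; the coupling to n-damo mentioned above arises because that pathway also consumes nitrite.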

An overlap in anaerobic nitrogen and sulfur redox reactions was shown by Canfield et al. in the oxygen minimum zone of the ocean. Our metagenomic and chemical data indicate the potential for a similar overlap in nitrogen and sulfur cycles in groundwater, with OP11 Microgenomates specifically involved through assimilatory sulfur reduction. As shown previously, nitrate levels were highest in MW5 and lower in MW6 and DOM. The sulfate levels follow a similar trend: MW5, 68.8 ppm; MW6, 15.3 ppm; DOM, 2.3 ppm. The microbial abundances and corresponding chemical pathway analysis suggest that these pathways overlap in organisms that exist in the appropriate nutrient conditions. Furthermore, the presence of Candidatus Methylomirabilis with the anammox communities in MW6 and DOM supports the findings of Shen et al. that denitrification may be coupled to methane oxidation, reducing potential methane emissions from degrading manure. The high abundance of anammox and associated nitrifier and denitrifier bacteria in the nitrate-rich samples suggests that excess nitrate and ammonium in groundwater may be naturally remediated [or mineralized] to N2 by the endogenous microbiota. The presence of a natural microbial community that closely resembles the nitritation-anammox active sludge community for sewage wastewater denitrification could also be taken as an indication that the shallow groundwater in the Central Valley is recharged from sources similar to sewage wastewater. Based on extensive, controlled studies of this community, it appears possible that simply by decreasing the input of manure into the groundwater, the nitrogen pollutants could decrease below harmful levels. This implication holds true in the shallow groundwater as well as in the deep groundwater, where we still see evidence of the nitritation-anammox community despite lower levels of nitrate.
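The claim that nitrate and sulfate "follow a similar trend" across the three wells amounts to a rank agreement. A minimal check, using the sulfate measurements quoted above and the nitrate ordering from the text (the text gives only the ordering, not nitrate values, so ranks stand in for them):

```python
# Sulfate measurements from the text (ppm); nitrate entered as ranks only,
# since the text gives the ordering (MW5 highest) but not the values.
sulfate = {"MW5": 68.8, "MW6": 15.3, "DOM": 2.3}
nitrate_rank = {"MW5": 1, "MW6": 2, "DOM": 3}  # 1 = highest

# Order the wells by each analyte, highest first.
sulfate_order = sorted(sulfate, key=sulfate.get, reverse=True)
nitrate_order = sorted(nitrate_rank, key=nitrate_rank.get)

# The trends agree if both analytes rank the wells identically.
trend_matches = sulfate_order == nitrate_order
```

Both analytes rank the wells MW5 > MW6 > DOM, which is the sense in which the two cycles co-vary here.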
The nitrate:DOC ratio is similar between MW5, MW6, and DOM, although the total DOC and nitrate levels differ by an order of magnitude between each of the samples, with MW5 >> MW6 >> DOM, presumably due to different levels of dilution of the manured water with recharge from the adjacent, unmanured fields. The abundance of a similar nitrifier/denitrifier and anammox microbial community in all three samples appears to mirror the total DOC and nitrate, supporting the notion that bioremediation of nitrate and DOC scales with nutrient abundance, both through direct nutrition and through community metabolism. With increased sampling, observed differences in microbial communities may aid in forensic “fingerprinting” approaches to detect sources of nitrate in groundwater. The metagenomes also indicate a potential concern: the same organisms that remediate the nitrogen also produce bioactive secondary metabolites that pose potential health risks and are more difficult and expensive to remove from drinking water. Thus, as groundwater becomes a scarcer and more valuable resource, quantifying the downstream risks of organic manure fertilizer contamination in groundwater becomes a more important priority. There has been speculation about how slow-growing anammox bacteria can maintain a competitive advantage over faster-growing bacteria. The high abundance of secondary metabolite gene clusters in their genomes may give us a clue.

Our analysis annotated a diverse array of these gene clusters as various antimicrobials, which could help the slow-growing anammox cells maintain their dominance in the community. Groundwater microbiomes are unique communities, and their metagenomes have not been extensively mined for new biosynthesis pathways. Using antiSMASH, we computationally identified many biosynthetic gene clusters that could produce pharmacologically interesting compounds, such as butyrolactone and antibiotics. We suggest that the combination of this pharmacological diversity and the unique cell biology of anammox bacteria could make them a fruitful resource for drug discovery. While short-read metagenome data can potentially provide insights into the taxonomic identities of organisms, we found greatly improved taxonomic and functional pathway inference by using partial assembly of the short reads. For instance, while MetaPhlAn analysis gave us a good depiction of the taxonomic similarity between samples, the accuracy of assignments was not sufficient to guide the choice of reference genomes for assembly of the whole-metagenome deep sequencing reads, indicating that our particular samples have a taxonomic distribution that is poorly represented in the databases that MetaPhlAn uses. Assembly of 16S rDNA from short reads is known to be chimera-prone due to the high homology across the tree of life. Solely using EMIRGE to assemble 16S genes and then aligning to SILVA gave us a much more accurate depiction of the phylogenetic diversity in our samples. However, connecting the 16S taxonomy to the genomic bins was problematic. When we tried to link these genes to contigs in the bins using targeted assembly, we found that multiple 16S genes assembled to a given genomic bin. While we could make good guesses at which 16S gene belonged to which genomic bin, we could not make these links in an unbiased manner. Therefore, we have omitted them here.
While our analysis reveals only a fraction of the inherent long-tailed distribution of taxa that occur in the groundwater, because we are interested in the major factors shaping water chemistry, the most abundant taxa are the most important to sample. Thus, a sequencing depth of ~50 million PE 101 bp reads per sample is quite adequate for assessing the functional geochemistry of groundwater. However, as discussed earlier, a high amount of strain-level variation is present that our current methodologies can only address at a superficial level. We found evidence for strain-level variation in the anammox community both across samples and within bins. While making further distinctions between strains is beyond the scope of this paper, future investigations into the ecological factors that support anammox strain variation with apparently overlapping niches would help define the biology of this globally important denitrifying community. Here we find evidence that at least three related Brocadiaceae strains can coexist. We find many highly diverse nano-prokaryote genomes, and the abundance of these genomes amounts to over 50% of the community in MW5. Because these organisms have been shown to lack major parts of central metabolism, this observation emphasizes the question posed by Brown et al.: to what extent do nano-prokaryotes exist as separate cellular entities versus spatially localized to, and metabolically dependent upon, other cells? Of note is the presence in the small genomes of many partial pathways that affect cellular decision-making. In particular, most of the small genomes encode homologs of flagellar chemotaxis components, which we speculate could serve to modify the cellular decision-making behavior of larger cells.
We note that the greater diversity of Chloroflexi, CPR, and DPANN taxa in MW5 versus MW6 and DOM corresponds to a greater presence of nitrate, sulfate, and DOC, which is contrary to macroecological theory and empirical results that demonstrate loss of diversity with increased nutrients. Future studies could address whether these phylogenetic abundance patterns are directly tied to particular nutrients or are an indirect consequence of trophic community metabolism, which could aid in optimizing the ecology of wastewater treatment bioreactors.

Because of the employment opportunities and economic multipliers it creates, especially during the early stages of development, agriculture has long been at the center of discussions about poverty reduction and economic development.

The minimum number of years of coverage required to receive a full pension was also increased

When some form of compensation is not offered, the reform is almost certain to be defeated. Thus, the eco tax had little chance of success, given that farmers were not offered any compensation in exchange for this new cost being imposed on them. In June 2014, the Hollande government unveiled the final version of the eco tax plan, now called “truck tolls”. The new plan applied only to trucks weighing 3.5 tons or more and included just 4,000 kilometers of road, as against 15,000 kilometers in the original plan. In addition, all proposed roads in Brittany, the epicenter of the protests, were exempted from the tolls. Trucks carrying agricultural goods, milk collection vehicles, and circus-related traffic were also exempted. As a result of the transportation exemptions and significantly smaller area of coverage, the toll is expected to generate only a third of the revenue of the original plan. The French eco tax example shares much in common with CAP reform, particularly in the area of environmental policy. Proposed environmental policies in the CAP often mean that new costs will be imposed on farmers, who are forced to conform to stricter standards and modify their farming methods in some way. These attempted reforms are virtually always modified by farmers in one of two ways: by extracting a new or additional form of compensation for meeting these rules, or by compelling reformers to adopt exemptions, often so extensive that barely any farmers are subjected to the new rules.

In the case of the French eco tax, farmers followed the latter course: when faced with a tax that would have imposed new financial burdens on producers, they successfully compelled the government to completely exempt agriculture. The victory is all the more significant since these exemptions cost the government badly needed tax revenue at a time of austerity. The successful campaign against the eco tax highlights some of the new sources of power that farmers have developed. Organizations were one important source of power. The FNSEA demonstrated the ability to coordinate its membership and to rely on regional branches to place pressure on both national and local politicians. In the fight against this tax, the FNSEA deployed multiple tactics to exert influence on the policy making process, mobilizing members for public demonstrations while simultaneously lobbying local and national political officials. The protesting French farmers also benefited from a sympathetic public that did not begrudge the massive disruptions and disturbances caused by demonstrations and blockades. While French farmers were able to use their powerful organizations to avoid a new, uncompensated tax, the same cannot be said of other groups. At virtually the same time farmers were thwarting a new tax, a series of austerity-driven pension reforms went ahead. Unlike the case of the eco tax, protests did nothing to stop the reforms, and the policy changes were adopted despite widespread civil unrest. In 2010, then-president Nicolas Sarkozy proposed a series of reforms to the French pension system. The reforms included raising the retirement age from 60 to 62 along with increasing the age at which one qualifies for a full pension from 65 to 67.

In addition, the number of years of required social security contributions increased from 40.5 to 41.5 years. In response to the proposed reforms, nearly 3 million people took to the streets, with plane and train travel severely disrupted and other sectors of the economy virtually shut down as the major unions called for strikes. Fuel shortages were a perpetual problem during the protests, as dock workers went on strike, leaving petrol stranded at ports. In addition, schools, ports, and airports were blockaded by demonstrators. In this case, however, coordinated protest was not able to compel the government to roll back reforms. Just a few years later, in 2014, Sarkozy’s successor, François Hollande, enacted further reform to the French pension system. Contribution rates for both employers and employees were raised, a previously tax-exempt supplement for retirees who raised three or more children was made subject to taxation, and the number of years of required social security contributions was increased from 41.5 to 43 years. While France is generally viewed as farmer-friendly, the French case is not an outlier. Looking at other Western European countries, a similar pattern emerges. Pension cuts were imposed, while national discretionary agricultural spending remained virtually untouched. Indeed, across Europe, pensions were significantly reformed in the wake of the 2008 financial crisis, placing new financial burdens on the average worker. This contrast between pension policies and agricultural expenditure is all the more glaring when the broader context is taken into account: less than two percent of the population benefits from agricultural support policies, while all citizens are current or future pensioners. Current spending levels are not a good indicator of reform, since much pension spending is locked in by decisions made decades ago.
In the case of pensions, cuts are best identified by increases in the minimum retirement age or downward cost-of-living adjustments. Such reforms occurred in each of the four country cases, as summarized in Table 7.1. Germany reformed its pensions in 2007, just before the onset of the financial crisis, raising the retirement age from 65 to 67. In the UK, reforms raised the retirement age from 66 to 67.

New reforms also increased the minimum number of years of contributions to qualify for a full pension from 30 to 35 years. A 2013 Dutch pension reform raised the minimum retirement age to 65 for workers currently under the age of 55. While pensions were being cut across Europe, farmers were spared. At the EU level, in the first CAP reform after the financial crisis, spending on the CAP was not cut; instead, money was taken out of other areas in order to channel more support to farmers. Indeed, this reallocation of funds back into farming happened despite a stated objective of directing more money away from agriculture and into other objectives, like improving the provision of high-speed internet. Spending on farmers was also preserved at the domestic level. European national governments spend some money on agriculture outside the CAP. National financing of agriculture comes via three main avenues: top-ups of Pillar 1 direct income payments; cofinancing of Pillar 2 programs; and additional state aid payments to farmers by their national governments. Figure 7.1 tracks national agricultural expenditure as reported by the European Union in its annual statistical yearbook. Farmers in Japan have enjoyed great success in imposing their policy preferences due in part to their homogeneity and highly organized representative associations. Small farmers dominate the agricultural sector, which makes it easy for farmer associations to promulgate a single, coherent message. In addition, Japanese farmers are represented by a strong union that is well organized nationally, regionally, and locally. Finally, unlike in Europe and the United States, there is little if any pressure from sectoral organizations. The main farming organization in Japan is Japan Agriculture, referred to as JA or Nōkyō. The JA is a three-tiered organization, with national, prefectural, and local-level cooperative groups.
The JA commands near-universal membership of the Japanese farming community, in large part due to the services and benefits it offers. It claims to have nearly 10 million members. Its main businesses are banking, insurance, agricultural retail and wholesaling, and the supply of farming materials. In addition to these benefits and services, which are not uncommon among agricultural cooperatives, the JA’s scope of business includes real estate, travel agencies, supermarkets, and even funeral homes. Essentially, “within the villages, the JA is a one-stop service. Farmers and everyone else in the village use JA services”. An LDP politician explained that the JA has far-reaching influence and is a cornerstone of rural society, with even non-farmers depending on the JA for services: “No other organizations in Japan are like the JA with so much local organization and influence. The JA is crucial in local community because of the infrastructure it provides. As a result, even non-farmers in rural areas need and depend upon the JA”. Ultimately, this wide range of services means that the JA can forge a relationship with farmers and the broader rural community that extends beyond just agriculture. Indeed, the JA can assist rural communities in all their needs, even those that come after death.

Along with high membership levels, much of the JA’s power derives from the fact that it has been in an official corporatist relationship with the state since it was formally created via legislation in 1947. This close relationship with the state has been quite beneficial for the JA, with the government at times heavily regulating and protecting the JA’s banking and insurance businesses, even going so far as to bail out JA banking multiple times, both after the 1980s economic bubble burst and again in 2008. For example, Norin-Chukin, a major agricultural cooperative bank, had invested extensively in real estate during the 1980s boom. When the bubble burst and the real estate market collapsed, JA-affiliated banks, Norin-Chukin chief among them, sustained heavy losses. As a result of political lobbying, the JA was able to reach an agreement under which it was only responsible for ¥530 billion out of a total of ¥5.5 trillion in losses. The state has also granted the JA exceptional status in antitrust law, which has afforded the JA monopolies on the supply of agricultural inputs to farmers. Further exceptions are made for the insurance wing of the JA, “which is allowed to sell multiple kinds of insurance whereas other firms are traditionally limited to providing only one type of insurance”. As these examples suggest, farmers and the JA have been quite successful in their efforts to influence agricultural policy making. An important area of success for Japanese farmers has been in shaping Japan’s trade negotiations, pressing for protectionism even when other groups seek greater trade liberalization. In these negotiations, Japanese agriculture is able to impose its preferences despite pressure from the Japanese business lobby, Keidanren, which stands to gain far more from liberalization than agriculture would ever lose. These victories for Japanese farmers have come at both the GATT/WTO and in Japan’s bilateral trade agreements.
The GATT Uruguay Round sought to reduce, if not eliminate, agricultural subsidies and remove tariffs and trade barriers in an effort to liberalize agricultural trade. In these negotiations, Japan’s position was largely defensive and was grounded in a desire to make as few concessions as possible. Its objectives were shaped primarily by the special position of rice producers and also by the overall high level of protection of agriculture. The LDP, whose political position was vulnerable at the time, promised farmers that no amount of foreign rice would be allowed to enter the domestic market. Fundamental incompatibility between GATT objectives and the policy preferences of major negotiating parties, including Japan and the European Community, resulted in the round grinding to a halt. In the end, although reducing tariffs was a major goal of the negotiations, a modification was negotiated specifically for Japan to allow it to delay tariffication of rice in exchange for accepting more imports of agricultural products, but only in sectors that were unimportant to Japanese agriculture, such as dairy production. In addition, farmer subsidies were protected, despite the GATT UR goals of eliminating them. By the end of the GATT UR negotiations, Japanese farmers walked away with an agreement that protected their core commodities and allowed them to largely avoid the removal of tariffs for key products, while also maintaining a system of income support for farmers. Farmers have seen similar success in Japan’s bilateral trade negotiations. In September of 2003, Japan was in the final stages of a free trade agreement with Mexico, which had been delayed by agricultural opposition. Frustrated with the delays, Prime Minister Junichiro Koizumi ordered his trade negotiators to “get it done”. In the end, a tripartite coalition of agricultural representatives was able to extract considerable concessions for agriculture that finally allowed the agreement to move forward.
The concessions included a reduction in the level of tariffs that had to be removed and special protection arrangements for “politically sensitive” commodities including pork, beef, chicken, oranges, and orange juice. Although this free trade agreement was concluded with Mexico, agriculture continued to block any progress on other free trade agreements Japanese officials desired at the time with the Philippines, Thailand, and South Korea.

Mandatory cross-compliance could also attenuate the image of the farmer as a polluter

As with Fischler’s strategy in articulating agricultural policy reform, keeping the proposals secret allows the reformers to conduct research and compile data and evidence to justify the proposed changes. These data, on issues such as expected savings, distribution of benefits across groups, or overall change in total support, allowed reformers to back their proposals with evidence, while those who opposed the initiatives were caught flat-footed. Finally, secrecy prevents welfare state beneficiaries from marshalling opposition to reform before the proposals can be fully presented and explained. The plan developed by Fischler and his associates contained three core elements: decoupling of income payments, modulation, and cross-compliance. The first element called for the full decoupling of direct supports to farmers. Payments are coupled when the amount of money a farmer receives depends on how much he or she produces. Coupled payments were the original backbone of the CAP. When the CAP was created, Europe was struggling through its post-war recovery and food shortages were still a concern. Incentivizing production was essential for overcoming these challenges. Over time, production-based payments got out of hand, resulting in the milk lakes and butter mountains that plagued the CAP in the late 1980s and early 1990s. As previous chapters explain, Ray MacSharry was able to take major steps toward managing the surplus problem by implementing, among other reforms, a partial decoupling of payments from production. In the 2003 reform, Fischler sought to complete the work that MacSharry had started and completely decouple payments from production. Full decoupling was also a crucial and necessary step toward strengthening the EU’s bargaining position in the next round of WTO discussions. Coupled payments are market-distorting and thus an object of particular ire within the WTO.
In addition, coupled payments were exacerbating problems that Fischler feared would diminish public support for the CAP.

Production-based payments incentivized farmers to produce at all costs, with no concern for resulting damage to the land. Production-based payments also skewed the distribution of support: the largest farmers were able to produce the most, ensuring that they got the most money. Finally, production-based subsidies, particularly in times of surplus, were wasteful and increasingly costly. Prices remained high for the consumer, yet expensive, massive stockpiles existed that the EU had to spend a considerable amount of money to buy, store, and dump. By fully decoupling the payment scheme, Fischler would be able to address two of the biggest challenges facing the CAP: compliance with WTO rules and the persistence of unequal and environmentally harmful CAP policies. Decoupling would allow CAP payments to be classified in the WTO’s green box. The EU would thus be able to offer a key concession of sorts in the Doha Round negotiations by moving its agricultural subsidies into the box that is least trade-distorting. As a result, the EU would be in a position to press its interests across all sectors, as opposed to being targeted by other countries for having an agricultural policy that did not comply with WTO rules and regulations. Decoupling would also reduce the gap in support across the farming community and diminish incentives for environmentally damaging farming practices. Payments tied to production had resulted in a dramatic disparity in how income support funds were distributed, with less than one fifth of farmers receiving four fifths of the support. The largest farmers continued to produce more and more, widening the gap between themselves and smaller or less productive farmers. Moreover, production-based payments incentivize farmers to produce as much as possible, no matter the costs or consequences for the environment.
When these payments are decoupled, the income gap between the most productive and the rest can be contained, and the environment is spared the harmful effects of farmers who attempt to grow as much as possible. Most farmers would not stand to lose much in this change from coupled to decoupled payments.

The Single Farm Payment (SFP), calculated with reference to the size of holding and historical yields, would replace the coupled payment system. Specifically, the amount of aid received between 2000 and 2002 would be divided by the number of hectares actively farmed during that reference period. The resulting figure would be the farmer’s new income payment under the SFP. Under this system, farmers would also gain “complete farming flexibility”, allowing them to grow any crop they desired. Receipt of the full SFP would be subject to meeting stringent environmental, food safety, and animal welfare standards. In the end, decoupling was a way of paying farmers the same money, but from a new pot. This outcome is consistent with one of my core claims: that it is difficult if not impossible to cut support for farmers. The second component of the proposal was dynamic modulation. Modulation is a mechanism employed by the European Union whereby income payments are gradually reduced and the funds collected are distributed to support other initiatives. In other words, this program “modulates” or modifies and controls the flow of funds to farmers and uses the savings to increase spending on other programs or member states. The program was called dynamic because the redeployment of funds was not fixed but instead could respond to those areas most in need of additional financial support. The policy entailed not only a gradual reduction of income payments, but also the redeployment and distribution of the funds collected under the program. Most of the revenue collected would be retained by member states, but earmarked for rural development programs. The rest was to be redistributed to other member states in an effort to reduce existing disparities in the allocation of CAP support. Fischler’s dynamic modulation proposal entailed a progressive reduction in direct income payments, beginning with 3% in 2005 and increasing in 3% increments annually until it reached 20%.
Exemptions were to be made for farms that received less than €5,000 annually. Farms that were labor intensive, thus providing jobs in the local community, could have up to €8,000 exempted from dynamic modulation, at the member state’s discretion.
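As a rough illustration (not the Commission's actual formula), the modulation schedule and exemptions described above can be sketched in a few lines. Two details are assumptions made for the sake of the example: the €5,000 exemption is treated as a simple threshold, and the €8,000 labour-intensive allowance is treated as a franchise, so that only the amount above it is reduced:

```python
def modulation_rate(year, start_year=2005, step=0.03, cap=0.20):
    """Dynamic modulation rate: 3% in 2005, rising 3 points a year to a 20% cap."""
    if year < start_year:
        return 0.0
    return min(step * (year - start_year + 1), cap)

def modulated_payment(payment, year, labour_intensive=False):
    """Direct income payment after dynamic modulation.

    Assumptions for illustration only: farms receiving less than EUR 5,000
    are wholly exempt, and for labour-intensive farms only the amount above
    an EUR 8,000 franchise is subject to the reduction.
    """
    if payment < 5000.0:
        return payment  # small farms fully exempt
    franchise = 8000.0 if labour_intensive else 0.0
    reducible = max(payment - franchise, 0.0)
    return payment - reducible * modulation_rate(year)
```

Under these assumptions, a €10,000 payment in 2005 would lose 3% (€300), while a labour-intensive farm with the same payment would lose 3% of only €2,000 (€60).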

Though this program seemed to be cutting overall levels of spending, the money garnished from farmer income payments was not leaving the CAP but rather being redirected into other CAP programs. Member states would keep a portion of the money for rural development and environmental programs, while the rest would be redistributed among member states “on the basis of agricultural area, agricultural employment, and prosperity criteria to target specific rural needs”. Through this system of redistribution, and by garnishing the payments of the farmers who earned the most, dynamic modulation would contribute to achieving the twin goals of reducing the disparity in payments between large and smaller farmers and improving the distribution across member states. Dynamic modulation is an example of using the welfare state tactic of turning vice into virtue in the context of agricultural policy reform. Specifically, the dynamic modulation reform revised an existing program, reorienting it to operate more equitably. As with vice into virtue in the world of the social welfare state, an existing program that was operating inefficiently and inequitably was corrected through reform, rather than being eliminated entirely and replaced. Payments for all farmers above a certain threshold would be reduced, and the collected funds would be redeployed to other areas of need. This objective of reducing the disparity in payment levels within and across countries was taken increasingly seriously, as inequality in the operation of CAP support payments was beginning to garner attention beyond EU technocrats. The Commission noted that dynamic modulation would “allow some redistribution from intensive cereal and livestock producing countries to poorer and more extensive/mountainous countries, bringing positive environmental and cohesion effects”.
The redirection of funds from income payments to rural development programs was also a tangible way for EU officials to signal a stronger commitment to the CAP’s social and environmental objectives. These social and environmental objectives had been identified by the public via Eurobarometer surveys as both the most important objectives of the CAP and areas where the CAP was failing to meet existing expectations. Also included in the dynamic modulation package was a proposal to cap the amount of direct aid any individual farmer could receive at €300,000 a year. This proposal was motivated by the desire to prevent large farms from receiving what many considered to be exorbitant sums of money. Specifically, it would address public concerns over the inequality in the operation of CAP payments. The payment cap was also intended to help correct the problem of an inequitable distribution of support within and across countries. This limit would reduce the overall gap between the largest and smallest recipients. In addition, it would begin to correct for payment imbalances among member states, as most of the farmers who would be subjected to the income cap were concentrated in a few member states. The inclusion of a cap on income payments is another example of CAP reformers employing the vice into virtue technique, which has been similarly used by welfare state reformers to correct welfare programs that are operating inefficiently or producing unequal outcomes. The third and final reform was mandatory cross-compliance. In Agenda 2000, cross-compliance was adopted only in voluntary form. In the MTR, Fischler sought to make this program compulsory. Under cross-compliance, direct payments could be made conditional on achieving certain environmental goals. The income payment could, for example, be reduced if a farmer failed to comply with a given environmental rule.

Farmers who met the standards would receive the full amount of direct payments for which they were eligible, but would not receive a bonus for full compliance. Farmers who received direct payments would be required to maintain all of their land in good agricultural and environmental condition; if not, payment reductions were to be applied as a sanction. The inclusion of cross-compliance in Agenda 2000 positioned Fischler to make further reforms in the MTR, because he had already softened the ground in the previous agreement. As Fischler noted, “all the components of cross compliance [in the MTR proposal] were things that were already in place since Agenda 2000, but the member states had been responsible for implementing them. However, most members didn’t do it, or did a lousy job of implementing them”. Leading Commission officials argued that the member states had already approved and accepted the concept of cross-compliance, so there was no reason that it should be rejected during the MTR. In reality, the vast majority of member states had chosen not to implement any of the standards or rules because cross-compliance was an optional program. Still, Fischler was able to put them on the defensive for “failing” to implement Agenda 2000. As Fischler explained, “farmer ministers were put in a hard spot because now they had to account for failure to implement all of these measures in the past. They couldn’t oppose the concept of cross-compliance because they had already agreed to it, so they made the usual complaint that it would hurt farmers, but that’s always their line”. Fischler saw cross-compliance as a legitimacy-boosting technique because it tied eligibility for support to compliance with environmental conditions and standards.
Cross-compliance would help address public criticism of the CAP by strengthening the greening component and further developing the image of the farmer as a provider of not just food, but broader public goods and services. Fischler’s proposal for the MTR was sent to the College of Commissioners for formal discussion, revision, and approval. The proposal was well received by the Commission overall. Fischler was respected within the Commission as an agricultural expert and a reformer. The way for his proposal was further smoothed thanks to an October 2002 agreement engineered by Chirac and Schröder at the Brussels European Council meeting, which guaranteed that the agricultural budget for direct-market supports would not be cut before 2013, when a new budget would be drafted. Even though Commission President Romano Prodi had previously expressed a desire to cut the CAP by up to 30%, the Chirac-Schröder deal prevented him from doing so, despite the fact that he was supported by other Commissioners who hoped to use these CAP cuts to direct more support into their own portfolios.