Utilizing TLCs may result in greater clinical flexibility and effectiveness and less role strain

The kinase activity of both CK1δ and CK1ε is inhibited by autophosphorylation of an intrinsically disordered inhibitory tail that follows the kinase domain, a feature that sets these isoforms apart from other members of the CK1 family. Because the full-length kinase autophosphorylates and slowly inactivates itself in vitro, most biochemical studies exploring the activity of CK1δ/ε on clock proteins utilize the truncated, constitutively active protein, although new studies are finally beginning to explore the consequences of autophosphorylation in more detail. However, not much is known yet about how the phosphorylated tail interacts with the kinase domain to inhibit its activity; several autophosphorylation sites were previously identified on CK1ε at S323, T325, T334, T337, S368, S405, S407 and S408 using limited proteolysis and phosphatase treatment or through Ser/Thr to Ala substitutions in vitro, although it is currently not known which of these sites are important for kinase regulation of the clock. One potential interface between the kinase domain and autoinhibitory tail has been mapped through cross-linking and mass spectrometry, suggesting that the tail might dock some phosphorylated Ser/Thr residues close to the anion binding sites near the active site. This study also provided evidence that the tail may be able to regulate substrate binding, and therefore control specificity of the kinase, by comparing the activity of CK1α, a tailless kinase, with CK1ε on two substrates, PER2 and Disheveled. Understanding the role of tail autophosphorylation and its regulation of kinase activity is sure to shed light on control of circadian rhythms by CK1δ/ε. Some sites within the C-terminal tail of CK1δ and/or CK1ε are known to be phosphorylated by other kinases, such as AMPK, PKA, Chk1, PKCα, and cyclin-dependent kinases.

PKA phosphorylates S370 in CK1δ to reduce its kinase activity; consistent with this, mutation of S370 to alanine increases CK1-dependent ectopic dorsal axis formation in Xenopus laevis. Chk1 and PKCα also reduce CK1δ kinase activity through phosphorylation of overlapping sites at S328, T329, S331, S370, and T397 in the tail of rat CK1δ. Phosphorylation of CK1δ T347 influences its activity on PER2 in cells; this site is phosphorylated by proline-directed cyclin-dependent kinases rather than by autophosphorylation. CDK2 was also found to reduce the activity of rat CK1δ in vitro through phosphorylation of additional sites at T329, S331, T344, S356, S361, and T397. Unlike the other kinases listed here, phosphorylation of S389 on CK1ε by AMPK increases the apparent kinase activity on the PER2 phosphodegron in cells; consequently, activation of AMPK with metformin increased the degradation of PER2. The phosphorylation of CK1δ and/or CK1ε tails by these other kinases therefore has the potential to link its regulation of PER2 and the circadian clock to metabolism, the DNA damage response, and the cell cycle. There is now strong evidence that the C-terminus of CK1δ plays a direct role in regulation of circadian period. Recently, tissue-specific methylation of CK1δ was shown to regulate alternative splicing of the kinase into two unique isoforms, δ1 and δ2, that differ only by the extreme C-terminal 15 residues. Remarkably, expression of the canonical δ1 isoform decreases PER2 half-life and circadian period, while the slightly shorter δ2 isoform increases PER2 half-life and circadian period. Further biochemical studies revealed that these two variants exhibit differential activity on the stabilizing priming site of the PER2 FASP region: the δ1 isoform has a lower activity than δ2, whose C-terminus also closely resembles that of the ε isoform.

These data suggest that a very short region at the C-terminal end of the tail could play a major role in regulation of CK1δ and the PER2 phosphoswitch to control circadian period. This is bolstered by the discovery of a missense mutation in the same region of the CK1ε tail, S408N, that has been associated in humans with protection from Delayed Sleep Phase Syndrome and Non-24-hr Sleep-Wake Syndrome. Further studies will help to reveal the biochemical mechanisms behind regulation of kinase activity and substrate selectivity by the C-terminal tail of CK1δ and CK1ε to determine how they play into regulation of circadian rhythms. The central thesis of this article is very simple: Health professionals have significantly underestimated the importance of lifestyle for mental health. More specifically, mental health professionals have underestimated the importance of unhealthy lifestyle factors in contributing to multiple psychopathologies, as well as the importance of healthy lifestyles for treating multiple psychopathologies, for fostering psychological and social well-being, and for preserving and optimizing cognitive capacities and neural functions. Greater awareness of lifestyle factors offers major advantages, yet few health professionals are likely to master the multiple burgeoning literatures. This article therefore reviews research on the effects and effectiveness of eight major therapeutic lifestyle changes (TLCs); the principles, advantages, and challenges involved in implementing them; the factors hindering their use; and the many implications of contemporary lifestyles for both individuals and society. Lifestyle factors can be potent in determining both physical and mental health. In modern affluent societies, the diseases exacting the greatest mortality and morbidity—such as cardiovascular disorders, obesity, diabetes, and cancer—are now strongly determined by lifestyle.
Differences in just four lifestyle factors—smoking, physical activity, alcohol intake, and diet—exert a major impact on mortality, and “even small differences in lifestyle can make a major difference in health status.”

TLCs can be potent. They can ameliorate prostate cancer, reverse coronary arteriosclerosis, and be as effective as psychotherapy or medication for treating some depressive disorders. Consequently, there is growing awareness that contemporary medicine needs to focus on lifestyle changes for primary prevention, for secondary intervention, and to empower patients’ self-management of their own health. Mental health professionals and their patients have much to gain from similar shifts. Yet TLCs are insufficiently appreciated, taught, or utilized. In fact, in some ways, mental health professionals have moved away from effective lifestyle interventions. Economic and institutional pressures are pushing therapists of all persuasions toward briefer, more stylized interventions. Psychiatrists in particular are being pressured to offer less psychotherapy, prescribe more drugs, and focus on 15-minute “med checks,” a pressure that psychologists who obtain prescription privileges will doubtless also face. As a result, patients suffer from inattention to complex psychodynamic and social factors, and therapists can suffer painful cognitive dissonance and role strain when they shortchange patients who need more than what is allowed by mandated brief treatments. A further cost of current therapeutic trends is the underestimation and underutilization of lifestyle treatments despite considerable evidence of their effectiveness. In fact, the need for lifestyle treatments is growing, because unhealthy behaviors such as overeating and lack of exercise are increasing to such an extent that the World Health Organization warned that “an escalating global epidemic of overweight and obesity—‘globesity’—is taking over many parts of the world” and exacting enormous medical, psychological, social, and economic costs. Lifestyle changes can offer significant therapeutic advantages for patients, therapists, and societies.
First, TLCs can be both effective and cost-effective, and some—such as exercise for depression and the use of fish oils to prevent psychosis in high-risk youth—may be as effective as pharmacotherapy or psychotherapy. TLCs can be used alone or adjunctively and are often accessible and affordable; many can be introduced quickly, sometimes even in the first session. TLCs have few negatives. Unlike both psychotherapy and pharmacotherapy, they are free of stigma and can even confer social benefits and social esteem. In addition, they have fewer side effects and complications than medications.

TLCs offer significant secondary benefits to patients, such as improvements in physical health, self-esteem, and quality of life. Furthermore, some TLCs—for example, exercise, diet, and meditation—may also be neuroprotective and reduce the risk of subsequent age-related cognitive losses and corresponding neural shrinkage. Many TLCs—such as meditation, relaxation, recreation, and time in nature—are enjoyable and may therefore become healthy self-sustaining habits. Many TLCs not only reduce psychopathology but can also enhance health and well-being. For example, meditation can be therapeutic for multiple psychological and psychosomatic disorders. Yet it can also enhance psychological well-being and maturity in normal populations and can be used to cultivate qualities that are of particular value to clinicians, such as calmness, empathy, and self-actualization. Knowledge of TLCs can benefit clinicians in several ways. It will be particularly interesting to see the extent to which clinicians exposed to information about TLCs adopt healthier lifestyles themselves and, if so, how adopting them affects their professional practice, because there is already evidence that therapists with healthy lifestyles are more likely to suggest lifestyle changes to their patients. There are also entrepreneurial opportunities. Clinics are needed that offer systematic lifestyle programs for mental health that are similar to current programs for reversing coronary artery disease. For societies, TLCs may offer significant community and economic advantages. Economic benefits can accrue from reducing the costs of lifestyle-related disorders such as obesity, which alone accounts for over $100 billion in costs in the United States each year. Community benefits can occur both directly through enhanced personal relationships and service and indirectly through social networks.
Recent research demonstrates that healthy behaviors and happiness can spread extensively through social networks, even through three degrees of separation to, for example, the friends of one’s friends’ friends. Encouraging TLCs in patients may therefore inspire similar healthy behaviors and greater well-being in their families, friends, and co-workers and thereby have far-reaching multiplier effects. These effects offer novel evidence for the public health benefits of mental health interventions in general and of TLCs in particular. So what lifestyle changes warrant consideration? Considerable research and clinical evidence support the following eight TLCs: exercise, nutrition and diet, time in nature, relationships, recreation, relaxation and stress management, religious and spiritual involvement, and contribution and service to others. Exercise offers physical benefits that extend over multiple body systems. It reduces the risk of multiple disorders, including cancer, and is therapeutic for physical disorders ranging from cardiovascular diseases to diabetes to prostate cancer. Exercise is also, as the Harvard Mental Health Letter concluded, “a healthful, inexpensive, and insufficiently used treatment for a variety of psychiatric disorders.” As with physical effects, exercise offers both preventive and therapeutic psychological benefits. In terms of prevention, both cross-sectional and prospective studies show that exercise can reduce the risk of depression as well as neurodegenerative disorders such as age-related cognitive decline, Alzheimer’s disease, and Parkinson’s disease. In terms of therapeutic benefits, responsive disorders include depression, anxiety, eating, addictive, and body dysmorphic disorders. Exercise also reduces chronic pain, age-related cognitive decline, the severity of Alzheimer’s disease, and some symptoms of schizophrenia. The most studied disorder in relation to exercise to date is mild to moderate depression.
Cross-sectional, prospective, and meta-analytic studies suggest that exercise is both preventive and therapeutic, and in terms of therapeutic benefits it compares favorably with pharmacotherapy and psychotherapy. Both aerobic exercise and nonaerobic weight training are effective for both short-term interventions and long-term maintenance, and there appears to be a dose–response relationship, with higher-intensity workouts being more effective. Exercise is a valuable adjunct to pharmacotherapy, and special populations such as postpartum mothers, the elderly, and perhaps children appear to benefit. Possible mediating factors that contribute to these antidepressant effects span physiological, psychological, and neural domains. Proposed physiological mediators include changes in serotonin metabolism, improved sleep, and endorphin release with its consequent “runner’s high.” Psychological factors include enhanced self-efficacy and self-esteem, interruption of negative thoughts and rumination, and perhaps the breakdown of muscular armor, the chronic psychosomatic muscle tension patterns that express emotional conflicts and are a focus of somatic therapies. Neural factors are especially intriguing. Exercise increases brain volume, vascularization, blood flow, and functional measures. Animal studies suggest that exercise-induced changes in the hippocampus include increased neurogenesis, synaptogenesis, neuronal preservation, interneuronal connections, and BDNF. Given these neural effects, it is not surprising that exercise can also confer significant cognitive benefits. These range from enhancing academic performance in youth, to aiding stroke recovery, to reducing age-related memory loss and the risk of both Alzheimer’s and non-Alzheimer’s dementia in the elderly. Multiple studies show that exercise is a valuable therapy for Alzheimer’s patients that can improve intellectual capacities, social functions, emotional states, and caregiver distress.

The digest was considered semi-specific and up to 3 missed cleavages were allowed

Similar results were observed for EGFR degradation, with no major proteome-wide changes occurring and EGFR being virtually the only protein significantly downregulated by CXCL12-Ctx treatment compared to control in both the surface-enriched and whole cell proteomics. Interestingly, a previously published proteomics dataset of LYTAC-mediated EGFR degradation identified additional proteins significantly up- or down-regulated following LYTAC treatment. Comparison with our experiment in the same cell line suggests that KineTACs are more selective in degrading EGFR. As there is large overlap in the peptide IDs observed between the two datasets, the greater selectivity observed is not due to a lack of sensitivity in the KineTAC proteomics experiment. CXCR4 and CXCR7 peptide IDs were not altered in the surface-enriched sample, and CXCR4 IDs were also unchanged in the whole cell sample, indicating that treatment with KineTAC does not significantly impact CXCR4 or CXCR7 levels. Furthermore, protein levels of GRB2 and SHC1, which are known interacting partners of EGFR, were also not significantly changed. Together, these data demonstrate the exquisite selectivity of KineTACs for degrading only the target protein without inducing unwanted, off-target proteome-wide changes. To elucidate whether KineTAC-mediated degradation could impart functional cellular consequences, cell viability of HER2-expressing cells was measured following treatment with CXCL12-Tras. MDA-MB-175VII breast cancer cells are reported to be sensitive to trastuzumab treatment, and as such serve as an ideal model to test the functional consequence of degrading HER2 compared to inhibition with trastuzumab IgG. To this end, cells were treated with either CXCL12-Tras or trastuzumab IgG for 5 days, after which cell viability was determined using a modified MTT assay.
Reduction in cell viability was observed at higher concentrations of CXCL12-Tras and was significantly greater than with trastuzumab IgG alone.

These data demonstrate that KineTAC-mediated degradation has functional consequences in reducing cancer cell viability in vitro and highlight that KineTACs could provide advantages over traditional antibody therapeutics, which bind but do not degrade. Finally, we asked whether KineTACs would have antibody clearance similar to that of IgGs in vivo. To this end, male nude mice were injected intravenously with 5, 10, or 15 mg/kg CXCL12-Tras, which is a typical dose range for antibody xenograft studies. Western blotting analysis of plasma antibody levels revealed that the KineTAC remained in plasma up to 10 days post-injection with a half-life of 8.7 days, which is comparable to the reported half-life of IgGs in mice. Given the high homology between human and mouse CXCL12, we tested whether the human CXCL12 isotype could be cross-reactive. Human CXCL12 isotype binding to the mouse cell lines MC38 and CT26, which endogenously express mouse CXCR7, was confirmed. Together, these results demonstrate that KineTACs have favorable stability and are not rapidly cleared despite cross-reactivity with mouse CXCR7 receptors. Since atezolizumab is also known to be cross-reactive, the ability of CXCL12-Atz to degrade mouse PD-L1 was tested in both MC38 and CT26. Indeed, CXCL12-Atz mediates near complete degradation of mouse PD-L1 in both cell lines. Thus, PD-L1 degradation may serve as an ideal mouse model to assay the efficacy of KineTACs in vivo. Having demonstrated the ability of KineTACs to mediate cell surface protein degradation, we next asked whether KineTACs could also be applied towards the degradation of soluble extracellular proteins. Soluble ligands, such as inflammatory cytokines and growth factors, have been recognized as an increasingly important therapeutic class.
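The 8.7-day plasma half-life above is the kind of value typically obtained by fitting the quantified band intensities to a single-exponential clearance model and converting the decay constant to t1/2 = ln(2)/k. A minimal SciPy sketch, using synthetic intensities at the study's sampling days rather than the actual blot data:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, c0, k):
    # Single-compartment clearance model: C(t) = C0 * exp(-k * t)
    return c0 * np.exp(-k * t)

def plasma_half_life(days, signal):
    # Fit the decay constant k, then convert to t1/2 = ln(2) / k
    (c0, k), _ = curve_fit(mono_exponential, days, signal, p0=(signal[0], 0.1))
    return np.log(2) / k

# Synthetic band intensities at the sampling days used in the study,
# generated with a true half-life of 8.7 days (illustrative, not raw data)
days = np.array([0.0, 3.0, 5.0, 7.0, 10.0])
signal = 100.0 * np.exp(-np.log(2) / 8.7 * days)
print(round(plasma_half_life(days, signal), 1))
```

With real, noisy intensities the same fit returns an estimated half-life with confidence bounds available from the covariance matrix that `curve_fit` also returns.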

Of these, vascular endothelial growth factor (VEGF) and tumor necrosis factor alpha (TNFα) represent the soluble ligands most targeted by antibody and small molecule drug candidates, highlighting their importance in disease. Thus, we chose VEGF and TNFα as ideal proof-of-concept targets to determine whether KineTACs could be expanded to degrading extracellular soluble ligands. First, we targeted VEGF by incorporating bevacizumab, an FDA-approved VEGF inhibitor, into the KineTAC scaffold. Next, HeLa cells were incubated with VEGF-647, or VEGF-647 and CXCL12-Beva, for 24 hr. Following treatment, flow cytometry analysis showed a robust increase in cellular fluorescence when VEGF-647 was co-incubated with CXCL12-Beva, but not with the bevacizumab isotype, which lacks the CXCL12 arm. To ensure that the increased cellular fluorescence was due to intracellular uptake of VEGF-647 and not surface binding, we determined the effect of an acid wash, which removes any cell surface binding, after the 24 hr incubation. We found that there was no significant difference in cellular fluorescence levels between acid- and normal-washed cells. These data suggest that KineTACs successfully mediate the intracellular uptake of extracellular VEGF. Similar to membrane protein degradation, KineTAC-mediated uptake of VEGF occurs in a time-dependent manner, with robust internalization occurring before 6 hrs and reaching steady state by 24 hrs. Furthermore, the levels of VEGF uptake are dependent on the KineTAC:ligand ratio and saturate at ratios greater than 1:1. We next tested the ability of CXCL12-Beva to promote uptake in other cell lines and found that these cells also significantly take up VEGF. Moreover, the extent of uptake is correlated with the transcript levels of CXCR7 in these cells. These data suggest that KineTACs directed against soluble ligands can promote broad tissue clearance of these targets as compared to glycan- or Fc-mediated clearance mechanisms.
To demonstrate the generalizable nature of the KineTAC platform for targeting soluble ligands, we next targeted TNFα by incorporating adalimumab, an FDA-approved TNFα inhibitor, into the KineTAC scaffold. Following 24 hr treatment of HeLa cells, a significant increase in cellular fluorescence was observed when TNFα-647 was co-incubated with CXCL12-Ada compared to the adalimumab isotype.

Consistent with the VEGF uptake experiments, acid wash did not alter the increase in cellular fluorescence observed, and uptake was dependent on the KineTAC:ligand ratio. Thus, KineTACs are generalizable in mediating the intracellular uptake of soluble ligands, significantly expanding the target scope of KineTAC-mediated targeted degradation. In summary, our data suggest that KineTACs are a versatile and modular targeted degradation platform that enables robust lysosomal degradation of both cell surface and extracellular proteins. We find that KineTAC-mediated degradation is driven by recruitment of both CXCR7 and the target protein, and that factors such as binding affinity, epitope, and construct design can affect efficiency. Other factors, such as signaling competence and pH dependency of the protein of interest, did not impact degradation for CXCL12-bearing KineTACs. These results provide valuable insights into how to engineer effective KineTACs going forward. Furthermore, we show that KineTACs operate in a time-, lysosome-, and CXCR7-dependent manner and are exquisitely selective in degrading target proteins with minimal off-target effects. Initial experiments with an alternative cytokine, CXCL11, highlight the versatility of the KineTAC platform and the exciting possibility of using various cytokines and cytokine receptors for targeted lysosomal degradation. KineTACs are built from simple genetically encoded parts that are readily accessible from the genome and published human antibody sequences. Given the differences in selectivity and target scope that we and others have observed between degradation pathways, there is an ongoing need to co-opt novel receptors for lysosomal degradation, such as CXCR7, that may offer advantages in terms of tissue selectivity or degradation efficiency.
Thus, we anticipate ongoing work on the KineTAC platform to offer new insights into which receptors can be hijacked and to greatly expand targeted protein degradation to the extracellular proteome for both therapeutic and research applications. SILAC proteomics data were analyzed using PEAKS Online. For all samples, searches were performed with a precursor mass error tolerance of 20 ppm and a fragment mass error tolerance of 0.03 Da. For whole cell proteome data, the reviewed SwissProt database for the human proteome was used. For surface-enriched samples, a database composed of SwissProt proteins annotated “membrane” but not “nuclear” or “mitochondrial” was used to ensure accurate unique peptide identification for surface proteins, as previously described. Carbamidomethylation of cysteine was used as a fixed modification, whereas the isotopic labels for arginine and lysine, acetylation of the N-terminus, oxidation of methionine, and deamidation of asparagine and glutamine were set as variable modifications. Only PSMs and protein groups with an FDR of less than 1% were considered for downstream analysis. SILAC analysis was performed using the forward and reverse samples, and at least 2 labels for the IDs and features were required. Proteins showing a >2-fold change from the PBS control with a significance of P < 0.01 were considered significantly changed. Cell viability assays were performed using a modified MTT assay. In brief, on day 0, 15,000 MDA-MB-175VII cells were plated in each well of a 96-well plate. On day 1, bispecifics or control antibodies were added in a dilution series. Cells were incubated at 37 °C under 5% CO2 for 5 days. On day 6, 40 µL of 2.5 mg/mL thiazolyl blue tetrazolium bromide was added to each well and incubated at 37 °C under 5% CO2 for 4 hrs. 100 µL of 10% SDS in 0.01 M HCl was then added to lyse cells and release the MTT product.
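The SILAC hit-calling criteria described above (a >2-fold change versus the PBS control at P < 0.01) amount to a simple filter over per-protein ratios and p-values. A minimal sketch with hypothetical protein entries, not values from the actual dataset:

```python
def significant_hits(proteins, fold_cutoff=2.0, p_cutoff=0.01):
    # Keep proteins with a >2-fold change versus the PBS control and
    # P < 0.01, mirroring the thresholds described in the text.
    # Each entry is (name, ratio_vs_control, p_value).
    hits = []
    for name, ratio, p in proteins:
        if p < p_cutoff and (ratio > fold_cutoff or ratio < 1.0 / fold_cutoff):
            hits.append(name)
    return hits

# Hypothetical treated/PBS ratios, for illustration only
data = [
    ("EGFR", 0.18, 1e-5),   # strongly down-regulated -> significant
    ("GRB2", 0.95, 0.40),   # unchanged -> filtered out
    ("CXCR4", 1.10, 0.20),  # unchanged -> filtered out
]
print(significant_hits(data))  # prints ['EGFR']
```

Testing both directions of the ratio (>2 and <0.5) captures up- and down-regulated proteins symmetrically, which is why a selective degrader should leave this list nearly empty apart from the target.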

After 4 hrs at room temperature, absorbance at 600 nm was quantified using an Infinite M200 PRO plate reader. Data were plotted using GraphPad Prism software, and curves were generated using non-linear regression with sigmoidal 4PL parameters. Male nude nu/nu mice were treated with 5, 10, or 15 mg/kg CXCL12-Tras via intravenous injection. Blood was collected from the lateral saphenous vein using EDTA capillary tubes at day 0 prior to intravenous injection and at days 3, 5, 7, and 10 post-injection. Plasma was separated after centrifugation at 700 × g at 4 °C for 15 min. To determine the levels of CXCL12-Tras, 1 µL of plasma was diluted into 30 µL of NuPAGE LDS sample buffer, loaded onto a 4-12% Bis-Tris gel, and run at 200 V for 37 min. The gel was incubated in 20% ethanol for 10 min and transferred onto a polyvinylidene difluoride membrane. The membrane was washed with water followed by incubation for 5 min with REVERT 700 Total Protein Stain. The blot was then washed twice with REVERT 700 Wash Solution and imaged using an Odyssey CLx Imager. The membrane was then blocked in PBS with 0.1% Tween-20 + 5% bovine serum albumin for 30 min at room temperature with gentle shaking. Membranes were incubated overnight with 800 CW goat anti-human IgG at 4 °C with gentle shaking in PBS + 0.2% Tween-20 + 5% BSA. Membranes were washed four times with tris-buffered saline + 0.1% Tween-20 and then washed with PBS. Membranes were imaged using an Odyssey CLx Imager. Band intensities were quantified using Image Studio software. The concept of targeted degradation has emerged in the last two decades as an attractive alternative to conventional inhibition. Small molecule inhibitors primarily work through occupancy-driven pharmacology, resulting in temporary inhibition in which the therapeutic effect is largely dependent on high potency.
On the other hand, PROteolysis TArgeting Chimeras (PROTACs) utilize event-driven pharmacology to degrade proteins in a catalytic manner. Traditionally, PROTACs are heterobifunctional small molecules composed of a ligand binding a protein of interest chemically linked to a ligand binding an E3 ligase. The recruitment of an E3 ligase enables the transfer of ubiquitin onto the protein of interest, which is subsequently polyubiquitinated and recognized by the proteasome for degradation. In many cases, PROTACs have proven more efficacious than the small molecule inhibitors alone, and several candidate PROTACs have progressed to clinical trials for treating human cancers and other diseases. Despite these successes, small molecule PROTACs are largely limited to targeting intracellular proteins. Given this challenge, there is a need for novel technologies that expand the scope of targeted degradation to membrane proteins. Recently, our lab developed a method termed antibody-based PROTACs (AbTACs), which utilize bispecific antibody scaffolds to bring membrane-bound E3 ligases into proximity with a membrane protein of interest for targeted degradation. Thus far, AbTACs have shown success in using bispecific IgGs to recruit the E3 ligase RNF4 to programmed death ligand 1 for efficient lysosomal degradation. These data suggest that it is possible to use bispecific antibodies to degrade membrane proteins for which antibodies already exist or that have characteristics amenable to recombinant antibody selection strategies. However, the ability to degrade multipass membrane proteins, such as GPCRs, remains challenging because few extracellular-binding antibodies exist for this target class. Here, we describe a novel approach to expand the scope of AbTACs to targeting multi-pass membrane proteins. This approach, termed antibody-drug conjugate PROTACs, comprises an antibody targeting a cell surface E3 ligase chemically conjugated to a small molecule that specifically binds the protein of interest.
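The sigmoidal 4PL regression used for the viability curves in the methods above can be sketched with SciPy. The model form and starting guesses here are a generic illustration of four-parameter logistic fitting, not the exact Prism settings, and the dose-response numbers are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # Sigmoidal 4PL dose-response model (decreasing curve when hill > 0)
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

def fit_viability(conc, viability):
    # Starting guesses from the data range; returns (bottom, top, EC50, hill)
    p0 = (viability.min(), viability.max(), float(np.median(conc)), 1.0)
    params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
    return params

# Hypothetical dose-response: viability falls from ~100% to 40% around 10 nM
conc = np.array([0.1, 1.0, 3.0, 10.0, 30.0, 100.0])
viability = four_pl(conc, 40.0, 100.0, 10.0, 1.0)
bottom, top, ec50, hill = fit_viability(conc, viability)
print(round(float(ec50), 1))
```

The fitted EC50 summarizes the potency of the degrader in the viability assay; the bottom plateau captures the maximal loss of viability at saturating concentrations.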

VLE identified focal areas of concern in 77% of BE procedures

All patients underwent standard of care endoscopy, including WLE in accordance with their institution’s standard procedures, followed by VLE examination. Sample VLE features relevant to normal and abnormal structures in the esophagus were used as a general guideline to interpret VLE images in the study. Investigators were trained on the use of the technology and supported as needed onsite and offsite by technical experts from the sponsor throughout the study. VLE scans were registered longitudinally and rotationally with the WLE image of the esophagus. When a lesion was identified on VLE, the investigator would triangulate the location of the lesion by recording the distance and clock face registered with the WLE orientation. This information was then used to guide the investigator to acquire the tissue using WLE. At the time of the study, this was the method available to target a tissue site for sampling. Additional procedure details can be found in Supplementary Material A. Following VLE, each investigator performed any desired diagnostic or therapeutic actions based on their standard of care according to WLE and advanced imaging findings. The highest grade of disease on the pathology results was recorded for advanced imaging guided tissue acquisition, targeted endoscopic tissue acquisition, and random biopsies. VLE-guided tissue acquisition refers to the subgroup of advanced imaging guided tissue biopsy or resection specimens where only VLE imaging was used to identify the areas of interest. Investigators were given a questionnaire post procedure, and data were collected as to the clinical workflow and utility of the VLE images. The questions included whether VLE guided either their tissue sampling or therapeutic decisions for each patient, and whether VLE identified suspicious areas not seen on WLE or other advanced imaging modalities.

Descriptive statistics were used for quantitative analyses in the study. Because the vast majority of registry patients had suspected or confirmed BE, the investigators elected to focus the initial analysis on this group and to assess potential roles of VLE in BE management. Suspected BE refers to patients with no prior histologic confirmation of BE who had salmon-colored mucosa found on endoscopic examination with WLE. The analysis focused on the incremental diagnostic yield improvement of VLE as an adjunct modality on top of standard of care (SoC) practice. Procedures with confirmed neoplasia were included in the analysis. The procedures were divided into subgroups according to whether the tissue acquisition method was VLE targeted. Dysplasia diagnostic yields were calculated using the number of procedures in each subgroup and the total number of procedures in patients with previously diagnosed or suspected BE. Negative predictive value (NPV) analysis in patients with prior BE treatment evaluated the utility of VLE on top of the standard of care surveillance to predict when there is no dysplasia present. Procedures with negative endoscopy findings and negative VLE findings but with tissue acquisition performed were included in the analysis, and NPVs for both SoC and SoC + VLE were calculated. The primary evaluation focused on HGD and cancer since the recommended image interpretation criteria were validated for detecting BE-related neoplasia, and treatment is recommended for patients with neoplasia per existing guidelines. From August 2014 through April 2016, 1000 patients were enrolled across 18 trial sites. The majority of patients were male, with a mean age of 64 years. A total of 894 patients had suspected or confirmed BE at the time of enrollment, including 103 patients with suspected BE and 791 patients with prior histological confirmation.
Of the confirmed BE patients, 368 had BE with neoplasia, 170 had BE with low-grade dysplasia, 49 had BE indefinite for dysplasia, and 204 had nondysplastic BE.

A total of 56% of patients had undergone prior endoscopic or surgical interventions for BE, including RFA, cryotherapy, and EMR. Post-procedure questionnaires were completed for all procedures in patients with previously diagnosed or suspected BE. In over half of the procedures, investigators identified areas of concern not seen on either WLE or other advanced imaging modalities. Both VLE and endoscopic BE treatment were performed in 352 procedures. VLE guided the intervention in 52% of these procedures. In 40% of procedures, the depth or extent of disease identified on VLE aided the selection of a treatment modality. Neoplasia was confirmed on tissue sampling performed in 76 procedures within the cohort of patients with previously diagnosed or suspected BE. Among these procedures, VLE-guided tissue acquisition alone found neoplasia in 26 procedures, with an additional case where HGD on random forceps biopsy was upstaged to IMC on VLE-targeted sampling. Histology from these procedures included 16 HGD, 5 IMC, and 6 EAC. Thus, VLE-guided tissue acquisition as an adjunct to standard practice detected neoplasia in an additional 3% of the entire cohort of patients with previously diagnosed or suspected BE, and improved the diagnostic yield by at least 55%. Of the 894 BE patients, 393 had no prior history of esophageal therapy. Mean Prague classification scores for this cohort were C = 2.3 cm and M = 4.1 cm. In 199 of these treatment-naïve patients, VLE identified at least one focally suspicious area not appreciated during either WLE or other advanced imaging evaluation. Neoplasia was confirmed on histology in 24 procedures. In a subset of these procedures, VLE alone identified neoplasia, as all random biopsies for these patients were negative. Additionally, one case where HGD was found on random forceps biopsy was upstaged to IMC on VLE-targeted sampling. In this group, VLE-guided tissue acquisition increased neoplasia detection by 700%.
For these untreated BE patients, VLE-guided tissue acquisition as an adjunct to standard practice detected neoplasia in an additional 5.3% of procedures.
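The incremental-yield arithmetic reported above can be reproduced directly from the registry counts (a sketch; the counts come from the text, and grouping the single upstaged case with the 26 VLE-only detections is an assumption):

```python
# Incremental diagnostic yield of VLE-guided sampling, recomputed from the
# registry counts quoted above. Counting the single upstaged case together
# with the 26 VLE-only detections (27 total) is an assumption.
cohort = 894               # patients with previously diagnosed or suspected BE
neoplasia_total = 76       # procedures with confirmed neoplasia
vle_only = 26 + 1          # detections attributable to VLE-guided sampling alone
soc_found = neoplasia_total - vle_only  # detected by standard-of-care sampling

additional_detection = vle_only / cohort    # share of cohort found only via VLE
relative_yield_gain = vle_only / soc_found  # relative improvement over SoC

print(f"additional detection: {additional_detection:.1%}")       # ~3%
print(f"relative yield improvement: {relative_yield_gain:.0%}")  # ~55%
```

Both reported figures (the ~3% absolute gain and the ~55% relative yield improvement) follow from these counts.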

The number needed to test with VLE to identify neoplasia not detected with standard-of-care technique was 18.7. An average of 1.7 additional sites per patient required targeted tissue acquisition when suspected regions were identified using VLE, compared to an average of 11 random biopsies per patient. A sub-analysis was conducted in the 238 patients with prior BE treatment and either no visible BE or an irregular z-line. From this group, 82% had no focally suspicious findings on WLE examination, and two of these procedures were subsequently diagnosed with neoplasia. Thus, the NPV of WLE for neoplasia was 99%. When combining WLE/NBI with VLE as an adjunct, we found that 49% of the post-treatment procedures had no suspicious WLE or VLE findings. Neoplasia was found in none of these procedures, corresponding to a negative predictive value of 100%.

Advanced imaging techniques including high-definition WLE, NBI, CLE, and chromoendoscopy have continued to improve the evaluation of Barrett's esophagus. However, these provide only superficial epithelial evaluation. VLE breaks this boundary by imaging the mucosa, submucosa, and frequently down to the muscularis propria. It does so while evaluating a large tissue area in a short period of time without sacrificing resolution. This 1000-patient multi-center registry assessed the clinical utility of VLE for the management of esophageal disorders and has demonstrated its potential as an adjunct tool for detecting disease. Abnormalities were found on VLE that were not seen with other imaging in over half of the procedures. Endoscopists using VLE in this study felt that it guided tissue acquisition in over 70% of procedures and BE treatment in the majority of procedures where interventions were performed.
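The negative-predictive-value figures for the post-treatment sub-analysis can be reconstructed from the reported counts (a sketch; the 82% and 49% fractions are rounded in the text, so the derived denominators are approximate):

```python
# Negative predictive values for the post-treatment sub-analysis, derived
# from the counts quoted above. The 82% and 49% fractions are rounded in
# the text, so the reconstructed denominators are approximate.
post_treatment = 238                         # patients with prior BE treatment

wle_negative = round(0.82 * post_treatment)  # no suspicious findings on WLE
wle_false_neg = 2                            # later diagnosed with neoplasia
npv_wle = (wle_negative - wle_false_neg) / wle_negative

combined_negative = round(0.49 * post_treatment)  # negative on WLE/NBI and VLE
npv_combined = (combined_negative - 0) / combined_negative  # no neoplasia found

print(f"NPV, WLE alone:     {npv_wle:.0%}")       # ~99%
print(f"NPV, WLE/NBI + VLE: {npv_combined:.0%}")  # 100%
```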
VLE visualization of subsurface tissue structures allows comprehensive morphological evaluation; accordingly, physicians reported suspicious areas seen only on VLE in more than half of procedures, even when other advanced imaging modalities were also used. Although subjective, these results still provide useful insight into the physicians' perception of the technology. This study found that VLE as an adjunct modality increased neoplasia diagnosis by 3% and improved the neoplasia diagnostic yield by 55% over standard practice and other advanced imaging modalities. For a treatment-naïve population with no focally suspicious regions found on WLE, VLE-guided tissue acquisition improved the neoplasia diagnostic yield by 700%. This finding is impressive, particularly as these procedures were performed prior to the release of a real-time laser marking system.

Laser marking has since been evaluated by Alshelleh et al., who found a statistically significant improvement in neoplasia yield using the VLE laser marking system compared to the standard Seattle protocol. In this registry, an additional 2.3 sites per patient on average required guided biopsy or resection when suspected regions were identified using VLE, while an average of 15.8 random biopsies per patient were performed in the cohort of patients with previously diagnosed or suspected BE. In general, higher tissue sampling density leads to an increased chance of detecting dysplasia due to its focal nature, so taking additional biopsies should increase the diagnostic yield. However, the potential for advanced imaging such as VLE to provide targeted, high-yield biopsies could reduce the total number of biopsies necessary to adequately evaluate the diseased mucosa relative to the Seattle protocol. The combination of a focally unremarkable WLE and VLE examination provided a negative predictive value of 100% for neoplasia in the post-treatment population. Although not reaching statistical significance due to limited sample size, these early results provide promise for the utility of VLE to better predict when there is no disease present, i.e., a 'clean scan.' Such a tool could then potentially allow for extended surveillance intervals, reducing the number of endoscopies needed to manage the patient's needs. The utility of this analysis is subject to several limitations. As a post-market registry study, there was no defined protocol for imaging, image interpretation, or tissue acquisition, and there was no control group for matched population comparisons. The early experience of users in VLE image interpretation may have resulted in overcalling areas of concern. Abnormalities located deeper in the esophageal wall could be targeted with forceps biopsies at one site, while other sites would utilize endoscopic resection techniques that are more likely to remove the target.
All of these discrepancies could affect any calculations regarding the adjunctive yield of VLE-targeted sampling. Further analysis of the global detection rate of dysplasia by site did not reveal any statistical difference. At the time of this study, image interpretation was performed using previously published guidelines for detection of neoplasia in Barrett's esophagus with OCT. Challenges with histopathological diagnosis of LGD limited the development of VLE criteria for LGD. As such, the analyses in this study focused on neoplasia. Current guidelines suggest that treatment of LGD is acceptable, so detection of LGD with VLE should be addressed in a future study. Additionally, the characteristic image features that maximize sensitivity and specificity of confirmatory biopsies must be optimized. Recently, Leggett et al. established an updated step-wise diagnostic algorithm to detect dysplasia based on VLE features similar to those used in this study. This diagnostic algorithm achieved 86% sensitivity, 88% specificity, and 87% diagnostic accuracy for detecting BE dysplasia, with almost perfect interobserver agreement among three raters. Further optimization of VLE image features for identifying dysplasia and neoplasia is ongoing. Other limitations of the study include the lack of central pathology for interpretation of specimens, which could affect the reported benefit of VLE in finding dysplasia. However, this manuscript focuses on neoplasia, where there is less interobserver variability compared to low-grade dysplasia. Finally, as a non-randomized study conducted mostly at large BE referral centers with a possibly higher pre-test probability of neoplasia, the findings' validity in a community setting may be limited. However, the large sample size, its heterogeneity, and the variation in technique by site likely restore at least some of the external validity of the findings.
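For readers less familiar with diagnostic test statistics, the sensitivity, specificity, and accuracy figures quoted for the Leggett et al. algorithm relate to a confusion matrix as follows (the counts below are illustrative only, chosen to reproduce the quoted percentages; the study's actual case numbers are not given here):

```python
# How sensitivity, specificity, and accuracy derive from a confusion
# matrix. The counts below are illustrative only (chosen to reproduce the
# 86%/88%/87% figures); the study's actual case numbers are not given here.
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int):
    """Return (sensitivity, specificity, accuracy) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical balanced reading set: 50 dysplastic, 50 non-dysplastic scans
sens, spec, acc = diagnostic_metrics(tp=43, fn=7, tn=44, fp=6)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, accuracy {acc:.0%}")
# → sensitivity 86%, specificity 88%, accuracy 87%
```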
This registry-based study demonstrates the potential for VLE to fill clinically relevant gaps in our ability to evaluate and manage BE. Physicians perceived significant value of VLE across the BE surveillance and treatment paradigm. Biopsy confirmation demonstrated benefits of VLE for both treatment-naïve and post-treatment surveillance, although pathology results did not always align with physician perception, most likely due to limitations of the technology and image criteria at the time of the study. Given expected refinement and validation of image interpretation, and the availability of laser marking for more accurate biopsy targeting, VLE is well positioned to enhance our ability to identify and target advanced disease and enable a more efficient endoscopic examination with a higher yield of tissue acquisition.

The program included nine in-class nutrition lessons coordinated with garden activities

These spheres of influence are multifaceted and include factors such as income, ethnicity and cultural values and settings such as schools and retail food establishments. Consequently, measurable progress in reducing childhood obesity requires a multifaceted approach: a coordinated, comprehensive program that integrates messages regarding nutrition, physical activity and health with a child's immediate environment and surrounding community. Adequate access to healthy food and physical recreation opportunities is essential to promote sustained behavior changes. Schools and after-school programs provide a unique setting for this approach, as they provide access to children, parents, families, educators, administrators and community members. The purpose of this article is to examine garden-enhanced nutrition education and Farm to School programs. Further, a questionnaire was developed and distributed to UC Cooperative Extension advisors and directors to assess their role in garden-enhanced nutrition education and Farm to School programs. Results from this questionnaire highlight UCCE's integral role in this field.

School gardens were first implemented in the United States at the George Putnam School in Roxbury, Massachusetts, in 1890, and by 1918 there was at least one in every state. During World Wars I and II, more than a million children were contributing to U.S. food production with victory gardens, which were part of the U.S. School Garden Army Program. More recently, incorporating gardens into the educational environment has become more popular worldwide, due partly to the appreciation of the importance of environmental awareness and integrated learning approaches to education.

As the agricultural powerhouse of the nation, California is poised to serve as a model for agriculture-enhanced nutrition and health education. Within California, the impetus to establish gardens in every school gained momentum in 1995, when then-State Superintendent of Public Instruction Delaine Eastin launched an initiative to establish school gardens as learning laboratories or outdoor classrooms. Assembly Bill 1535 created the California Instructional School Garden Program, allowing the California Department of Education to allocate $15 million for grants to promote, develop and sustain instructional school gardens. About 40% of California schools applied for these grants, and $10.9 million was awarded. It has been repeatedly shown that garden-enhanced nutrition education has a positive effect on children's fruit and vegetable preferences and intakes. For example, after a 17-week standards-based, garden-enhanced nutrition education program in which students learned, among other things, that plants and people need similar nutrients, fourth-grade students preferred a greater variety of vegetables than did control students. Many of these improvements persisted and were maintained at a 6-month follow-up assessment. In a similar study of a 12-week program combining nutrition lessons with horticulture, sixth-grade students likewise improved their vegetable preferences and consumption. In addition, after a 13-week garden-enhanced nutrition program, middle school children ate a greater variety of vegetables than they had initially. While garden-enhanced nutrition education is one innovative method to improve children's vegetable preferences and intake, researchers and educators consistently call for multi-component interventions to have the greatest impact on student health outcomes. Suggested additional components include classroom education, Farm to School programs, healthy foods available on campus, family involvement, school wellness policies and community input.

Moreover, the literature indicates that providing children with options to make healthy choices rather than imposing restrictions has long-term positive effects on weight. Taken together, it is reasonable to suggest that we are most likely to achieve long-lasting beneficial changes by coordinating a comprehensive garden-enhanced nutrition education program with school wellness policies, offering healthy foods on the school campus, fostering family and community partnerships and incorporating regional agriculture.

Farm to School programs connect K-12 schools and regional farms, serving healthy, local foods in school cafeterias or classrooms. General goals include improving student nutrition; providing agricultural, health and nutrition education opportunities; and supporting small and mid-sized local and regional farms. Born through a small group of pilot projects in California and Florida in the late 1990s, Farm to School is now offered in all 50 states, with more than 2,000 programs nationwide in 2010. The dramatic increase in the number and visibility of Farm to School programs can likely be attributed to factors including heightened public awareness of childhood obesity, expanding access to local and regional foods in school meals, and concerns about environmental and agricultural issues as well as the sustainability of the U.S. food system. Farm to School programs provide a unique opportunity to address both nutritional quality and food system concerns. From a nutrition and public health standpoint, these programs improve the nutritional quality of meals served to a large and diverse population of children across the country. From a food systems and economic perspective, Farm to School programs connect small and mid-sized farms to the large, stable and reliable markets created by the National School Lunch Program.

Farm to School programs require partnerships that include a state or community organization, a local farmer or agricultural organization, a school nutrition services director and parents. Historically, Farm to School programs are driven, supported and defined by a community. Because they reflect the diverse and unique communities they serve, individual Farm to School programs also vary from location to location, in addition to sharing the characteristics described above. The first national Farm to School programs were initiated in 2000 and soon gained momentum in California, with support from the USDA Initiative for Future Agriculture and Food Systems as well as the W.K. Kellogg Foundation. In 2005, Senate Bill 281 established the California Fresh Start Program to encourage and support additional portions of fresh fruits and vegetables in the School Breakfast Program. This bill also provided the California Department of Education with $400,000 for competitive grants to facilitate developing the California Fresh Start Program. Concomitant with the growth of Farm to School programs, the National Farm to School Network was formed in 2007 with input from over 30 organizations and today engages food service, agricultural and community leaders in all 50 states. The evolution of this network has influenced school food procurement and nutrition/food education nationwide.

Evaluations of Farm to School impact have been conducted since the program's inception. A 2008 review of 15 Farm to School evaluation studies, which were conducted between 2003 and 2007, showed that 11 specifically assessed Farm to School-related dietary behavior changes. Of these 11 studies, 10 corroborated the hypothesis that increased exposure to fresh Farm to School produce results in positive dietary behavior changes.
In addition, a 2004-2005 evaluation of plate waste at the Davis Joint Unified School District salad bar showed that 85% of students took produce from the salad bar and that 49% of all selected salad bar produce was consumed. Additionally, school record data demonstrate that throughout the 5 years of the 2000-to-2005 Farm to School program, overall participation in the school lunch program ranged from a low of 23% of enrollment to a high of 41%, with an overall average of 32.4%. This compared to 26% participation before salad bars were introduced. Overall participation in the hot lunches averaged 27% of enrollment. While Farm to School evaluations generally indicate positive outcomes, conclusive statements regarding the overall impact of such programs on dietary behavior cannot be made. This can be attributed to the substantial variation in Farm to School structure from district to district, and to variation in the study design and methodologies of early program evaluations. Methods for evaluating dietary impact outcomes most commonly include using National School Lunch Program participation rates and food production data as proxies for measuring consumption.

Additional evaluation methods include using self-reported measures of consumption such as parent and student food recalls or frequency questionnaires, and direct measures of consumption such as school lunch tray photography and plate waste evaluation. There are relatively few studies using an experimental design to evaluate the impact of Farm to School programs on fruit and vegetable intake, and even fewer of these studies use controls. Moreover, the Farm to School evaluation literature has no peer-reviewed dietary behavior studies using a randomized, controlled experimental design, which is undoubtedly due to the complex challenges inherent in community research. For example, schools may view the demands of research as burdensome or may question the benefits of serving as control sites. Due partly to its year-round growing season, California has more Farm to School programs than most, if not all, states. UC Davis pioneered some of the early uncontrolled studies quantifying Farm to School procurement, costs and consumption. UC ANR is now conducting new controlled studies to collect more rigorous data, which will differentiate outcomes of Farm to School programs from those due to other environmental factors. To clarify the role of UC ANR in garden-based nutrition education and Farm to School programs, a questionnaire was developed and administered through Survey Monkey in November 2011. This survey was sent to 60 UCCE academic personnel, including county directors; Nutrition, Family and Consumer Sciences advisors; 4-H Youth Development advisors; and others. For the purposes of this questionnaire, Farm to School was broadly defined as a program that connects K-12 schools and local farms and has the objectives of serving healthy meals in school cafeterias; improving student nutrition; providing agriculture, health and nutrition education; and supporting local and regional farmers. Survey. 
A cover letter describing the purpose of the survey and a link to the questionnaire were emailed to representatives from all UCCE counties. The questionnaire was composed of 26 items that were either categorical "yes/no/I'm not sure" questions or open-ended questions allowing for further explanation. An additional item was provided at the end of the questionnaire for comments. Respondents were instructed to return the survey within 11 days. A follow-up email was sent to all participants after 7 days. This protocol resulted in a 28% response rate, typical for a survey of this kind. Respondents represented 21 counties, with some representing more than one county; in addition, one was a representative from a campus-based unit of ANR. Questionnaire respondents included three county directors, six NFCS advisors, four 4-HYD advisors, one NFCS and 4-HYD advisor, and three other related UCCE academic personnel. The responding counties were Riverside, San Mateo and San Francisco; San Bernardino, Stanislaus and Merced; Contra Costa, Yolo, Amador, Calaveras, El Dorado and Tuolumne; Mariposa, Butte, Tulare, Alameda, Shasta-Trinity, Santa Clara, Ventura and Los Angeles. Farm to School and school gardens. All 21 counties responding to the survey reported that they had provided a leadership role in school gardens, after-school gardens and/or Farm to School programs during the previous 5 years. Five out of 17 respondents reported that their counties provided a leadership role in Farm to School programs. Fourteen out of 17 respondents indicated that they individually played a leadership role in school garden programs, including serving as a key collaborator on a project, organizing and coordinating community partners, acting as school/agriculture stakeholders and/or serving as a principal investigator, co-principal investigator or key collaborator on a research study.
The most frequently reported reasons for having school and after-school gardens were to teach nutrition, enhance core academic instruction and provide garden produce. Additional reasons cited in the free responses included to study the psychological impacts of school gardens, enhance science and environmental education, teach composting, increase agricultural literacy, teach food origins, participate in service learning and provide a Gardening Journalism Academy. Reasons for success. The factors most frequently cited as contributing to successful school and after-school garden and Farm to School programs were community and nonparent volunteers, outside funding and enthusiastic staff. The 17 respondents indicated that the success of these programs was also aided by multidisciplinary efforts within UC ANR, the Farm Bureau, the Fair Board and 4-H Teens as Teachers. Barriers. The most common factors cited as barriers to school and after-school gardens and Farm to School programs were lack of time and lack of knowledge and experience among teachers and staff. Additional barriers included lack of staff, cutbacks, competing programs for youth and lack of after-school garden-related educational materials for mixed-age groups. With regard to Farm to School programs, one respondent perceived increased expense to schools, an absence of tools to link local farmers with schools, a lack of growers and a lack of appropriate facilities in school kitchens.

Forward osmosis technology is also commonly used for food and drug processing

Areas with high N2O emissions have relatively low oxygen concentrations due to increased nutrient runoff from land. To diminish these negative environmental impacts, fertigation treatment could reduce the amount of nitrogen and nutrients input to the soil, prevent over-fertilization, and reduce excess nutrient runoff to rivers. Forward osmosis has many advantages with regard to its physical footprint: a high wastewater recovery rate, minimal resupply, and low energy cost all contribute to the sustainability of forward osmosis. In addition, forward osmosis has a lower membrane fouling propensity compared to pressure-driven membrane processes. Forward osmosis is usually applied as pretreatment for reverse osmosis; the total energy consumption of a combined FO and RO system is lower than that of reverse osmosis alone. Moreover, osmotic backwashing can be used to clean the membrane while reducing energy consumption at the same time. When nanofiltration serves as post-treatment in fertilizer-drawn forward osmosis, it can recover excess fertilizer from the backwash and return it as concentrated fertilizer draw solution. The energy consumption of FDFO brackish water recovery using cellulose triacetate membranes is affected by draw solution concentration, flow rates, and membrane selection. Membrane orientation and flow rates have a minor effect on specific energy consumption compared to draw solution concentration. A diluted fertilizer draw solution can boost the system's performance, while a higher draw solution concentration can lower the specific energy consumption.

Moreover, a lower flow rate with a higher draw solution concentration can reduce the energy consumption of fertilizer-drawn forward osmosis to its lowest level. Any additional post-treatment process increases the energy consumption of the system; however, nanofiltration is necessary for desalination and direct fertigation treatment. The energy consumption of the nanofiltration process is determined by operational factors such as recovery rate, membrane lifetime, and membrane cleaning. Forward osmosis technology achieves a 40-50% reduction in specific energy consumption compared to other alternatives. As a result, FO technology has the potential for wide adoption in drinking water treatment. Other areas of application for FO include seawater desalination/brine removal, direct fertigation, wastewater reclamation, and wastewater minimization. Without the draw solution recovery step, forward osmosis can be applied as osmotic concentration. For example, fertilizer-drawn forward osmosis is widely accepted for freshwater supply and direct fertigation. However, in terms of the evaporative desalination process, it is more practical to treat water with lower total dissolved solids/salinity. Forward osmosis can be combined with other treatment methods such as reverse osmosis, nanofiltration, or ultrafiltration for different water treatment purposes. To be more specific, forward osmosis can serve as an alternative pretreatment in a conventional filtration/separation system; as an alternative to a conventional membrane treatment system; or as a post-treatment process to recover volume from excess waste. The standalone forward osmosis process is usually combined with additional post-treatment to meet the water quality standards for different purposes.

Forward osmosis has been widely researched. In this review, we focused on fertilizer-drawn forward osmosis, which can not only remove brine but also control multiple nutrient inputs such as nitrogen, phosphorus, and potassium. Since a proper draw solution can reduce concentration polarization, draw solution selection is vital for both FO and FDFO processes. Moreover, different fertilizer draw solutions have different influences on energy consumption. The nutrient concentrations of treated water are controllable using the fertilizer-drawn forward osmosis treatment method. The composition of nutrients can be adjusted in the draw solution to produce water with different ratios of nutrients, which makes fertilizer-drawn forward osmosis a nearly ideal treatment method for direct fertigation. To reduce N2O emissions, the removal rate of nitrogen in fertigation water needs to be improved using fertilizer-drawn forward osmosis and nanofiltration. When nanofiltration is applied as post-treatment with fertilizer-drawn forward osmosis, the nitrogen removal rate can reach up to 82.69% when using SOA as the draw solution. This shows that fertigation treatment can reach a higher standard of water quality by attenuating nitrogen concentrations. As a result, lower nitrogen input in fertigation can significantly decrease nitrous oxide emissions from the soil for sustainable agricultural use. Forward osmosis can also be combined with other treatment methods to address the freshwater shortage problem. Beyond the traditional seawater desalination treatment incorporating forward osmosis and reverse osmosis, the hybrid process of reverse osmosis and fertilizer-drawn forward osmosis can remove brine from water and lower the final nutrient concentration with a higher recovery rate.
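The removal-rate figure quoted above is simply the fraction of feed-side solute rejected by the treatment train, which can be sketched as (the concentrations below are illustrative; only the 82.69% figure comes from the text):

```python
# Solute removal rate as used for the 82.69% nitrogen figure above: the
# fraction of feed-side concentration rejected by the FDFO + NF train.
# The concentrations below are illustrative; only the percentage comes
# from the text.
def removal_rate(c_feed: float, c_product: float) -> float:
    """Fractional removal of a solute between feed and product water."""
    return (c_feed - c_product) / c_feed

# e.g. a hypothetical total-N of 100 mg/L in feed and 17.31 mg/L in product
print(f"{removal_rate(100.0, 17.31):.2%}")  # → 82.69%
```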
Lastly, the water flux, recirculation rate, draw solution concentration, membrane lifetime, and membrane cleaning schedule can all be adjusted to minimize energy consumption. In conclusion, FO and FDFO technologies are both environmentally friendly and economically viable for desalination and fertigation.

Evapotranspiration estimation is important for precision agriculture, especially precision water management. Mapping ET temporally and spatially can identify variations in the field, which is useful for evaluating soil moisture and assessing crop water status. ET estimation can also benefit water resource management and weather forecasting. ET is a combination of two separate processes, evaporation and transpiration. Evaporation is the process whereby liquid water is converted to water vapor through latent heat exchange. Transpiration is the process of the vaporization of liquid water contained in plant tissues and the removal of that vapor to the atmosphere. The current theory for transpiration comprises the following three steps. First, the conversion of liquid-phase water to water vapor causes canopy cooling through latent heat exchange; thus, canopy temperature can be used as an indicator of ET. Second, water vapor diffuses from inside plant stomata on the leaves to the surrounding atmosphere. Third, atmospheric mixing by convection or diffusion transports vapor near the plant surfaces to the upper atmosphere or off-site away from the plant canopy. Usually, evaporation and transpiration occur simultaneously. Direct ET measurement methods, however, are usually point-specific or area-weighted and cannot be extended to a large scale because of the heterogeneity of the land surface. The experimental equipment is also costly and requires substantial expense and effort; lysimeters, for example, are available only to a small group of researchers. Indirect methods include energy balance methods and remote sensing methods. Among energy balance methods, the Bowen ratio and eddy covariance have been widely used in ET estimation; however, they are also area-weighted measurements. Remote sensing techniques can detect variations in vegetation and soil conditions over space and time.
Thus, they have been considered among the most powerful methods for mapping and estimating spatial ET over the past decades. Remote sensing models have been useful in accounting for the spatial variability of ET at regional scales when using satellite platforms such as Landsat and ASTER. Since satellite data became available, several remote sensing models have been developed to estimate ET, such as the surface energy balance algorithm for land, mapping evapotranspiration with internalized calibration, the dual temperature difference, and the Priestley-Taylor TSEB. Remote sensing techniques can provide information such as normalized difference vegetation index, leaf area index, surface temperature, and surface albedo. Related research on these parameters has been discussed by different researchers. With the emergence of small UAVs as a new remote sensing platform, researchers are very interested in their potential for precision agriculture, especially for heterogeneous crops such as vineyards and orchards.

UAVs overcome some of the remote sensing limitations faced by satellites. For example, satellite remote sensing is prone to cloud cover, whereas UAVs fly below the clouds. Unlike satellites, UAVs can be operated at any time if the weather is within operating limitations. Satellites have fixed flight paths; UAVs are more mobile and adaptive for site selection. Lightweight sensors mounted on UAVs, such as RGB cameras, multispectral cameras, and thermal infrared cameras, can be used to collect high-resolution images. The higher temporal and spatial resolution of the images, relatively low operational costs, and nearly real-time image acquisition make UAVs an ideal platform for mapping and monitoring ET. Many researchers have already used UAVs for ET estimation, as shown in Table 1. For example, Ortega-Farías et al. implemented a remote sensing energy balance (RSEB) algorithm for estimating energy components in an olive orchard, such as incoming solar radiation, sensible heat flux, soil heat flux, and latent heat flux. Optical sensors were mounted on a UAV to provide high-spatial-resolution images. Using the UAV platform, experimental results showed that the RSEB algorithm can estimate latent heat flux and sensible heat flux with errors of 7% and 5%, respectively. This demonstrated that UAVs can serve as an excellent platform to evaluate the spatial variability of ET in an olive orchard. There are two objectives for this paper. First, to examine current applications of UAVs for ET estimation. Second, to explore the current uses and limitations of UAVs, such as UAVs' technical and regulatory restrictions, camera calibrations, and data processing issues. There are many other ET estimation methods, such as the surface energy balance index, crop water stress index, simplified surface energy balance index, and surface energy balance system, which have not been applied with UAVs. Therefore, they are out of the scope of this article.
This study is not intended to provide an exhaustive review of all direct or indirect methods that have been developed for ET estimation. The rest of the paper is organized as follows: Section 2 introduces the different UAV types being used for ET estimation and compares several commonly used lightweight sensors. The ET estimation methods being used with UAV platforms, as shown in Table 1, are discussed. In Section 3, the results of different ET estimation methods and models are compared and discussed. Challenges and opportunities, such as thermal camera calibration, UAV path planning, and image processing, are discussed in Section 4. Lastly, the authors share views regarding ET estimation with UAVs in future research and draw concluding remarks. Many kinds of UAVs are used for different research purposes, including ET estimation. Some popular UAV platforms are shown in Figure 1. Typically, there are two types of UAV platforms, fixed-wing and multirotor. Fixed-wing UAVs can usually fly longer, about 2 h, and carry a larger payload, which is suitable for a large field. Multirotors can fly about 30 min, which is suitable for short flight missions. Both have been used in agricultural research and promise great potential in ET estimation. Many sensors can be mounted on UAVs to collect imagery, such as multispectral and thermal images, for ET estimation. For example, the Survey 3 camera has four bands (blue, green, red, and near-infrared), an image resolution of 4608 × 3456 pixels, and a spatial resolution of 1.01 cm/pixel. The Survey 3 camera has a fast interval timer: 2 s for JPG mode and 3 s for RAW + JPG mode. A faster interval timer benefits the overlap design of UAV flight missions, for example by reducing flight time and enabling higher overlap. Another commonly used multispectral camera is the RedEdge M.
The RedEdge M has five bands: blue, green, red, near-infrared, and red edge. It has an image resolution of 1280 × 960 pixels, with a 46° field of view. With a Downwelling Light Sensor (DLS), a 5-band light sensor that connects to the camera, the RedEdge M can measure the ambient light during a flight mission for each of the five bands and record the light information in the metadata of the images captured by the camera. After camera calibration, the information detected by the DLS can be used to correct for lighting changes during a flight, such as changes in cloud cover. The thermal camera ICI 9640 P has been used for collecting thermal images, as reported in previous studies. This thermal camera has a resolution of 640 × 480 pixels, a spectral band from 7 to 14 µm, and dimensions of 34 × 30 × 34 mm. Its accuracy is designed to be ±2 °C. A Raspberry Pi Model B computer can be used to trigger the thermal camera during flight missions. The SWIR 640 P-Series, a shortwave infrared camera, can also be used for ET estimation. Its spectral band is from 0.9 µm to 1.7 µm, its accuracy is ±1 °C, and it has a resolution of 640 × 512 pixels.
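When planning flights with cameras like these, the field of view and pixel count determine the ground footprint and ground sampling distance (GSD) at a given altitude. The sketch below uses the RedEdge M's horizontal specs quoted above (46° field of view, 1280 pixels across); the 100 m altitude is a hypothetical example, and the simple pinhole geometry ignores lens distortion.

```python
# Relating flight altitude to image footprint and ground sampling distance
# (GSD) from a camera's field of view and pixel count. Simple pinhole
# geometry; altitude is a hypothetical example.
import math

def swath_width(altitude_m, fov_deg):
    """Ground footprint across the FOV: w = 2 * h * tan(FOV / 2)."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def gsd(altitude_m, fov_deg, pixels_across):
    """Ground sampling distance (m/pixel) across the image width."""
    return swath_width(altitude_m, fov_deg) / pixels_across

# RedEdge M horizontal specs: 46 degree FOV, 1280 pixels across.
w = swath_width(100.0, 46.0)
g = gsd(100.0, 46.0, 1280)
print(round(w, 1), round(g * 100, 2))  # ~84.9 m swath, ~6.63 cm/pixel
```

The same calculation, run per camera, is a quick way to check whether a planned altitude yields the spatial resolution an ET model needs.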

Crop yields can also vary endogenously in response to demand and price changes

Typically, they allow for endogenous structural adjustments in land use, management, commodity production, and consumption in response to exogenous scenario drivers. However, with several components of productivity parameters endogenously determined, it can be difficult to isolate the potential role of livestock efficiency changes due to technological breakthroughs or policy incentives. For example, as production decreases due to decreasing demand, so could productivity. In this case, a design feature can become a design flaw for sensitivity analysis and policy assessment focused on individual key system parameters, even if model results can be further decomposed to disentangle endogenous and exogenous productivity contributions. Accounting-based land sector models, such as the FABLE Calculator, which we also employ in the current study, can offer similarly detailed sector representation without the governing market mechanisms, thus allowing fully tunable parameters for exploring policy impacts. This feature facilitates quantifying uncertainty and bounding estimates through sensitivity analyses. The FABLE Calculator is a sophisticated land use accounting model that can capture several of the key determinants of agricultural land use change and GHG emissions without the complexity of an optimization-based economic model. Its high degree of transparency and accessibility also makes it an appealing tool to facilitate stakeholder engagement. This paper explores the impacts of healthier diets and increased crop yields on U.S. GHG emissions and land use, as well as how these impacts vary across assumptions of future livestock productivity and ruminant density in the U.S. We employ two complementary land use modeling approaches.

The first is the FABLE Calculator, a land use and GHG accounting model based on biophysical characteristics of the agricultural and land use sectors with high agricultural commodity representation. The second is a spatially explicit partial equilibrium optimization model for global land use systems. The combination of these modeling approaches allows us to provide both detailed representation of agricultural commodities with high flexibility in scenario design and a dynamic representation of land use in response to known economic forces, qualities that are difficult to achieve in a single model. Both modeling frameworks allow us to project U.S. national-scale agricultural production, diets, land use, and carbon emissions and sequestration to 2050 under varying policy and productivity assumptions. Our work makes several advances to sustainability research. First, using agricultural and forestry models that capture market and intersectoral dynamics, this is the first non-LCA study to examine the sustainability of a healthier average U.S. diet. Second, using two complementary modeling approaches, this is the first study to explore the GHG and land use effects of the interaction of healthy diets and agricultural productivity. Specifically, we examined key assumptions about diet, livestock productivity, ruminant density, and crop productivity. Two of the key production parameters we consider, livestock productivity and stocking density, are affected by a transition to healthier diets but have not been extensively discussed in the agricultural economic modeling literature. Third, we isolate the effects of healthier diets in the U.S. alone, in the rest of the world, and globally, which is especially important given the comparative advantage of U.S.
agriculture in global trade. To model multiple policy assumptions across dimensions of food and land use with full flexibility in parameter assumptions and choice of underlying data sets, we customized a land use accounting model built in Excel, the FABLE Calculator, for the U.S. Below we describe the design of the Calculator; for more details we direct the reader to the complete model documentation.

The FABLE Calculator represents 76 crop and livestock products using data from the FAOSTAT database. The model first specifies demand for these commodities under selected scenarios; the Calculator then computes agricultural production and other metrics: land use change, food consumption, trade, GHG emissions, water use, and land for biodiversity. The key advantages of the Calculator include its speed, the number and diversity of its scenario design elements, its simplicity, and its transparency. However, unlike economic models using optimization techniques, the Calculator does not consider commodity prices in generating the results, does not have any spatial representation, and does not represent different production practices. The following assumptions can be adjusted in the Calculator to create scenarios: GDP, population, diet composition, population activity level, food waste, imports, exports, livestock productivity, crop productivity, agricultural land expansion or contraction, reforestation, climate impacts on crop production, protected areas, post-harvest losses, and biofuels. Scenario assumptions in the Calculator rely on "shifters", time-step-specific relative changes that are applied to an initial historic value using a user-specified implementation rate. The Calculator performs a model run through a sequence of steps, as follows: calculate human demand for each commodity; calculate livestock production; calculate crop production; calculate pasture and cropland requirements; compare the land use requirements with the available land, accounting for imposed restrictions and reforestation targets; calculate the amount of feasible pasture and cropland; calculate the feasible crop and livestock production; calculate feasible human demand; and calculate indicators. See Figure S1 in the Supplementary Materials for a diagram of these steps. Using U.S. national data sources, we modified or replaced the US FABLE Calculator's default data inputs and growth assumptions, which are based on Food and Agriculture Organization data.
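The accounting sequence described above can be sketched in miniature for a single crop commodity: demand implies a land requirement, the land balance caps what is feasible, and production and demand are scaled back accordingly. All numbers and the single-commodity simplification are hypothetical, not Calculator values.

```python
# Miniature version of an accounting-model step sequence: compute demand,
# the cropland it requires, cap by available land, and report feasible
# production and demand. All numbers are hypothetical simplifications.

def run_step_sequence(demand_t, yield_t_per_ha, available_ha):
    required_ha = demand_t / yield_t_per_ha          # land requirement
    feasible_ha = min(required_ha, available_ha)     # land balance cap
    feasible_production = feasible_ha * yield_t_per_ha
    feasible_demand = min(demand_t, feasible_production)
    return required_ha, feasible_ha, feasible_production, feasible_demand

# Demand outstrips the land base: production and demand are scaled back.
req, ha, prod, dem = run_step_sequence(demand_t=150.0, yield_t_per_ha=3.0,
                                       available_ha=40.0)
print(req, ha, prod, dem)   # -> 50.0 40.0 120.0 120.0
```

No prices appear anywhere in this chain, which is what makes every parameter independently tunable, and also what distinguishes the accounting approach from the optimization models discussed below.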

Specifically, we used crop and livestock productivity assumptions from the U.S. Department of Agriculture (USDA), grazing/stocking intensity from U.S. studies, miscanthus and switchgrass bioenergy feedstock productivity assumptions from the Billion Ton study, updated beef and other commodity exports using USDA data, and created a "Healthy Style Diet for Americans" using the 2015–2020 USDA Dietary Guidelines for Americans. See SM Table S6 for all other US Calculator data and assumptions. We used these U.S.-specific data updates to construct U.S. diet, yield, and livestock scenarios and sensitivities. See the model documentation for a full description of the other assumptions and data sources used in the default version of the FABLE Calculator. As a complement to the FABLE Calculator's exogenously determined trade flows, we used GLOBIOM, a widely used and well-documented global spatially explicit partial equilibrium model of the forestry and agricultural sectors (documentation can be found at the GLOBIOM GitHub development site), to capture the dynamics of endogenously determined international trade. Unlike the FABLE Calculator, GLOBIOM is a spatial equilibrium economic optimization model based on calibrated demand and supply curves as typically employed in economic models. GLOBIOM represents 37 economic production regions, with regional consumers optimizing consumption based on relative output prices, income, and preferences. The model maximizes the sum of consumer and producer surplus by solving for market equilibrium, using the spatial equilibrium modeling approach described in McCarl and Spreen and Takayama and Judge. Product-specific demand curves and growth rates over time allow for selective analysis of preference or dietary change by augmenting demand shift parameters over time to reflect differences in relative demand for specific commodities.
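The surplus-maximizing logic described above can be illustrated with a toy single-market equilibrium: with linear inverse demand P = a - bQ and inverse supply P = c + dQ, maximizing total surplus equates the two, and a dietary-change scenario acts as a shift of the demand intercept. All numbers are invented for illustration and are not GLOBIOM parameters.

```python
# Toy single-market partial equilibrium in the spirit of surplus
# maximization: inverse demand P = a - b*Q, inverse supply P = c + d*Q.
# A demand "shift" moves the intercept a, the way dietary-change
# scenarios augment demand parameters. All numbers are illustrative.

def equilibrium(a, b, c, d):
    """Solve a - b*Q = c + d*Q for the equilibrium quantity and price."""
    q = (a - c) / (b + d)
    return q, a - b * q

q0, p0 = equilibrium(a=100.0, b=1.0, c=10.0, d=0.5)   # baseline
q1, p1 = equilibrium(a=85.0,  b=1.0, c=10.0, d=0.5)   # demand shifted down
print(q0, p0, q1, p1)   # -> 60.0 40.0 50.0 35.0
```

Note that both quantity and price fall with the demand shift; in a multi-region model this price response is what propagates a U.S. diet change into production and trade adjustments elsewhere.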
Production possibilities in GLOBIOM apply spatially explicit information aggregated to Simulation Units, which are aggregates of 5 arcmin pixels of the same altitude, slope, and soil class, within the same 30 arcmin pixel and the same country. Land use, production, and prices are calibrated to FAOSTAT data from the 2000 historic period. Production system parameters and emissions coefficients for specific crop and livestock technologies are based on detailed biophysical process models, including EPIC for crops and RUMINANT for livestock. Livestock and crop productivity changes are reflected by both endogenous and exogenous components. For crop production, GLOBIOM yields can be shifted exogenously to reflect technological or environmental change assumptions and their associated impact on yields. Exogenous yield changes are accompanied by changes in input use intensity and costs. A similar approach has been applied in other U.S.-centric land sector models, including the intertemporal approach outlined in Wade et al. Furthermore, reflecting potential yield growth with input intensification per unit area is consistent with observed intensification of some inputs in the U.S. agricultural system, including nitrogen fertilizer intensity, which grew approximately 0.4% per year from 1988 to 2018.
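The pairing of an exogenous yield shift with compounding input intensification can be sketched as follows; the base yield, the 20% shift, and the 30-year horizon are hypothetical, while the 0.4%/yr growth rate echoes the nitrogen-intensity trend cited above.

```python
# Sketch of an exogenous yield shifter accompanied by input
# intensification: yields are shifted relative to a base value, and input
# use per hectare compounds at an annual growth rate. Base values and the
# 20% shift are hypothetical; 0.4%/yr mirrors the nitrogen trend above.

def shifted_yield(base_yield, relative_shift):
    """Exogenous technological/environmental shift applied to a base yield."""
    return base_yield * (1.0 + relative_shift)

def input_intensity(base_intensity, annual_growth, years):
    """Input use per hectare compounding at a fixed annual growth rate."""
    return base_intensity * (1.0 + annual_growth) ** years

y = shifted_yield(base_yield=3.0, relative_shift=0.20)           # t/ha
n = input_intensity(base_intensity=100.0, annual_growth=0.004, years=30)
print(round(y, 2), round(n, 1))   # -> 3.6 112.7
```

Coupling the two terms is the modeling choice at issue: a "free" yield gain with no input or cost response would overstate the benefits of productivity growth.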

Higher prices can induce production system intensification or crop mix shifts across regions to exploit regional comparative advantages. GLOBIOM accounts for several different crop management techniques, including subsistence-level, low-input, high-input, and high-input irrigated systems. The model simulates the spatiotemporal allocation of production patterns and bilateral trade flows for key agriculture and forest commodities. Regional trade patterns can shift depending on changes in market or policy factors, which Baker et al. and Janssens et al. explore in greater detail, along with a more comprehensive documentation of the GLOBIOM approach to international trade dynamics, including cost structures and the drivers of expanding, contracting, or establishing new bilateral trade flows. This approach allows for flexibility in trade adjustments at both the intensive and extensive margins given a policy or productivity change in a given region. GLOBIOM has been applied extensively to a wide range of relevant topics, including climate impacts assessment, mitigation policy analysis, diet transitions, and sustainable development goals. We designed new U.S. and rest-of-the-world (ROW) diet and yield scenarios, and ran all scenarios at medium resolution for the U.S. and coarse resolution for ROW. We chose Shared Socioeconomic Pathway 2 macroeconomic and population growth assumptions for all parameters across all scenarios when not specified or overridden by scenario assumptions. We aligned multiple assumptions in the FABLE Calculator with GLOBIOM inputs and/or outputs to isolate the impacts of specific parameter changes in livestock productivity and ruminant density. Specifically, we used the same set of U.S.
healthy diet shifters in both models, but aligned the US FABLE Calculator's crop yields and trade assumptions with GLOBIOM outputs to isolate the effects of increasing the ruminant livestock productivity growth rate and reducing the ruminant grazing density using the Calculator. While we developed high and baseline crop yield inputs for GLOBIOM, actual yields are reported because of the endogenous nature of yields in GLOBIOM. This two-model approach allows us to explore the impact of exogenous changes to the livestock sector that cannot be fully exogenous in GLOBIOM. Subsequent methods sections describe each of these scenarios and sensitivity inputs in greater detail. We constructed a "Healthy U.S. diet" using the "Healthy U.S.-style Eating Pattern" from the USDA and the U.S. Department of Health and Human Services' 2015–2020 Dietary Guidelines for Americans (DGA). We use a 2600 kcal average diet. This is a reduction of about 300 kcal from the current average U.S. diet, given that the current diet is well over the Minimum Dietary Energy Recommendation of 2075 kcal, computed as a weighted average of energy requirements per sex, age, and activity level and the population projections by sex and age class following the FAO methodology. The DGA recommends quantities of aggregate and specific food groups in units of ounces and cup-equivalents on a daily or weekly basis. We chose representative foods in each grouping to convert volume or mass recommendations into kcal/day equivalents and assigned groupings and foods to their closest equivalent US Calculator product grouping. For DGA food groups that consist of more than one US Calculator product group, e.g., "Meats, poultry, eggs", we used the proportion of each product group in the baseline American diet expressed in kcal/day and applied it to the aggregated kcal from the DGA to get the recommended DGA kcal for each product group.
We made one manual modification to this process by increasing the DGA recommendation for beef from a calculated value of 36 kcal/day to 50 kcal/day, since trends in the last decade have shown per capita beef consumption exceeding that of pork. This process led to a total daily intake of 2576 kcal for the healthy U.S. diet. The baseline, average U.S. diet is modeled in the US FABLE Calculator using FAO-reported values on livestock and crop production by commodity in weight for use as food in the U.S., applying the share of each commodity that is wasted, allocating the weight of each commodity to specific food product groups, converting weight to kcal, and finally dividing by the total population and days in a year to get per capita kcal/day. See the Calculator for more details and commodity-specific assumptions. This healthy U.S. diet expressed in kcal was used directly in the Calculator as a basis for human consumption demand calculations for specific crop and livestock commodities.
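The proportional allocation step described above can be sketched as follows: kcal recommended for an aggregate DGA food group are split across product groups according to their shares in the baseline diet. The product names and kcal figures are invented for illustration, not the study's actual values.

```python
# Sketch of proportional kcal allocation: an aggregate DGA food-group
# recommendation is split across product groups by their baseline-diet
# shares. Names and numbers are invented for illustration.

def allocate_dga_kcal(dga_group_kcal, baseline_kcal_by_product):
    """Split an aggregate DGA kcal recommendation by baseline shares."""
    total = sum(baseline_kcal_by_product.values())
    return {product: dga_group_kcal * kcal / total
            for product, kcal in baseline_kcal_by_product.items()}

# Hypothetical baseline shares within "Meats, poultry, eggs" (kcal/day):
baseline = {"beef": 80.0, "pork": 60.0, "poultry": 120.0, "eggs": 40.0}
allocated = allocate_dga_kcal(240.0, baseline)
print({k: round(v, 1) for k, v in allocated.items()})
# -> {'beef': 64.0, 'pork': 48.0, 'poultry': 96.0, 'eggs': 32.0}
```

A manual override like the beef adjustment described above would simply replace one allocated value after this step, with the total no longer required to match the aggregate exactly.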

The harvested materials were frozen and ground into fine powder in liquid nitrogen

Previous studies have shown that SL promotes photomorphogenesis by increasing HY5 levels. However, the molecular links from SL signaling to HY5 regulation have remained unclear. Our results show that BZS1 mediates SL regulation of HY5 levels and photomorphogenesis. Similar to hy5-215, BZS1-SRDX seedlings are partially insensitive to GR24 treatment under light, which indicates that BZS1 plays a positive role in SL regulation of seedling morphogenesis. Indeed, BZS1 is the only member of subfamily IV of the B-box protein family that is regulated by SL, suggesting that BZS1 plays a unique role in SL regulation of photomorphogenesis. As BZS1 increases HY5 levels, SL activation of BZS1 expression would contribute, together with inactivation of COP1, to SL-induced HY5 accumulation. On the other hand, the BZS1-SRDX plants showed normal branching phenotypes, which suggests that BZS1 is involved only in SL regulation of HY5 activity and seedling photomorphogenesis, not shoot branching. Our finding of BZS1 function in the SL response further supports a key role for BZS1 in the integration of light, BR, and SL signals to control seedling photomorphogenesis. To generate 15N-labeled seeds, Arabidopsis plants were grown hydroponically in diluted Hoagland solution containing 10 mM K15NO3. One-eighth-diluted Hoagland medium was used at the seedling stage and 1/4 Hoagland medium when the plants started to bolt. After the siliques were fully developed, 1/8 Hoagland medium was used until the seeds were fully mature. For the SILIA-IP-MS assay, the 14N- or 15N-labeled seeds were grown on Hoagland medium containing 10 mM K14NO3 or K15NO3, respectively, for 5 days under constant white light.

The seedlings were harvested and ground to a fine powder in liquid nitrogen. Five grams each of 14N-labeled BZS1-YFP or YFP and 15N-labeled wild-type tissue powder were mixed, and total proteins were extracted using extraction buffer. After removing the cell debris by centrifugation, 20 μL GFP-Trap®_MA Beads were added to the supernatant and incubated in the cold room for 2 h with constant rotation. The beads were washed three times with IP wash buffer. The proteins were eluted twice using 50 μL 2× SDS sample loading buffer by incubating at 95°C for 10 min. The isotope labels were switched in repeat experiments. The eluted proteins were separated on a NuPAGE® Novex 4–12% Bis-Tris Gel. After Colloidal Blue staining, the gel was cut into five fractions for trypsin digestion. The in-gel digestion procedure was performed according to Tang et al. Extracted peptides were analyzed by liquid chromatography–tandem mass spectrometry (LC-MS/MS). The LC separation was performed using an Eksigent 425 NanoLC system with a C18 trap column and a C18 analytical column. Solvent A was 0.1% formic acid in water, and solvent B was 0.1% formic acid in acetonitrile. The flow rate was 300 nL/min. The MS/MS analysis was conducted with a Thermo Scientific Q Exactive mass spectrometer in positive ion mode and data-dependent acquisition mode to automatically switch between MS and MS/MS acquisition. Identification and quantification were performed with the pFind and pQuant software in open search mode. The software parameters were set as follows: parent mass tolerance, 15 ppm; fragment mass tolerance, 0.6 Da. The FDR of the pFind analysis was 1% for peptides. The Arabidopsis TAIR10 database was used for the data search. Three-day-old Arabidopsis seedlings expressing BZS1-YFP or YFP alone were grown under constant light and used for the BZS1-COP1 co-immunoprecipitation assay.
For the BZS1, HY5 and STH2 co-immunoprecipitation assay, about one-month-old healthy Nicotiana benthamiana leaves were infiltrated with Agrobacterium tumefaciens GV3101 harboring corresponding plasmids.

The plants were then grown under constant light for 48 h, and the infiltrated leaves were collected. Total proteins from 0.3 g tissue powder were extracted with 0.6 mL extraction buffer. The lysate was pre-cleared by centrifugation twice at 20,000 g for 10 min at 4°C, and then diluted with an equal volume of extraction buffer without Triton X-100. Twenty microliters of Pierce Protein A Magnetic Beads coupled with 10 μg anti-GFP polyclonal antibody were added to each protein extract and incubated at 4°C for 1 h with rotation. The beads were then collected with a DynaMag™-2 Magnet and washed three times with wash buffer. The bound proteins were eluted with 50 μL 2× SDS loading buffer by incubating at 95°C for 10 min. For western blot analysis, proteins were separated by SDS-PAGE and transferred onto a nitrocellulose membrane with a semi-dry transfer cell. The membrane was blocked with 5% non-fat milk followed by primary and secondary antibodies. The chemiluminescence signal was detected using SuperSignal™ West Dura Extended Duration Substrate and a FluorChem™ Q System. The monoclonal GFP antibody was purchased from Clontech, USA. The Myc antibody and ubiquitin antibody were from Cell Signaling Technology, USA. The HY5 and COP1 antibodies were from Dr. Hongquan Yang's lab. The secondary antibodies, goat anti-mouse-HRP and goat anti-rabbit-HRP, were from Bio-Rad Laboratories. Arundo donax is a tall grass native to the lower Himalayas that invaded the Mediterranean region prior to its introduction in the Americas. It is suspected to have first been introduced to the United States in the 1700s, and to the Los Angeles area in the 1820s by Spanish settlers. Its primary use was for erosion control in drainage canals.

A number of other uses for Arundo have been identified. It is the source of reeds for single-reed wind instruments such as the clarinet and the saxophone. In Europe and Morocco, Arundo is used for wastewater treatment, including nutrient and heavy metal removal and water volume evapotranspiration. The high rate of evapotranspiration by stands of this species, a benefit in those countries, is one of the characteristics that is detrimental in the California ecosystems invaded by Arundo. By the 1990s, Arundo had infested tens of thousands of acres in California riparian ecosystems, and these populations affect the functioning of these systems in several ways. It increases the fire hazard in the dry season. The regular fires promoted by the dense Arundo vegetation are changing the nature of the ecosystem from a flood-defined to a fire-defined system. During floods, Arundo plant material can accumulate in large debris dams against flood control structures and bridges across Southern California rivers, and interfere with flood water control management. It can grow up to 8-9 m tall, and its large leaf surface area can cause the evapotranspiration of up to 3× the amount of water that would be lost from the water table by the native riparian vegetation. Displacement of the native vegetation results in habitat loss for desired bird species, such as the federally endangered Least Bell's Vireo and the threatened Willow Flycatcher. Due to the problems listed above, removal of Arundo from California ecosystems has been one of the priorities of a variety of organizations and agencies involved in the management of the state's natural resources, such as the California Department of Fish & Game and a number of resource conservation districts.
In practice, both mechanical and chemical methods of Arundo control are applied, sometimes in combination, the choice depending on timing, terrain, vegetation, and funding. The risks, costs, and effects of the different control methods were listed at the most recent Arundo and saltcedar workshop. The timing of the eradication effort can be affected by a number of factors other than the biology of the target species, such as limitations due to the bird nesting season and funding availability. Ideally, the timing of any eradication effort, chemical or mechanical, should be determined by the ecophysiology of the target species, in this case Arundo donax, rather than the calendar year. For chemical eradication, this has been recognized for some time, as stated by Nelroy Jackson of Monsanto at the first Arundo workshop: "Timing of application for optimal control is important. Best results from foliar applications of Rodeo© or Roundup© are obtained when the herbicides are applied in late summer to early fall, when the rate of downward translocation of glyphosate would be greatest." A similar statement has not yet been made for the timing of mechanical eradication methods, nor has the effect of timing on the effectiveness of mechanical eradication been identified. Mechanical eradication of Arundo can be attempted in many different manners. The most frequently used method is cutting the above-ground material, the plant's tall stems. Another method is digging out the underground biomass, the rhizomes. The cutting of stems can occur before and after herbicide applications.

The large amount of standing above-ground biomass, up to 45 kg/m², makes removal of the cut material prohibitively expensive. The costs associated with removing the large biomass of the stems have led to the use of "chippers" that cut the stems in situ into pieces of approximately 5–10 cm. After these efforts, the chipped fragments are left in place. A small fraction of the fragments left behind after chipping will contain a meristem. The stem pieces of these fragments may have been left intact or split lengthwise; in the latter case, the node at which the meristem is located will have been split as well. On many pieces with a meristem, the meristem itself may still be intact. These stem fragments might sprout and regenerate into new Arundo plants. If stems are not cut into small pieces, or removed after cutting, the tall cut stems can be washed into the watershed during a flood event. This material can accumulate behind bridges and water control structures, with possible consequences as described in the introduction. Meristems on the stems can also sprout and lead to the establishment of new stands of Arundo at the eradication project site or down river. A. donax stands have a high stem density. The outer stalks of dense stands start to lean outward because the leaves produced during the growing season push the stems in the stand apart. After this initial leaning due to crowding, gravity pulls the tall outside stems almost horizontal. Throughout this report these outside hanging stems will be referred to as "hanging stems". The horizontal orientation causes hormonal asymmetry in these stems. The main hormones involved are IAA, GA, and ethylene. The unusual IAA and GA distributions cause the side shoots developing on these hanging stems to grow vertically.
IAA also plays an important role in plant root development, and may therefore have a stimulative effect on root emergence from the adventitious shoot meristem on fragments that originated from hanging stems, an effect that would be absent in stem fragments from upright stems. In a preliminary experiment comparing root emergence between stem fragments from hanging and upright stems, 38% of the hanging-stem fragments developed roots, while none of the upright-stem fragments showed root emergence. These results indicated the need for further study into the possibility that new A. donax plants can regenerate from stem fragments with shoot meristems that might be dispersed during mechanical Arundo removal efforts. In order to apply herbicides at the time when the rate of downward translocation of photosynthates and herbicide is greatest, this time period has to be established. Carbohydrate distribution and translocation within indeterminate plants, such as Arundo, result from the balance between the supply of carbon compounds to, and the nitrogen concentration in, the different plant tissues. Carbon and nitrogen are the most important elements in plant tissues. Due to the different diffusion rates of NO3− and NH4+ in soil water versus that of CO2 in air, and differences in plant N and C uptake rates, plant growth becomes nitrogen limited earlier than carbon limited. During plant development, tissue nitrogen concentrations are diluted by plant growth, which is mainly based on the addition of carbohydrates to the tissues. When plant growth becomes nitrogen limited, the tissue will maintain the minimum nitrogen content needed for the nucleic acids and proteins that maintain metabolic function. At this low tissue nitrogen content, there is not enough nitrogen in an individual cell to provide the nucleic acids and proteins to support the metabolism of two cells, and therefore the cells cannot divide.
This means that the tissue cannot grow further until it receives a new supply of nitrogen.

SA treatment and SA deficiency conferred by NahG did not significantly impact ABA levels

The results suggest that SA responses in tomato play a less important role in defense against Phytophthora capsici than against Pst. The impact of SA and plant activators on ABA accumulation was measured in tomato roots and shoots. However, ABA accumulation in non-stressed TDL and BTH treatments trended higher than that observed in salt-stressed plants that did not receive a plant activator treatment. Protection by TDL against Pst is likely the result of a triggered SAR response and not the result of an antagonistic effect on ABA levels. The efficacy of plant activators depends on the specific diseases targeted and the environmental context, which may present additional stressors that confound defense network signaling in the plant. A challenge for successful deployment of plant activators in the field is to manage the allocation, ecological, and fitness costs that are associated with induced defenses. These costs can be manifested as reduced growth and reproduction, vulnerability to other forms of attack, and potential interference with beneficial associations. It would seem that the severity of these costs is conditioned in part by the milieu of abiotic stressors operative at any given time. Reactive oxygen species contribute to the initiation of SAR, are induced by SA and BTH, and are essential co-substrates for induced defense responses such as lignin synthesis. ROS also are important in modulating abiotic stress networks, for example in ABA signaling and response. The potential compounding effect of ROS generated from multiple stressors presents a dilemma in that the plant must reconcile these to adapt, or else suffer the negative consequences of oxidative damage for failure to do so. Paradoxically, SA and BTH also are reported to protect plants against paraquat toxicity, which involves ROS generation for its herbicidal action.
How plants balance ROS’s signaling roles and destructive effects within multiple stress contexts is unresolved and a critically important area of plant biology with relevance for optimizing induced resistance strategies in crop protection .

Although our experiments were conducted under highly controlled conditions, the results with TDL are encouraging and show that chemically induced resistance to bacterial speck disease occurs in both salt-stressed and non-stressed plants and in plants severely compromised in SA accumulation. Future research with plant activators should consider their use within different abiotic stress contexts to fully assess outcomes in disease and pest protection. These syntenies of wheat and rye chromosomes permit the formation of compensating translocations of wheat and rye chromosomes. A compensating translocation is genetically equivalent to either of the two parental chromosomes; that is, it carries all relevant genes, but not necessarily in the same order. On the other hand, homoeology between the wheat group 1S and rye 1S arms permitted induction of homoeologous genetic recombination, and thus the development of recombinants carrying much smaller segments of rye 1RS in wheat than the entire arm. Many present wheat cultivars developed by breeding for disease resistance carry a spontaneous centric rye-wheat translocation 1RS.1BL that has been very popular in wheat breeding programs. This translocation contains the short arm of rye chromosome 1 and the long arm of wheat chromosome 1B. It must have occurred by misdivision of the centromeres of the two group 1 chromosomes and fusion of the released arms, and it first appeared in two cultivars from the former Soviet Union, Aurora and Kavkaz. Rye chromosome arm 1RS in the translocation contains genes for resistance to insect pests and fungal diseases, but as it spread throughout wheat breeding programs it became apparent that the translocation was also responsible for a yield boost in the absence of pests and disease.
Besides the presence of genes for resistance and a yield advantage on 1RS, there is a disadvantage of 1RS in wheat due to the presence of the rye seed storage protein secalin, controlled by the Sec-1 locus on 1RS, and the absence of the wheat loci Gli-B1 and Glu-B3 on the 1RS arm. Lukaszewski modified the 1RS.1BL translocation by removing the Sec-1 locus and adding Gli-B1 and Glu-B3 to the 1RS arm. Lukaszewski also developed a set of wheat-rye translocations, derived from ‘Kavkaz’ winter wheat, that added 1RS to wheat arms 1AL, 1BL, and 1DL in spring bread wheat ‘Pavon 76’, a high-yielding spring wheat from CIMMYT.

Studies showed that the chromosomal position of 1RS in the wheat genome affected agronomic performance as well as bread-making quality. Using the 1RS translocation, Lukaszewski developed a total of 183 wheat-rye short-arm recombinant lines for group 1 chromosomes in a near-isogenic background of cv. Pavon 76 bread wheat. Of the 183 recombinant chromosomes, 110 were from 1RS-1BS combinations, 26 from 1RS-1AS, and 47 from 1RS-1DS combinations. Mago et al. used some of these lines to link molecular markers with rust resistance genes on 1RS. These recombinant breakpoint populations provide a powerful platform to locate region-specific genes. Wheat roots fall into two main classes, seminal roots and nodal roots. Seminal roots originate from the scutellar and epiblast nodes of the germinating embryonic hypocotyl, while nodal roots emerge from the coleoptilar nodes at the base of the apical culm. The subsequent tillers produce their own nodal roots, two to four per node, and thus contribute to the correlation of root and shoot development. The seminal roots constitute 1-14% of the entire root system and the nodal roots constitute the rest. Genetic variation for root characteristics has been reported in wheat and other crop species. Genetic variability for seedling root number was studied among different Triticum species at the diploid, tetraploid, and hexaploid levels and was found to be positively correlated with seed weight. In a hydroponic culture study in winter wheat, Mian et al. found significant genotypic differences in root and shoot fresh weights, number of roots longer than 40 cm, longest root length, and total root length. Wheat genotypes with larger root systems in hydroponic culture were higher yielding under field conditions than those with smaller root systems. Also, wheat yield stability across variable moisture regimes was associated with greater root biomass production under drought stress.
Studies in other cereal crops have associated quantitative trait loci for root traits with QTL for grain yield under field conditions. Champoux et al. provided the first report of specific chromosomal regions in any cereal likely to contain genes affecting root morphology. They reported that QTL associated with root traits such as root thickness, root dry weight per tiller, root dry weight per tiller below 30 cm, and root-to-shoot ratio shared common chromosomal regions with putative QTL associated with field drought avoidance/tolerance in rice. Price and Tomos also mapped QTL for root growth in rice, using a different population than that used by Champoux et al.

In a field study of maize recombinant lines, QTL for root architecture and above-ground biomass production shared the same location. Tuberosa et al. reported that overlap of QTL for root characteristics in maize grown in hydroponic culture with QTL for grain yield in the field under well-watered and drought regimes occurred in 8 different regions. They observed that QTL for weight of nodal and seminal roots overlapped most frequently and consistently with QTL for grain yield under drought and well-watered field conditions. Also, at four QTL regions, an increase in the weight of the nodal and seminal roots was positively associated with grain yield under both irrigation regimes in the field. There are a few reports of QTL studies for root traits in durum wheat, but none has been reported in bread wheat. Kubo et al. studied root penetration ability in durum wheat. They used discs of paraffin and Vaseline mixtures as a substitute for compact soil. Later, a QTL analysis was done for the number of roots penetrating the polyvinyl disc, total number of seminal and crown roots, root penetration index, and root dry weight. The QTL for number of roots penetrating the polyvinyl disc and for root penetration index were located on chromosome 6A, and a QTL for root dry weight was located on 1B. Wang et al. demonstrated significant positive heterosis for root traits among wheat F1 hybrids. They showed that 27% of the genes were differentially expressed between hybrids and their parents. They suggested a possible role of differential gene expression in root heterosis of wheat, and possibly other cereal crops. In a recent molecular study of heterosis, Yao et al. speculated that up-regulation of TaARF, an open reading frame encoding a putative wheat ARF protein, might contribute to the heterosis observed in wheat root and leaf growth. Rye, wheat, and barley develop 4-6 seminal roots which show a high degree of vascular segmentation.
Feldman traced files of metaxylem to their levels of origin in the maize root apex and showed their differentiation behind the root apex in a three-dimensional model. In drier environments, Richards and Passioura demonstrated that genotypes selected for narrow root xylem vessels yielded 3%-11% more than unselected controls, depending on their genetic background. This yield increase in the narrow-vessel selections was correlated with a significantly higher harvest index, as well as higher biomass at maturity and kernel number. Huang et al. reported a decrease in the diameter of the metaxylem vessels and stele with increasing temperature, which resulted in decreased axial water flow in wheat roots. The decrease in axial water flow is critical for conserving water during vegetative growth and making it available during the reproductive phase of the plant. In a recent study of root anatomy, QTL for metaxylem were identified on the distal end of the long arm of chromosome 10 of rice. In another study comparing rye DNA sequences with the rice genome, the distal end of the long arm of chromosome 10 of rice showed synteny to the 1RS chromosome arm. The 1RS.1BL chromosome is now being used in many wheat breeding programs. Rye has the most highly developed root system among the temperate cereals and is more tolerant to abiotic stresses such as drought, heat, and cold than bread wheat.

Introgression of rye chromatin into wheat may enlarge the wheat root system. Manske and Vlek reported thinner roots and higher relative root density for 1RS.1BL translocations compared with their non-translocated bread wheat checks in an acid soil, but not under better soil conditions. Repeated studies with the 1RS translocation lines of Pavon 76 have demonstrated a consistent and reproducible association between root biomass and the presence and position of the rye 1RS arm. The increased grain yield of 1RS translocations observed and reported earlier under field conditions may be due to the consistent tendency of 1RS to produce more root biomass, and also to the higher transpiration rate measured. Studies by Ehdaie et al. showed a significant increase of root biomass in wheat lines with 1RS translocations, and a positive correlation between root biomass and grain yield. All 1RS translocations with the 1A, 1B, and 1D chromosomes showed increased root biomass and branching compared with Pavon 76, and there was differential expression for root biomass among these translocation lines, with the ranking 1RS.1AL > 1RS.1DL > 1RS.1BL > Pavon 76. In Colorado, a 1RS.1AL translocation with 1RS from Amigo showed a 23% yield increase under field conditions over its winter wheat check, Karl 92. Many present-day bread wheat cultivars carry the centric rye-wheat translocation 1RS.1BL in place of chromosome 1B. Originally the translocation was thought to have been fixed because the 1RS arm of rye carries genes for resistance to various leaf and stem fungal diseases and insects. However, the translocation increased grain yield even in the absence of pathogens, and it has been shown recently that this yield increase may be a direct consequence of a substantially increased root biomass.
In sand cultures, all three 1RS translocations on 1AL, 1BL, and 1DL in ‘Pavon 76’ genetic background showed clear position effects with more root biomass and root branching over Pavon 76 .

The transcript level of the ALS3 target gene increased under AlT treatment

A previous study of the response of different Andean and Mesoamerican common-bean cultivars to AlT showed that Andean genotypes are more tolerant to this abiotic stress than Mesoamerican genotypes. Our phylogenetic analysis revealed that all the Andean genotypes present a deleted version of MIR1511 that would result in the absence of functional mature miR1511. Previous work from our group showed that common-bean miR1511 expression responds to AlT stress. Here we analyzed the regulation of miR1511 and ALS3, as well as the early effects of AlT, in roots of common-bean plants from the Mesoamerican BAT93 genotype vs. the Andean G19833 genotype, which carries a deleted MIR1511. Common-bean plantlets from the BAT93 and G19833 genotypes were grown in hydroponic conditions under either control or AlT treatments for up to 48 hrs. First, we performed expression analysis of miR1511 and the ALS3 target gene using qRT-PCR. In AlT-stressed BAT93 plants, the transcript accumulation level of mature miR1511 gradually decreased, falling by more than half at 24 hours post-treatment, while at 48 hpt it returned to values close to those of time 0. As expected, G19833 plants did not express mature miR1511. ALS3 transcript accumulation was significantly higher in G19833 roots, which lack miR1511, compared to BAT93 roots. At 6 hpt, ALS3 expression in G19833 roots almost doubled and remained unchanged up to 48 hpt, when transcript accumulation in BAT93 and G19833 roots reached similar levels. To further study the role of miR1511/ALS3 in the physiological reaction of common-bean plants to high Al levels, we aimed to overexpress the miR1511 precursor in transgenic roots.
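Relative transcript levels like those described above are commonly derived from qRT-PCR Ct values with the 2^−ΔΔCt method. A minimal sketch follows; the function name and all Ct values are hypothetical placeholders for illustration, not the study's data or method details.

```python
# Relative transcript quantification by the 2^-dCt ("delta-delta Ct") approach.
# All Ct values below are hypothetical, for illustration only.

def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target transcript in a sample vs. a calibrator
    (e.g. time 0), normalized to a reference (housekeeping) transcript."""
    delta_ct = ct_target - ct_ref              # normalize the sample
    delta_ct_cal = ct_target_cal - ct_ref_cal  # normalize the calibrator
    delta_delta_ct = delta_ct - delta_ct_cal
    return 2 ** (-delta_delta_ct)

# A drop of one normalized Ct cycle corresponds to a ~2-fold increase,
# consistent with "expression almost doubled" style statements:
print(fold_change(24.0, 18.0, 25.0, 18.0))  # → 2.0
```

Because amplification is exponential, each PCR cycle ideally doubles the product, which is why fold change is expressed as a power of two of the normalized Ct difference.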

Because stable transformation of Phaseolus vulgaris plants is, to date, nearly impossible, we chose to use BAT93 and G19833 composite plants, with untransformed aerial organs and transgenic roots. Since common bean is recalcitrant to stable transformation, this method is an alternative way to demonstrate miRNA functionality. The miR1511-overexpressing composite plants, as well as control plants transformed with the empty vector, were grown under AlT and control treatments. The expression levels of miR1511 and ALS3 were determined by qRT-PCR in roots from composite plants harvested at 48 hpt. A two-fold accumulation of miR1511 transcript was observed in BAT93 OE1511 roots from plants grown in either treatment, compared to EV. In G19833 EV roots, the absence of miR1511 was confirmed; however, a significant accumulation of mature miR1511 transcript was observed in OE1511 roots, albeit at lower levels than in BAT93 OEmiR1511 roots. In the control treatment, both genotypes showed lower expression of ALS3 in OEmiR1511 vs. EV roots. In addition, increased ALS3 transcript levels were observed in AlT-stressed roots from both genotypes, as compared to the control treatment. The primary and earliest symptoms of plants subjected to AlT stress are an inhibition of lateral root formation and root growth due to the alteration of root cell expansion and elongation, inhibiting cell division. On this basis, we analyzed the root architecture phenotype of BAT93 and G19833 OEmiR1511 and EV plants grown under AlT or control treatments for 48 h. The growth rate of root length, width, and area, as well as the number of lateral roots, was calculated as the difference between each value at 48 hpt and at time 0. The BAT93 EV plants under AlT showed decreased rates for each root parameter, indicating the drastic effect of high Al on root development.
In contrast, G19833 EV plants showed higher tolerance to AlT, evidenced by similar rates of root length, area, width, and lateral root number in stress vs. control treatments.
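The growth-rate computation described above (each root parameter at 48 hpt minus its value at time 0) can be sketched as follows; the function name and all measurements are hypothetical placeholders, not data from the study.

```python
# Growth rate of each root parameter over the 48 h assay, computed as the
# value at 48 hpt minus the value at time 0, per the text.

def growth_rates(t0, t48):
    """Per-parameter growth over the assay window (48 hpt minus time 0)."""
    return {param: t48[param] - t0[param] for param in t0}

# Hypothetical measurements for one plant (lengths in cm, area in cm^2):
t0  = {"length": 6.0, "width": 0.8, "area": 4.5, "lateral_roots": 3}
t48 = {"length": 9.5, "width": 1.1, "area": 7.2, "lateral_roots": 8}

rates = growth_rates(t0, t48)
print(rates["length"])         # → 3.5
print(rates["lateral_roots"])  # → 5
```

Comparing such rate dictionaries between AlT-stressed and control plants is one simple way to quantify the root-growth inhibition the text describes.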

These results are in agreement with those previously reported indicating a higher tolerance to AlT of Andean common-bean genotypes compared to Mesoamerican genotypes. Surprisingly, in G19833 plants genetically engineered to express mature miR1511, the effect on root phenotype was evident. The rates of root length, area, width, and lateral root number of G19833 OEmiR1511 AlT-stressed plants significantly decreased as compared to EV plants, showing reduced levels similar to those of stressed BAT93 plants. In A. thaliana, primary root growth inhibition under phosphate limitation or AlT is mediated by ALS3 and LPR1, a ferroxidase. LPR1 acts downstream of ALS3 and its expression is epistatic to ALS3 expression. To determine whether LPR1 could be involved in the different responses to AlT of BAT93 vs. G19833 plants, we measured the accumulation of LPR1 transcripts under AlT conditions similar to those of Figure 4. The transcript level of the LPR1 gene decreased under AlT treatment. In AlT-stressed BAT93 roots, the transcript level of LPR1 gradually decreased, reaching half of the initial expression at 48 hpt. In AlT-stressed G19833 roots, LPR1 expression was significantly lower compared to BAT93 roots from 6 hpt to 24 hpt. At 48 hpt, LPR1 transcript reached similar levels in roots from both genotypes. The LPR1 expression pattern was opposite to the ALS3 expression profile in AlT-stressed roots, indicating an epistatic relation between these two genes and the possible participation of LPR1, together with ALS3, in the control of common-bean root growth under AlT. To determine whether miR1511 indirectly controls LPR1 expression via the regulation of the ALS3 transcript, we evaluated LPR1 transcript accumulation in transgenic roots from OEmiR1511 and EV composite plants growing under AlT vs. control conditions.
In both BAT93 and G19833 roots, a significant increase in LPR1 transcript accumulation was observed in OEmiR1511 roots from plants grown in either treatment, compared to EV roots. Under AlT treatment, roots from both genotypes showed significantly lower LPR1 transcript levels compared to roots from the control condition.

Again, the LPR1 expression pattern was the opposite of that of ALS3 in the same transgenic root samples, indicating a probable epistatic relationship between these two genes and an indirect regulation of LPR1 expression by miR1511. In plants, microRNA genes have higher birth and death rates than protein-coding genes. For various authors, the miRNAs' rate of evolution generates a reservoir of adaptability for gene regulation. Due to this high evolutionary turnover rate, new miRNA families and members emerge, while others lose their regulatory role and disappear from the genomes of phenotypically close species or genotypes. In soybean, MIR1511 is subject to this mechanism. Htwe et al. reported two altered versions of MIR1511 alleles in some annual wild soybean genotypes, whereas no deletion was found in G. max and perennial wild soybean MIR1511. Here, we report a similar phenomenon for P. vulgaris MIR1511 genotypic variation. Only part of the MW1 subgroup of P. vulgaris Mesoamerican genotypes and all the Andean genotypes analyzed displayed a 58 bp deletion in the miR1511 precursor gene compared to the corresponding sequence of P. dumosus, P. coccineus, the PhI gene pool, and the rest of the P. vulgaris Mesoamerican genotypes. As MIR1511 is present in non-legume species, the most parsimonious hypothesis to explain the evolutionary process associated with this event is a deletion of part of the miR1511 precursor sequence. In contrast to soybean, where probably two deletion events were required for the generation of two alternative MIR1511 alleles, our results suggest a single deletion event in the common ancestor of part of the MW1 Mesoamerican subgroup and the Andean genotypes for the generation of a different allele of the miR1511 precursor gene.
This single MIR1511 deletion event hypothesis supports the Mesoamerican model proposed by Ariani and colleagues, in which the Mesoamerican gene pool is the ancestral population from which the other gene pools have derived. The fact that the PhI gene pool contains an untruncated version, as do the other closely related Phaseolus species included in this analysis, further confirms the sister-species status of the PhI gene pool, now known as P. debouckii. P. debouckii also contains ancestral, i.e., non-derived, sequences for the phaseolin seed protein and chloroplast DNA. Based on the MIR1511 phylogenetic history presented here, we propose an addendum to this model in which AW gene pool genotypes derived from one, or more, members of the MW1 Mesoamerican subgroup. A clearly distinct geographical distribution pattern was observed between the P. vulgaris genotypes featuring the MIR1511 deletion and those with an unaltered allele. The MIR1511 deletion occurred in genotypes originating from the northern and southern extreme limits of the common-bean distribution in Latin America. This distribution pattern correlates with the annual precipitation pattern reported for the American continent, indicating that genotypes with the MIR1511 deletion originated from areas with significantly less precipitation than the areas where genotypes with unaltered MIR1511 originated. Drought makes soil less suitable for agriculture; it tends to increase the soil concentration of different compounds that result in plant toxicity, including aluminum toxicity, which is an important plant growth-limiting factor. The harsh soil conditions of the areas where P. vulgaris genotypes lacking MIR1511 originated probably forced these common-bean populations to adapt and favored selection of genotypes with higher AlT tolerance. In this work, we provide experimental validation of a target gene for P. vulgaris miR1511.
We validated the miR1511-induced cleavage of the ALS3 transcript, an ABC transporter participating in Al detoxification in plants. However, additional action of miR1511 through translational repression of ALS3 cannot be excluded. Other proposed target genes for P. vulgaris miR1511 are not related to the plant AlT response and show high binding-site penalty scores, and are thus unlikely to be functional in the AlT response. Here we have provided evidence for the role of the miR1511/ALS3 node in the common-bean response to AlT.

We interpret that the MIR1511 deletion, resulting in a lack of mature miR1511, allowed common-bean adaptation to high Al in soils by eliminating the negative regulation of the ALS3 transcript and the accumulation of LPR1 in the first 48 hpt, thus increasing tolerance to AlT and favoring plant growth. Interestingly, similar characteristics hold for the soybean MIR1511-deletion case, where the origin of soybean genotypes featuring a MIR1511-altered allele is geographically correlated with areas susceptible to high Al concentration in soil due to drought in these regions. High aluminum levels in soil mainly affect plant roots; aluminum can be allocated to different subcellular structures, thus altering the growth of the principal root and the number of lateral roots. In this sense, it has been observed that AlT-stressed plants favor the transport of chelated Al to vacuoles and from roots, through the vasculature, to aerial tissues that are less sensitive to Al accumulation. In Arabidopsis and other plants, ALS1 and ALS3, from the ATP-binding cassette transporter family, are involved in Al detoxification and enhance tolerance to AlT. ALS3 is located in the tonoplast, the plasma membrane of root cortex epidermal cells, and in phloem cells throughout the plant. It has been shown that Arabidopsis als3 mutants are more sensitive to AlT, exhibiting extreme root growth inhibition compared to wild-type plants. Recent studies of the role of Arabidopsis ALS3 in root growth inhibition revealed its regulation via inhibition of the STOP1-ALMT1 and LPR1 pathways, which indirectly control ROS accumulation in roots via modulation of Fe accumulation. Furthermore, Arabidopsis ALS3 expression is induced by excess Al, a phenomenon we observed in common-bean plants as well. Common-bean ALS3 expression doubled after 6 hours under AlT in roots from G19833 plants, while in stressed roots from BAT93 plants a similar level was reached only after 48 h of treatment.
The opposite expression profile was found for the ALS3-epistatic gene LPR1 in the same samples. Our data on the different ALS3 and LPR1 expression levels in the two genotypes indicate that the absence of the negative regulator miR1511 in G19833 plants allows a faster response to AlT. Although the level of mature miR1511 decreased in BAT93 roots up to 24 h after exposure to high Al, this level seems sufficient to induce degradation of the ALS3 transcript, which showed reduced levels compared to G19833 roots, and an increase of LPR1 expression. Our analysis of root architecture in common-bean plants showed that G19833 Andean genotype plants are more tolerant to AlT than Mesoamerican BAT93 plants. These data agree with those reported by Blair et al.

ABA is therefore necessary for the stomatal closure we observe in esb1-1. The elevated ABA concentration we observe in leaves of esb1-1 compared to wild type supports this conclusion. We also used the esb1-1 sgn3-3 double mutant to test whether SGN3 is involved in initiating this leaf ABA response. In leaves of the esb1-1 sgn3-3 double mutant, the elevated expression of a set of ABA signalling and response genes observed in esb1-1 is reduced to below that of wild type. Further, the reduced stomatal aperture of esb1-1 is also recovered to wild-type levels in this double mutant. SGN3 is therefore necessary for the ABA-dependent stomatal closure in response to the defective endodermal diffusion barrier in esb1-1. This raises the question of what links detection of a break in the endodermal diffusion barrier with ABA-driven closure of stomata in the leaf. Removal of endodermal suberin in esb1-1 expressing CDEF1 revealed a significant reduction in ABA-regulated gene expression, and a tendency of stomatal aperture to increase towards wild-type levels. Thus, increased suberin deposition in the endodermis of the esb1-1 root appears to play a partial role in the ABA-controlled reduction in leaf transpiration. We have ruled out a role for local ABA signalling in controlling enhanced suberin deposition at the endodermis in esb1-1. Using a similar strategy of expressing abi1 in the endodermis, in this case using the SCARECROW promoter, primarily active in the endodermis, we also show that in esb1-1 ABA signalling at the endodermis is not promoting stomatal closure or enhanced ABA signalling in leaves. We note that pSCR is also active in bundle sheath cells, and so ABA signalling in these cells is also not involved in promoting stomatal closure in esb1-1.
Furthermore, enhanced ABA signalling in the endodermis is also not responsible for the initiation of the long-distance response of stomatal closure in leaves; again, it is more likely that suppression of ABA signalling is playing a role.

This can be seen in the fact that expression of abi1 in the endodermis, blocking ABA signalling, mimics the effect of esb1-1 on Lpr and stomatal aperture closure. However, these possibilities remain to be further explored. In contrast to these root-based or long-distance effects, the closure of stomata in leaves in response to a root-based CIFs/SGN3-derived signal is mediated by ABA locally in the leaves. We also note that the long-distance signal connecting CIFs/SGN3 in roots with reduced leaf transpiration is currently unknown. Interestingly, a root-derived peptide has recently been identified as involved in long-distance signalling: in response to drought stress, CLE25 moves from root to shoot and induces ABA accumulation in leaves and stomatal closure. Casparian strips have been suggested to play a critical role in forming a barrier to apoplastic diffusion that limits uncontrolled uptake and backflow of solutes from roots. However, most Casparian strip mutants appear to show only fairly subtle phenotypic effects, and this has been a source of continued puzzlement. Here, we show that sensing damage to Casparian strips, via leakage of the vasculature-derived CIF peptides from the stele into the cortex, triggers a mechanism that inactivates aquaporins, promotes enhanced deposition of suberin limiting solute leakage in roots, and reduces transpiration in leaves, all of which contribute to increasing solute concentration in the xylem. The overall outcome of this integrated response is a rebalancing of solute and water uptake and leakage. These physiological compensation mechanisms mitigate the loss of Casparian strip integrity, allowing relatively normal growth and development. A key part of this compensation mechanism is the ability of esb1-1 to limit water loss by the shoot by reducing stomatal aperture in an ABA-dependent manner.
This is clearly established by our observation that the esb1-1 aba1 double mutant has severely reduced growth and seed production compared to either of the single mutants, and these growth defects can be partially suppressed by the exogenous application of ABA.

The mechanisms we have identified are triggered by the loss of Casparian strip integrity. Such an event can occur during biotic stress, including root nematode infestation, and also during developmental processes such as lateral root emergence, where Casparian strips are remodelled, suberin deposition occurs, and aquaporin expression is suppressed. Here, we describe novel outputs of the CIFs/SGN3 surveillance system that couple sensing of the integrity of the Casparian strip-based apoplastic diffusion barrier at the endodermis with pathways that regulate both solute leakage and hydraulic conductivity in the root. Long-distance signals then connect these root-based responses with compensatory mechanisms in leaves which are mediated by local ABA signalling. Our discoveries provide a new framework which integrates our emerging understanding of the molecular development of the Casparian strip and suberin diffusion barriers with two of the major physiological functions required for plant survival: solute and water uptake. In recent years, California has tightened rules for reporting diversions of water for agriculture and other uses. One key challenge has been establishing workable standards for the collection of reliable data on relatively small and remote diversions, such as those for far-flung farms and ranches. Under new legislation, a certification program run by UC Cooperative Extension is helping to solve that problem. The State Water Resources Control Board views accurate diversion reporting as a key element of sound water management. “It’s incredibly important to monitor how much water comes into and goes out of the system,” says Kyle Ochenduszko, chief of water rights enforcement at the water board. Diversion reports are fed into a state database and support the orderly allocation of water resources by, for instance, enabling the board’s Division of Water Rights to inform water users when new requests to appropriate water might affect their own supply.
Since 1966, the California Water Code has required diverters of surface water, with certain exceptions, to report their diversions to the water board. But in part because the water board lacked fining authority for many years, compliance was poor. In 2009, Senate Bill 8 gave the water board the authority to fine non-compliant diverters an initial $1,000, plus $500 for each additional day of failing to report.
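The penalty structure just described is simple arithmetic: an initial $1,000 fine, plus $500 for each additional day of non-reporting. A small sketch, reading the statute as stated in the text (the function name is ours, and this illustrative reading is not legal guidance):

```python
# Fine under SB 8 as described in the text: $1,000 initially, plus $500 for
# each additional day of failing to report. Illustrative reading only.

def sb8_penalty(days_noncompliant):
    """Total fine after `days_noncompliant` days of non-reporting."""
    if days_noncompliant < 1:
        return 0
    return 1000 + 500 * (days_noncompliant - 1)

print(sb8_penalty(1))   # → 1000
print(sb8_penalty(30))  # → 15500
```

A month of non-compliance thus accumulates a five-figure fine, which illustrates why the 2009 fining authority changed the compliance calculus for diverters.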

Even so, SB 8 did not stipulate precisely how diversions were to be monitored. Rather, it required diverters to measure their diversions using the “best available technologies and best professional practices,” unless they could demonstrate that such technologies and practices were not locally cost-effective. That is, the requirement left wide latitude for interpretation. So things remained until 2015 — when Senate Bill 88 became law. This piece of legislation, passed amid a historically severe drought, directed the water board to draw up emergency regulations regarding water diversions. The regulations, once completed, required diverters of at least 100 acre-feet of water per year to hire an engineer or appropriately licensed contractor to install all monitoring devices. Now the requirements were clear. But the provision mandating installation by an engineer or contractor prompted an outcry from many smaller diverters, particularly those in remote areas of the state. For most diverters near sizable towns — Redding, say — complying with the regulations was manageable, with expenses limited to the cost of a monitoring device and the services of an installer. But diverters in remote parts of Modoc County, for example, were looking at bigger bills, says Kirk Wilbur of the California Cattlemen’s Association. For such diverters, compliance might require importing an engineer or contractor from far away, which would entail significant travel expenses. If a site lacked electricity, as many do, the costs would pile higher. So how to reconcile the interests of the state’s diverters with those of the state? How best to balance the public and the private good? The answer, it turned out, was to empower diverters to install their own monitoring devices — with UCCE playing the empowering role. The idea originated with the Shasta County Cattlemen’s Association. It gained the support of the statewide Cattlemen’s Association.
It took shape as proposed legislation in 2017 and was shepherded through the Legislature by Assemblyman Frank Bigelow. It breezed through both chambers with no votes in opposition — not even in committee. “All parties realized,” says Assemblyman Bigelow, “that Assembly Bill 589 would cut compliance costs and, as a result, increase compliance rates — which benefited both the regulators and the regulated community.” Essentially, AB 589 allows water diverters to install their own monitoring devices if they successfully complete a monitoring workshop offered by UCCE. Further, it directed UCCE to develop the workshop in coordination with the water board. Khaled Bali, an irrigation water management specialist at the Kearney Agricultural Research and Extension Center, took the lead in drafting the coursework. “Then we met with the [water] board and got feedback,” Bali says. “We made changes until they said, ‘This looks good.’” Attendees at the workshops, which last three and a half hours, gain a solid foundation in the basic principles of diversion monitoring.

They learn how to monitor flows passing through a ditch, over a weir or through a pipe — or gathering in a pond. They learn how to build or install measuring devices appropriate for each type of diversion and how to calibrate those devices to comply with the state’s accuracy requirements. They learn how to navigate the water board’s rather detailed reporting system. Equipment for monitoring flows through open ditches might be limited to a tape measure, a timing device and a floating object. Installing a monitoring device for a diversion routed over a weir — a simple dam with an edge or notch that allows overflow — requires a bit more equipment. But once the installation is complete, the diverter need only read a staff gauge that shows the height of the water spilling over the weir’s crest. Diversions flowing through pipes must be outfitted with flow meters. Diversions feeding into a pond or reservoir can be monitored by tracking the depth of the water with a staff gauge, float or pressure transducer. So far, UCCE has offered the course in about 15 locations, from Yreka to Bakersfield. According to Shasta County UCCE County Director Larry Forero — who teaches the $25 course along with Bali, Tehama County UCCE Advisor Allan Fulton and UC Davis–based UCCE Specialist Daniele Zaccaria — about 1,000 people had earned certificates of completion by early October. Even farmers and ranchers who divert less than 100 acre-feet per year are attending. “I’ve been floored,” says Wilbur, “by the number of diverters who have attended the course even though they aren’t required to — they want to better understand the regulations and make sure they’re doing the right thing.” It probably helps that the registration fee is a fraction of the cost of importing a faraway engineer.
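The arithmetic behind two of these methods is simple enough to sketch. The snippet below is an illustrative sketch, not the workshop's actual coursework: it uses common textbook formulas — the float method (channel cross-section times surface velocity, scaled by a correction factor, often taken around 0.85, because surface water moves faster than the depth-averaged flow) and the standard Francis formula for a rectangular weir, Q = 3.33 · L · H^1.5 in US customary units. The function names and the specific coefficients are assumptions; the water board's accuracy requirements may call for different values.

```python
def ditch_flow_float_method(width_ft, depth_ft, travel_ft, seconds,
                            surface_factor=0.85):
    """Estimate open-ditch flow in cubic feet per second (cfs) with the
    float method: time a floating object over a measured distance, then
    multiply cross-sectional area by surface velocity, scaled down by a
    correction factor (0.85 here is a common rule-of-thumb value)."""
    area_sqft = width_ft * depth_ft          # rough rectangular cross-section
    surface_velocity = travel_ft / seconds   # ft/s at the surface
    return area_sqft * surface_velocity * surface_factor


def rectangular_weir_flow(crest_length_ft, head_ft):
    """Estimate flow (cfs) over a rectangular weir from the staff-gauge
    head, using the textbook Francis formula Q = 3.33 * L * H^1.5."""
    return 3.33 * crest_length_ft * head_ft ** 1.5
```

For example, a float that drifts 10 feet in 8 seconds down a ditch 3 feet wide and 1 foot deep gives roughly 3.2 cfs; a 0.5-foot head over a 2-foot weir crest gives roughly 2.4 cfs.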
Due to their increasing use in a wide variety of beneficial industrial and consumer applications, ranging from fuel catalysts to chemical and mechanical planarization media, there have been increasing concerns about the potential environmental health and safety aspects of manufactured ceria nanomaterials.1,2 Ce is among the most abundant of the rare earth elements, making up approximately 0.0046% of the Earth’s crust by weight.3 For example, Ce concentrations in soils range from 2 to 150 mg kg−1.4 In Europe, the median concentrations of Ce were 48.2 mg kg−1 in soils, 66.6 mg kg−1 in sediment and 55 ng l−1 in water. There are many naturally occurring Ce-containing minerals, including rhabdophane, allanite, cerite, cerianite, samarskite, zircon, monazite and bastnasite. The existence of naturally occurring ceria nanoparticles is also likely, and they may play a key role in controlling dissolved Ce concentrations,6 but precisely how the properties of naturally occurring ceria nanoparticles compare to manufactured ceria nanomaterials is unclear. There is concern that nanoceria, due to its small particle size and enhanced reactivity by design, may present unique hazards to ecological receptor species. Of critical importance are the redox properties of ceria, which enable it to transition between Ce(III) and Ce(IV); these are key to understanding its potential toxicity. While there has been somewhat extensive investigation into the mammalian toxicity of ceria, based on the present review there has been considerably less effort invested in investigating the environmental fate and effects of nanoceria. In this critical review, we discuss the likely points of environmental release along product life-cycles and resulting environmental exposure to nanoceria, methods of detection in the environment, fate and transport, as well as the available toxicity literature for ecological receptor species.