Plant growth and yield in natural environments depend on a plethora of interactions with bacteria and fungi

To avoid mortality in the greenhouse, plants received water every three to four days, which differs greatly from natural rainfall patterns, even in the wet season. During the dry season, soil moisture is often between 30% and 80% of sample weight for marsh and low ecotone soil cores; in the high ecotone and upland locations, soil moisture accounts for 0% to 30% of sample weight. We have no data showing whether we achieved similar conditions with potting soil in the greenhouse, and regardless, we would expect more rapid drying in pots than for in situ field soil. Thus, as for all greenhouse studies, results presented here should be used with caution when predicting performance in the field. To expand on these results, the greenhouse experiment should be repeated using native marsh soil as the substrate and including higher salinity treatments. Response to treatment in marsh soil should provide a more accurate prediction of response to field conditions. Surprisingly, measured differences in water potential did not translate to plant performance. Neither growth nor survival was visibly affected by watering treatment, even in the potentially stressful low volume / high salinity treatments. Existing literature suggests that halophytes concentrate solutes to generate low tissue water potential, allowing continued passive uptake of water. In this case, low tissue water potential is not detrimental, since it prevents or reduces water deficits that can impair growth. Another possible reason for the lack of effect on growth was the timing of the experiment. We began the experiment in June, when most individuals were beginning to reproduce. Beyond this point, energy is less likely to be allocated to vegetative growth and more likely to be allocated towards reproduction or survival strategies, like salt management. In contrast, younger plants allocate the majority of their energy to vegetative growth.

Adaptations such as salt glands or specialized vacuoles are energetically expensive and require energy normally allocated to growth. Additionally, decreasing water potential has been shown to inhibit cell expansion, which would disproportionately affect young plants, since the rate of cell expansion in mature plants is already reduced. Therefore, by better aligning the experimental period with the natural growth period, and focusing on young plants, treatment effects on growth might become more apparent. D. spicata displayed the greatest variability in tissue water potential, and this variability may have been influenced by factors other than watering treatment. D. spicata was grown in shallower, wider pots in a sandier potting medium. In both volume treatments, water drained quickly through the pots, leading to uneven soil saturation that likely affected treatment efficacy and made it difficult to draw definitive conclusions regarding the large range in water potential. However, low water potential values are not uncommon for D. spicata: other authors have observed sustained, highly negative water potential used to compensate for soil salinity. The highest D. spicata mortality in our experiment occurred in the drought treatments, with three out of four deaths in the 60% seawater drought treatment. Increased drainage and evaporation rates likely also contributed to mortality for this species. E. californica was affected by both the drought and salinity treatments, which lowered water potential and had a slight negative effect on growth. Interestingly, our results contrast with those from another study. Jong measured E. californica net dry weight when irrigated with a saline Hoagland solution in sandy soil, using artificial sea salt instead of seawater. The water potential of their maximum salinity treatment was similar to our 60% seawater treatment, but the authors found that dry weight of E. californica decreased significantly as salinity increased.

This experiment used young E. californica seedlings: the first tissue harvest occurred when seedlings were one month old and continued every 8 days until all plants were harvested, with the authors noting a difference in dry weight between treatments. Since we did not observe a difference in above-ground biomass, the contrasting results may be due to the misalignment of experiment start time with the natural growth period. F. salina did not show an effect of salinity and drought stress on total plant growth, since biomass was maintained across treatments. In contrast, Barbour and Davis's results showed a decrease in F. salina's growth as salinity increased, with total mortality at approximately 89% seawater Hoagland solution. Plants in their non-saline control showed the most growth, measured by the length of the main and lateral shoots. The majority of our plants remained constant in size. The high mortality rate across treatments was driven by aphid infestation, despite attempts to control aphids with Botanigard. The highest mortality occurred in the drought, 60% seawater treatment, suggesting that stringent growing conditions may have made plants more susceptible to aphid-induced mortality. J. carnosa was the only species that added biomass between the first and final surveys. However, growth did not differ across treatments. Other studies have found mixed effects of salinity and drought treatments on growth. One study found that J. carnosa grew best in non-saline or minimally saline environments, using recently germinated individuals with stalks that extended 1–10 cm above the growing substrate. In contrast, two other studies found that J. carnosa can tolerate salinities twice as concentrated as seawater, but moderate salinity conditions were ideal. St. Omer and Schlesinger used Hoagland solution in a greenhouse experiment to determine that maximum J. carnosa growth, measured by total dry weight, occurred at about 30% – 60% NaCl, with growth decreasing above 60% salinity.

They did not record plant age. Plant age likely contributed to the differences in growth among studies, owing to the difference in energy allocation between mature and immature plants, which would have been exacerbated at higher salinity. Barbour and Davis used younger plants, which may have been more sensitive to treatment effects compared to the St. Omer and Schlesinger experiment and the results reported here. Our experimental results align more closely with those of St. Omer and Schlesinger, even though our experimental design was more similar to that of Barbour and Davis. The experiment should also be repeated with younger plants to determine if age has any effect on salinity and drought tolerance. Other experiments that used younger plants observed a decrease in growth or total biomass as salinity levels increased, contrasting with our finding that plants are largely unaffected by salinity. Seedlings are more desirable to use in revegetation operations due to the reduced propagation cost and transplant effort, so it is important to determine the range of conditions young plants can tolerate. Our experiment addressed a knowledge gap regarding halophyte salinity and drought tolerance that could inform the design of future restoration projects and experiments in Pacific coast salt marshes. Revegetation efforts often have low success rates due to the stringent abiotic conditions within the ecotone, which disproportionately affect seedlings. Furthermore, the different natural distributions of halophytes within the ecotone suggest that salinity and drought tolerance could vary among species. In our experiment, treatments had negligible effects on growth or survival; only water potential was affected. These results imply that these five species could survive anywhere within the ecotone by employing different physiological adaptations, such as succulence or salt glands, to withstand stressful conditions.
However, our results are likely not representative of plant performance in the field due to a variety of factors. The timing of our experiment did not align with the natural growth period of the plants, causing us to use mature plants rather than young seedlings. Additionally, our use of 60% seawater is not representative of the tidal inundation that some of the species may experience in the field. Therefore, future experiments will examine how these factors influence outcomes, using lessons learned during this effort. Taken together, findings from this set of experiments will allow us to 1) identify zones within the ecotone that maximize survival and establishment on a by-species basis, or 2) demonstrate that species are flexible enough to compensate for conditions across the ecotone, making careful placement of species unnecessary. In either case, these experiments will provide valuable insight to restoration practitioners. Ultimately, we hope that this work will support rapid and robust strategies to recreate thriving salt marsh systems.

The microbial community associated with roots was proposed to be assembled in two steps: first, the rhizosphere is colonized by a subset of the bulk soil community and, second, the rhizoplane and the endosphere are colonized by a subset of the rhizosphere community. Intriguingly, a set of recurring plant-associated microbes has emerged. This review focuses on how plants shape their rhizobiome. On the one hand, common factors among plants likely lead to the assembly of the core microbiome. On the other hand, factors specific to certain plants result in an association with microbes that are not members of the core microbiome. Here, we discuss evidence that plant genetic factors, specifically root morphology and root exudation, shape rhizobiomes.

Initial evidence for an influence of plant genotype on rhizobiome composition was that similar rhizobiomes assembled in association with arabidopsis and barley grown in the same experimental conditions, although they displayed different relative abundances and some specific taxonomic groups. A correlation between phylogenetic host distance and rhizobiome clustering was described for Poaceae species, distant relatives of arabidopsis, rice varieties, and maize lines, but not for closely related arabidopsis species and ecotypes. Distinct rhizobiomes were also described for domesticated plants, such as barley, maize, agave, beet, and lettuce, compared with their respective wild relatives. Interestingly, not all plants have a rhizobiome distinct from bulk soil: some species, such as maize and lotus, assembled a distinct rhizobiome, whereas other species, such as arabidopsis and rice, assembled a rhizobiome similar to bulk soil. The former species display a strong, and the latter a weak, rhizosphere effect. The cause of this phenomenon is currently unknown. The strength of the rhizosphere effect varies with the developmental stage of the plant. Similarly, root exudation and microbial communities were found to change with the age of the plant. Furthermore, distinct rhizobiomes were associated with different developmental stages of arabidopsis, rice, and Avena fatua grown during two consecutive seasons. Pioneering studies demonstrated the ability of microbes to alter plant development. Overall, it appears evident that host genotype, domestication, and plant development influence the composition of rhizobiomes. As an alternative to plant developmental stage, residence time of plants in soil was discussed as a hypothesis for successive microbiomes. These contrasting results might be partially explained by differing environmental influences, host plants, or soils, and additional work is needed to resolve these questions.
In this review, we discuss root morphology and root exudates as two genetic factors shaping plant–microbiome interactions, and we examine the following aspects: how root morphology and border cells affect rhizobiomes; how plant exudates shape the rhizobiome; and possible plant transport proteins involved in exudation. Figure 1 provides a general overview of exometabolite networks in the rhizosphere, and Box 1 illustrates the interplay between root exudates, border cells, and rhizobiomes in phytoremediation. We conclude by integrating these ideas into a possible scenario of rhizobiome assembly.

Rhizobiomes are influenced by their spatial orientation towards roots in two ways. First, the radial proximity of microbial communities to roots defines community complexity and composition, as described in recent publications and as outlined by the two-step model of microbial root colonization mentioned above. Second, the lateral position of microbes along a root shapes the community, as exemplified by early studies. Importantly, recent microbiome studies take into consideration the former, but not the latter, aspect. In this section, we discuss specific microbial associations with various root regions and the role of spatially distinct root exudation. Root tips are the first tissues that make contact with bulk soil: they are associated with the highest numbers of active bacteria compared with other root tissues, and likely select microbes in an active manner. The root elongation zone is specifically colonized by Bacillus subtilis, which suggests a particular role of this zone in plant–microbe interactions. Mature root zones feature a microbial community distinct from root tips. Their community includes decomposers, which could be involved in the degradation of dead cells shed from older root parts.

The potential for artifact introduction should be recognized

In addition, coatings, either on pristine ENMs or acquired in the test media or environment, may alter toxicity. The ENM amounts and forms driving biological impacts should be understood and related to the administered dose to inform environmental risk assessment. This is the essence of dosimetry in ENM ecotoxicology. As with other exposure concerns related to hazard assessment, appropriate dose measurement depends on receptor and ENM characteristics, which are scenario-dependent. For example, mammalian cells are harmed by ENMs that become internalized, yet uptake pathways depend on ENM characteristics. Then again, bacterial receptors that affect ecosystem-level processes may be impacted by externally associated ENMs at the cell membrane, or even in the surrounding environment. In those cases, dosimetry relies on understanding ENM behavior in the complex media in which bacteria reside, which is scenario-driven. End point observations of ENM damage will also depend on ENM processing in cells. During hazard assessments, understanding the history of biological interactions with internalized, or otherwise associated, ENMs may not be feasible. Yet efforts should be made to measure and spatially associate ENM bio-burden within biological receptors, and to examine the relationships of applied ENMs to apparent effective dose and to effects. Overall, it is not recommended to categorically exclude select conditions, environmental compartments, protocols, receptors, or end points, since any may be environmentally relevant. Rather, careful experimental designs around well-conceived, plausible exposure scenarios should be emphasized; also, ENM characteristics that influence biological responses under the dynamic conditions that occur in the environment and in biota should be characterized and quantified. One could imagine identifying key material–environment–system determinants that could be systematically varied to provide test results across relevant determinant ranges.
Such ideas are not specific to ENM ecotoxicology, but could establish defensible practices for making progress in hazard assessment while the ENM industry rapidly advances.

Mesocosms are “enclosed experimental systems [that] are intended to serve as miniaturized worlds for studying ecological processes.” While the distinctions between mesocosms and other experimental systems are not well delineated, mesocosms are generally larger experimental units and inherently more complex than benchtop microcosms or more simplified laboratory experiments. Mesocosms for ENM ecotoxicology are intended to increase the complexity of experimental systems, such that more realistic ENM physical compartmentalization, speciation, and uptake into biota can be achieved alongside biotic effects. Also, the intent is to realistically characterize ENM fates and interactions with environmental system components, and to reveal fluxes among compartments of the ecosystems responsive to internal system influences that are unconstrained by investigator interventions. Mesocosms have been used for testing relative biotic effects of ENM variants, and discerning ENM effects separately from effects of dissolution products. Mesocosm testing may occur following individual organism and microcosm studies. For example, to study how ENMs impact crops, one could first establish the potential for hydroponic plant population impacts, use soil microcosms to understand ENM bio-availability via observing soil microbial community shifts, and then scale up to greenhouse mesocosms of soil-grown crops. This sequence could provide an understanding of plant−microbe interactions, ENM transformation and uptake in plants, and effects on food nutritional quality. Still, there are relatively few published studies using mesocosms to assess ENM ecological hazards, and the design and operating variables of existing mesocosm studies are wide ranging. By contrast, wastewater-associated ENMs, and their transformations, effects, and fates in wastewater treatment plants, along with the potential for ENMs to impact WWTP processes, have been more extensively studied.
Since sewage contains ENMs, WWTPs are inherent forms of mesocosms.

Studies at entire WWTP scales elucidate ENM fates during wastewater treatment, including significant association with biological treatment biomass that becomes bio-solids. However, only 50% of bio-solids produced in the U.S. are land-applied, and these bio-solids are used on less than 1% of agricultural land in the U.S. Bio-solids are land applied even less in the European Union. Thus, knowledge of ENM fates in WWTPs and how final residues are disposed regionally are needed to develop plausible exposure scenarios. Concerns with mesocosms include factors that can be difficult to control, and that mesocosms may respond to artifacts including “wall” or “bottle” effects. Further, mesocosms can conflate direct and indirect toxicant effects, typically do not have a full complement of control conditions, and can deliver inconclusive results. Biological communities in mesocosms also lack realistic ecological interconnections, interactions, and energy flows. Nevertheless, outcomes can be improved by using carefully designed mesocosms and associated experiments. For example, combined with analyzing mesocosm samples, performing practical “functional assays”, such as for heteroaggregation, allows for anticipating phenomena and later interpreting ENM transformation and compartmentalization in mesocosms. Similarly, batch physical association experiments, if conducted using realistic components and over time frames that allow for quantifiable mass transfer, can assess ENM biomass association and readily suggest ENM fates in WWTPs. Still, hydrodynamic conditions are different in simplified tests versus mesocosms, which in turn differ from those in the natural environment. Hydrodynamic conditions will impact ENM fate and transport and thus exposure concentrations at receptor boundaries.
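Batch association experiments of the kind described above are often interpreted with a simple pseudo-first-order mass-transfer model. The sketch below illustrates that interpretation only; the function name, rate constant, and concentrations are hypothetical and not taken from any cited study:

```python
import math

def enm_biomass_association(c0, k, t):
    """Pseudo-first-order loss of suspended ENM from the aqueous phase
    to biomass during a batch contact experiment.

    c0 : initial aqueous ENM concentration (mg/L)
    k  : lumped association rate constant (1/h), fitted from batch data
    t  : contact time (h)
    Returns (remaining aqueous concentration, fraction associated with biomass).
    """
    c = c0 * math.exp(-k * t)
    return c, 1.0 - c / c0

# Hypothetical numbers: 10 mg/L ENM, k = 0.5/h, 4 h contact time
c_aq, f_assoc = enm_biomass_association(10.0, 0.5, 4.0)
```

Fitting `k` to time-series data from such a batch test is one way to build the "functional assay" parameters that later help interpret partitioning observed in mesocosms.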
The inability to capture real environmental hydrodynamic conditions at any experimental scale is a general shortcoming for both ecotoxicology and transport studies. Although mesocosms do not fully simulate real environments, they are useful and should be employed, albeit judiciously due to their resource intensity, within a strategy. Recommendations regarding using mesocosms for assessing ENM environmental hazards are provided in Table 2. Mesocosm studies must be designed and conducted around well-conceived questions related to plausible exposure scenarios; they should use select end points, potentially including sensitive omics measurements, to answer questions or test hypotheses. Internal process and constituent characterization should be thorough and equally responsive to well-conceived, realistic scenarios.

Functional assays, that is, “intermediary, semi-empirical measures of processes or functions within a specified system that bridge the gap between nanomaterial properties and potential outcomes in complex systems”, should precede mesocosm designs and experiments, and aid in interpreting mesocosm results. Mesocosm artifacts are avoidable by following best practices for design and operation, although possible interferences of particulate material testing with assays must be evaluated. As for other hazard assessments, ENMs should be tested across the product life cycle, within a motivating exposure scenario. Similarly, suitable material controls should be used to test hypotheses regarding ENM-specific effects. The recommendations made regarding exposure conditions in assessing ENM hazard potentials for model organisms should be followed for mesocosm studies. Additionally, mesocosm designs should incorporate exposure durations sufficiently long to address population growth, reproduction, bio-accumulation, trophic transfer, and possibly transgenerational effects. Sufficient measurements of ENM concentrations and time-dependent properties must be made for clear interpretations. Key to successfully interpreting mesocosm studies is using validated methods for measuring ENMs in complex media. Measurements should include the size distribution, concentration, and chemical composition of ENMs in the test system, including biological tissues, over time. In some cases, transformation products are inventoried thoroughly during long-term field-relevant exposures. Detection schemes require sample preparation to assess in situ exposures before quantitative analyses, or drying and embedding before visual confirmation by electron microscopy. Recovery methods continue to develop, such as cloud point extraction for concentrating ENMs from aqueous matrices. Depending on the exposure scenario, in situ aging may be a study objective.
However, it is important to define what “aging” really means and the specific application domain, since “aging” is a wide-ranging term and can be used in different contexts, making comparisons impossible. At a minimum, studies should be undertaken over sufficiently long time frames, which may include repeated ENM applications, such that appropriate aging, that is, time-dependent transformation under realistic conditions, can occur. Alternatively, preaged ENMs could be used. However, preaging protocols are not yet standardized and, while some convention could allow for comparing across studies, the appropriate aging protocol would depend on the envisioned exposure scenario. Cocontaminants should be considered and potentially introduced into mesocosms, since some ENMs sorb, concentrate, and increase exposure to other contaminants. Select end points should account for ENMs as chemosensitizers. Also, mesocosm study designs should anticipate and plan for measuring secondary effects. In summary, while few mesocosms have been used in assessing ENM ecotoxicity, and such systems are also rare for conventional chemical testing, they potentially offer greater realism. Still, mesocosm exposure and design considerations should derive from immediate environmental applicability. The value of mesocosms to ENM ecotoxicology can increase by following recommendations including: addressing context-dependent questions while using relevant end points; considering and minimizing artifacts; using realistic exposure durations; quantifying ENMs and their products; and considering ENM aging, cocontaminants, and secondary biological effects. Further, it should be acknowledged that mesocosms do not fully recreate natural environmental complexity. For example, aquatic mesocosms do not recreate actual environmental hydrodynamic, or temperature cycling, conditions. Hydrodynamics can significantly impact ENM aggregation or heteroaggregation, and fate and transport.
Therefore, potential impacts on the resulting concentrations at the receptor boundaries should be considered.

ENM environmental exposure conditions herein refer to where, how much, and in what forms ENMs may occur in the environment.

These are central issues for ecotoxicology of ENMs because they suggest test exposure scenarios in which ENMs could impact biological receptors within environmental compartments influenced by various factors. These issues also influence outcomes of key regulatory concern: persistence, bio-accumulation, and toxicity. Discharges underpin exposure scenarios, are initiated by situational contaminant releases, and are referred to as source terms. Mass balance-based multimedia simulations mathematically account for released contaminants as they are transported and exchanged between environmental media, where contaminants may be transformed and may ultimately concentrate, potentially with altered compositions and structures. Far-field exposure modeling approaches vary by question, the modeling purpose, the required spatial resolution, the temporal conditions, and the predictive accuracy required. Material Flow Analysis, which is a type of life-cycle inventory analysis, has been advanced to track ENM flows through various use patterns into volumes released into broad environmental compartments, scaled to regional ENM concentrations that release via WWTPs to water, air, landfills, and soil. Such models estimate exposure concentrations in part via engineering assumptions and in part via heuristics. Also, such material flow analysis models depend on underlying data that are not readily available, making it difficult to validate model results and potentially leading to inaccurate estimates. Multimedia models for ENMs can predict environmental concentrations based on sources of continuous, time-dependent, or episodic releases, and are similar to multimedia models that predict environmental concentrations of organic chemicals and particle-associated organic chemicals. For ENMs, predicting particle size distribution as affected by particle dissolution, agglomeration, and settling is desired for various spatial and temporal end points.
For one integrated MFA and multimedia model, user-defined inputs are flexible around product use and ENM release throughout material life cycles. Although validation of multimedia models is a formidable task, various components of such models have been validated, as have model predictions for particle-bound pollutants. Most far-field models of ENMs face major challenges. First, the quantities and types of ENMs being manufactured are unknown to the general public due to issues surrounding confidential business information, leading to a reliance on market research. The resulting public uncertainty will persist while nanotechnology continues a course of rapid innovation, as is typical of new industries. The rates of product use and ENM releases at all life-cycle stages are also not defined. There are challenges associated with modeling transport processes through specific media and across media, highly divergent time scales of processes, lack of required input parameters, and the need for validation of results. Several multimedia models developed for conventional chemicals could be adapted around ENMs, but few account for fate processes specific to nanoparticles. In addition, various transport models for a single medium and in the multimedia environment could be adapted for far-field analysis of ENMs, but few account for fate processes distinctive to ENMs. Moreover, their validation, which would require ENM monitoring data, is a major challenge.
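The core bookkeeping step of a material flow analysis is a mass balance that routes produced mass into compartments via transfer coefficients. A minimal sketch of that step follows; the compartment names and every coefficient are hypothetical placeholders, not published estimates:

```python
def material_flow(production, transfer):
    """Route an annual ENM production mass into environmental compartments
    using transfer coefficients -- the core bookkeeping of a material
    flow analysis (MFA).

    production : mass produced per year (t/a)
    transfer   : dict mapping compartment -> fraction of production
                 routed there; fractions must sum to 1
    Returns the mass flow (t/a) into each compartment.
    """
    assert abs(sum(transfer.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {comp: production * f for comp, f in transfer.items()}

# All coefficients below are hypothetical, for illustration only.
flows = material_flow(1000.0, {
    "wastewater": 0.60,  # down-the-drain releases reaching WWTPs
    "landfill":   0.25,  # disposal of ENM-containing products
    "soil":       0.10,
    "air":        0.05,
})
# A further hypothetical split at the WWTP: capture in bio-solids vs. effluent
biosolids = flows["wastewater"] * 0.9
effluent = flows["wastewater"] * 0.1
```

Chaining such splits across life-cycle stages is what scales release inventories to regional compartment concentrations; the uncertainty in each coefficient propagates multiplicatively, which is why the underlying data gaps noted above matter so much.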
The lack of understanding of many fundamental ENM behaviors under environmental conditions propagates into broad uncertainties, for example in predicting ENM removal to solids or aqueous fractions in WWTPs. ENM surface chemistries fundamentally affect ENM agglomeration or dispersion and likely affect bio-availability. Some species on ENM surfaces may degrade in the environment, while other adsorbates can be acquired. Carbonaceous ENMs may be transformed or degraded by environmental processes such as photo-, enzymatic, chemical, and bio-degradation. Redox and other environmental conditions will affect nanomaterial surfaces, which for nano-Ag includes formation of sulfide that inhibits dissolution. Surface chemistry also affects transformation rates of primary particles and aggregates.

Nitrate–nitrogen leaching accounted for only 15% of the applied nitrogen

As the season continued and plant uptake was reduced, excess water further mobilised nitrate–nitrogen out of the root zone, as is evident from 27/04/07 and beyond. At the end of the crop season, little nitrogen remained in the soil system, and what did remain was well beyond the reach of the plants. This nitrogen is expected to continue leaching downwards over time and become a potential source of nitrate–nitrogen loading to the ground water. Additionally, peak NO3–N concentrations in the soil profile and in drained water were significantly higher than the Australian environmental standards for the protection of 80% and of 95% of species. The NO3–N concentrations in the soil solution also occasionally exceeded the Australian drinking water quality standard for nitrate. High levels of nitrate–nitrogen below the crop root zone are undesirable, as some recharge to groundwater aquifers can occur, in addition to flow into downstream rivers, which are used for drinking water and irrigation. These findings are consistent with other studies, in which high nitrate concentrations in drainage water under drip and furrow fertigated irrigation systems have been reported. The seasonal water balance was computed from cumulative fluxes calculated by HYDRUS-2D. Estimated water balance components above and below the soil surface under a mandarin tree are presented in Table 4. It can be seen that, even in a highly precise drip irrigation system, a large amount of applied water drained out of the root zone, despite the irrigation amount being based on estimated ETc. This drainage corresponded to 33.5% of applied water, and occurred because highly permeable, light-textured soils, such as those found in this study, are prone to deep drainage whenever the water application exceeds ETc.
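Because HYDRUS-2D reports cumulative fluxes, the seasonal balance reduces to simple bookkeeping: deep drainage is the residual after uptake, evaporation, and any storage change are subtracted from irrigation plus rainfall. A sketch of that bookkeeping follows; the root water uptake of 307.3 mm is taken from the text, while the other input totals are illustrative round numbers, not the study's exact values:

```python
def water_balance(irrigation, rainfall, uptake, evaporation, delta_storage=0.0):
    """Close a seasonal soil water balance (all terms in mm).

    Deep drainage below the root zone is obtained as the residual:
        drainage = (irrigation + rainfall) - uptake - evaporation - delta_storage
    Returns the drainage depth and each sink as a fraction of applied water.
    """
    applied = irrigation + rainfall
    drainage = applied - uptake - evaporation - delta_storage
    fractions = {
        "uptake": uptake / applied,
        "evaporation": evaporation / applied,
        "drainage": drainage / applied,
    }
    return drainage, fractions

# Illustrative totals: uptake 307.3 mm from the text, the rest assumed.
drainage, fractions = water_balance(
    irrigation=500.0, rainfall=127.0, uptake=307.3, evaporation=111.0)
# drainage comes out near 209 mm, i.e. roughly a third of applied water
```

With these assumed inputs the residual reproduces the reported proportions (uptake ≈ 49%, drainage ≈ 33% of applied water), which is the sense in which the model's sink terms can be cross-checked against Table 4.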

The drainage amount in our study falls within the range of recharge fluxes to groundwater reported by Kurtzman et al. under citrus orchards in a semiarid Mediterranean climate. Mandarin root water uptake amounted to 307.3 mm, which constitutes about 49% of applied water. Root water uptake increased only slightly when the model was run without considering solute stress, which is not a significant difference. This further substantiates the results obtained for seasonal ECsw in Fig. 6, where salinity remained below threshold over the season. Evaporation accounted for 17.7% of the total water applied through irrigation and rainfall. The modelling study overestimated the sink components of the water balance by 4.79 mm. There were major differences between water input and output from January 2007 onwards. During this period, irrigation and precipitation significantly exceeded tree water uptake, which eventually resulted in deep drainage from March 2007 onwards. Therefore, current irrigation scheduling requires adjustment during this period. This illustrates how simulations were helpful in evaluating the overall water dynamics in soil under the mandarin tree. The nitrogen balance is presented in Table 5. The nitrogen fertilizer was applied either in the form of NH4+ or NO3−, but NH4+ transforms quickly to NO3− through the process of nitrification. Model simulations showed that nitrification of NH4+ was very rapid and most of the NH4+–N converted to NO3− before it moved to a depth of 20 cm, with no traces of NH4+ observed below this depth. It is apparent that the nitrification of NH4+ took place in the upper soil layer, which contains the organic matter and moisture that support the microorganisms facilitating nitrification. Though NH4+ was initially nitrified to NO2− and subsequently to NO3−, NO2− was short-lived in the soil and decayed to NO3− quickly. Therefore, the simulated plant NH4+–N uptake was only 0.71 kg ha−1.
Hence, the NO3–N form was responsible for most of the plant uptake, corresponding to about 85% of the applied nitrogen.
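The N-balance bookkeeping above can be sketched as simple mass accounting. The applied-N total below is an illustrative assumption; only the 0.71 kg ha−1 NH4+–N uptake and the ~85% NO3−–N uptake fraction come from the text.

```python
# Minimal N-balance sketch: partition applied N into NH4+-N uptake,
# NO3--N uptake, and a residual (leaching + storage + gaseous losses).
applied_n = 100.0              # kg N/ha, ASSUMED for illustration only
nh4_uptake = 0.71              # kg N/ha (from the text)
no3_uptake = 0.85 * applied_n  # ~85% of applied N taken up as NO3--N (from the text)
residual = applied_n - nh4_uptake - no3_uptake
print(round(no3_uptake, 2), round(residual, 2))
```

The residual term is what the scenario analysis below tries to shrink, since under standard practice much of it leaves the root zone as leached NO3−–N.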

The monthly N applications were slightly higher than plant uptake during the flowering and fruit growth periods. However, the monthly uptake was slightly higher than the N application between these periods. The high frequency of N applications in small doses resulted in nitrogen uptake efficiency in citrus similar to that reported in other studies. Similarly, Scholberg et al. reported a doubling of nitrogen use efficiency as a result of frequent application of N in a dilute solution. Slightly higher uptake was recorded when fertigation was applied in the second-to-last hour of an irrigation event, as compared to when it was applied early in the event. Hence, it can be concluded that the timing of fertigation does not have a major impact in a normal fertigation schedule with small and frequent N doses within an irrigation event in light textured soils. Similar results were also obtained in our earlier study in a lysimeter planted with an orange tree, which revealed that the timing of fertilizer N applications in small doses in an irrigation event with a low emitter rate had little impact on nitrogen uptake efficiency.

The monthly N balance revealed that most of the N leaching happened between March 2007 and August 2007, which was correlated with the extent of deep drainage occurring during this period. NO3–N losses ranging from 2% to 15% were reported by Paramasivam et al. and Alva et al., attributable in part to improved management of N, which could be a contributor in the current estimation. In our study, it is evident that there were significant deep drainage and nitrate–nitrogen leaching losses, which could be reduced by appropriate management. Hence, different simulations involving the reduction of irrigation and fertigation applications during the whole or part of the crop season were conducted, to optimize water and nitrogen uptake and to reduce their losses from the soil.
Increasing the irrigation frequency with short irrigation events, while maintaining the same irrigation volume, had no impact on deep drainage and N leaching.

However, the seasonal salinity increased by 11% compared to the standard practice. This confirms that the current irrigation schedule, with respect to irrigation frequency, seems to be optimal under the experimental conditions. In S2, Dr_W and Dr_N were reduced by 14.4% and 19%, respectively, but salinity increased by 11%. A sustained reduction in irrigation by 20% eventually reduced Dr_W and Dr_N by 28.1% and 38.3%, respectively, at the expense of a 4.9% decline in plant water uptake, but with a 4% increase in N uptake. However, salinity increased by 25.8% compared to the normal practice, which would likely have a significant impact on plant growth. Scenarios S4 and S5 were based on decreasing the nitrogen application by 10% and 20%, resulting in a decrease in N leaching by 7.4% and 14.8%, respectively, along with a much larger reduction in plant N uptake, suggesting that reducing the fertilizer application alone is not a viable option to control N leaching under standard conditions. A combined reduction in irrigation and fertigation by 10% further reduced N leaching by 5.5%, compared to reducing irrigation alone, but at the same time plant N uptake was reduced by 5% more than in S2. Similarly, reducing irrigation and N application by 20% produced a pronounced reduction in N leaching and water drainage, but it also resulted in a decrease in plant N uptake by 15.8% and water uptake by 4.8%, compared to normal practice. At the same time, salinity increased by 25.8%, which is similar to S3. The reduction in plant water and N uptake would have a major impact on plant growth and yield, and would adversely impact the sustainability of this expensive irrigation system. Hence, reducing fertilizer applications does not seem to be a good proposition under the current experimental conditions, as it results in an appreciable decline in plant N uptake. However, Kurtzman et al.
reported that a 25% reduction in the application of N fertilizer is a suitable agro-hydrological strategy to lower the nitrate flux to groundwater by 50% under different environmental conditions. Rather, reducing irrigation alone seems to be a better option to control the deep drainage and N leaching losses under the conditions encountered at the experimental site. Additionally, it is worth noting that in S3 and S7 the salinity between October and December at a depth of 25 cm, and during December at a depth of 50 cm, increased considerably and was higher than the threshold level, confirming that a sustained reduction in irrigation and fertigation is not a viable agro-hydrological option for controlling water and N leaching under the mandarin orchard.

However, it seems unnecessary to reduce irrigation applications uniformly across the season, as suggested by Lidón et al. Rather, irrigation could more profitably be reduced only during the particular period when excess water was applied. The water and N balance data in our study revealed that an imbalance between water applications and uptake occurred during the second half of the crop season, i.e., from January till August 2007, resulting in maximum drainage and N leaching, coinciding with the fruit maturation and harvesting stage. Hence, there is a need to reschedule irrigation within this period, rather than reducing water applications throughout the entire season. Keeping this in mind, the following five scenarios were executed, in which irrigation was reduced during the second half of the crop season, i.e., between January and August, by 10%, 20%, 30%, 40%, and 50%, respectively. Scenarios S10, S11, and S12 showed an enormous potential for reducing water and N losses. In S10, Dr_W and Dr_N were reduced by 8% and 4% more than in S7, N uptake was increased by 6.9%, and salinity was also 4% less than in S7, which seems quite promising. On the other hand, in S11 and S12, Dr_W and Dr_N were reduced to a greater extent than in S10, but soil salinity increased substantially, due to a considerable reduction in the leaching fraction. This is also shown in Fig. 12, where monthly soil solution salinity in S11 and S12 at the 25 and 50 cm soil depths increased dramatically between January and August. Although ECsw remained below the threshold level, except at a 50 cm depth in S12 during March 2007, there is a significant likelihood of it increasing further in subsequent seasons, which would ultimately impact the growth and yield of mandarin trees. Hence, under current conditions, Scenario S10 represents the best option to control excessive water and N losses and high salinity, and to increase the water and N efficiency for mandarin trees.
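The screening logic applied above can be sketched as a constrained selection: among candidate scenarios, keep those whose soil-solution salinity stays below the crop threshold, then pick the one with the least combined drainage and N leaching. All numbers below are illustrative placeholders, not the study's simulated values.

```python
# Toy scenario screen mirroring the S10-S12 comparison in the text.
# Values are ASSUMED placeholders: (deep drainage, N leaching, peak ECsw
# relative to the crop threshold, where 1.0 means at-threshold).
SALINITY_LIMIT = 1.0

scenarios = {
    "S10": (0.60, 0.65, 0.95),
    "S11": (0.45, 0.50, 1.02),  # lower losses, but salinity exceeds threshold
    "S12": (0.35, 0.40, 1.10),  # lowest losses, worst salinity
}

# Feasibility first (salinity constraint), then minimize total losses.
feasible = {k: v for k, v in scenarios.items() if v[2] < SALINITY_LIMIT}
best = min(feasible, key=lambda k: feasible[k][0] + feasible[k][1])
print(best)  # S10: the only candidate that satisfies the salinity constraint
```

The point of the sketch is that the "best" scenario is not the one with the smallest drainage: the salinity constraint disqualifies the more aggressive reductions, exactly as argued for S11 and S12 above.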
Other permutations and combinations, involving fertilizer reductions along with S10, did not provide further improvements in controlling water and N leaching. It is concluded that simulations of irrigation and fertilizer applications, using HYDRUS, can be helpful in identifying strategies to improve the water and N efficiency of drip irrigation systems for perennial horticultural crops.

Climate change, continuing population growth, and urbanization are exerting unprecedented pressure on the fresh water supply, mandating the use of nontraditional water resources, such as treated municipal wastewater, for agricultural irrigation. Treated wastewater irrigation, however, poses potential risks because a multitude of trace contaminants, including numerous pharmaceutical and personal care products (PPCPs), are incompletely removed during wastewater treatment and may enter the soil-plant continuum. Land application of biosolids and animal manure constitutes yet another route for such trace contaminants to enter agroecosystems. Once in agroecosystems, PPCPs may be translocated into edible plant parts and thus enter terrestrial food chains, including the human diet. Consequently, plant accumulation of PPCPs is raising widespread concern due to potentially deleterious effects on the environment and human health. Studies over the past decade show that various PPCPs can be taken up from soil by plants. For instance, Wu et al. detected 16 PPCPs in edible tissues of eight common vegetables grown with treated wastewater irrigation. Additional studies showed that some PPCPs can pose significant phytotoxicity, leading to inhibition of plant growth. For example, carbamazepine, an antiepileptic drug, displayed phytotoxic effects toward Cucurbita pepo at concentrations above 1 mg kg−1 in soil.

We are interested in the determination of protein structures in solution for several reasons.

By analysing the intensities of many cross peaks in the spectrum, and paying particular attention to those at the site of the gap in the backbone, we have determined that the backbone is relatively undistorted, as compared to a standard “B DNA”, even at the site opposite the gap. There does appear to be a slight overwinding of the DNA at this site, giving a slightly greater than average local helix twist angle. We have also used the temperature dependence of the NMR spectrum to examine melting in these sequences. We find that, as expected, melting occurs first as a separation of the dimer into monomers, followed at higher temperature by opening of the hairpin loop. For sequences with four GC base pairs in the overlap region and six in the stem of the hairpin loop, these melting events are separated by about 15°C, with the first transition having slower exchange kinetics than the second. We have previously carried out NMR studies of the binding of the antibiotic distamycin-A to a specific DNA dodecamer. This drug shows a strong preference for binding at AT-rich sequences, AATT in the cases we have studied in detail. From the NMR data it was possible to position the drug relative to the DNA, and to qualitatively evaluate the degree of distortion of the DNA by the drug. We have now supplemented these measurements by carrying out energy minimization calculations with the AMBER program from UCSF. A general problem with such calculations is that the minimization algorithm cannot find a global minimum of energy in a reasonable time. We find that, starting with the drug relatively far from the position determined from NMR, convergence is obtained, but to a drug binding position which is far from that experimentally determined. However, when the calculation is started near the experimentally determined structure, the energy obtained upon convergence is lower.

The coordinates obtained through such experimentally guided modelling are better than what can be determined from the NMR data alone. Presently we are trying to extend the measurements to other DNA sequences, and to analyse the differences in binding constant and kinetics of drug binding in light of the detailed structural information we now have about the bound state.

First, we expect to be able to use cystines at specific residues in proteins to predictably fold the remainder of the sequence. This expectation is based upon the observed folding in several naturally occurring peptides, which have common cystine positions. While we have previously assigned the resonances in two of these peptides, and qualitatively modeled their secondary structure, it is important to carry out a more quantitative analysis. To do this we have collected 2D NOE spectra and integrated cross peak intensities to obtain interproton distances for apamin, a small neurotoxic peptide from honey bee venom. With a relatively large number of such distances, we have carried out distance geometry calculations. This approach takes the distance estimates, including estimates of the experimental precision, and computes from these the coordinates of structures which are consistent with all of the input data. From the range of structures obtained from repeated calculations we can analyse the precision with which the structure is determined, i.e., its effective “resolution”. In a similar fashion we have begun analysis of the relationship between structure, function, and immunogenicity of protein toxins isolated from sea anemones. These again are related through having common cystine positions, and several other conserved residues. However, different toxins from this family are active against different receptors, and do not all show antibody cross reactivity.
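The distance-geometry step described above (pairwise distances in, 3D coordinates out) can be illustrated with a toy calculation using classical multidimensional scaling on an exact distance matrix. This is a simplified stand-in: real NOE-based distance geometry works with distance bounds and experimental uncertainty, not exact distances.

```python
import numpy as np

# Toy distance-geometry illustration: recover 3D coordinates from a matrix
# of squared pairwise distances via classical MDS (double centering +
# eigendecomposition of the Gram matrix).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                           # "true" coordinates (unknown in practice)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distance matrix (the "data")

n = len(D2)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                                  # Gram matrix by double centering
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:3]                          # three largest eigenvalues
Y = V[:, idx] * np.sqrt(w[idx])                        # embedded coordinates

# The embedding reproduces the input distances (up to rotation/reflection).
D2_rec = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_rec, atol=1e-8))
```

Running repeated embeddings from perturbed distance sets, as described above for apamin, gives a family of structures whose spread estimates the effective “resolution”.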

We have now fully assigned the resonances of Radianthus paumotensis toxin II, and have established that its only regular secondary structure is beta sheet, with strands connected by a variety of loops and turns. We will now begin structure calculations for this protein, and at the same time have begun assigning a related protein, Rp III. We have just obtained the sequence of Rp III through a collaboration with Prof. Ken Walsh at the University of Washington, and have already established that there is a high degree of homology in the structures. From the refined structures of proteins in a related family, such as these, we should be in a good position to look for the common structural features which give rise to the similar folding of the peptide chain, and yet be able to see the differences which lead to different activities and immunogenicity.

Phosphorus-31 NMR spectroscopy is evolving into an important means for determining the in vivo concentrations of phosphorylated metabolites, and has definite clinical implications and applications. Our previous contributions to this field demonstrated the feasibility of employing implanted radio frequency coils around organs of laboratory animals to permit eliciting NMR spectra over long periods to establish baseline spectra. Using these devices and techniques we have determined phosphorus exchange reactions in rat hearts and kidneys, in situ, and have demonstrated that there are pools of metabolic intermediates that are not directly visible in the NMR spectra. Comparison of the results from NMR spectroscopy with those obtained from radiolabeling studies on Chick Embryo Fibroblasts also showed that there are significant pools of phosphorus not visible in the conventional P-31 NMR spectrum. Both sets of studies suggest that compartmentation occurs. The invisibility of these pools is assumed to arise from immobilization of the molecules by cellular macromolecules or organelles.
The methods of solid state NMR spectroscopy are being applied to render visible these solid-like species. In particular, we use the technique of magic angle sample spinning, together with cross polarization for signal enhancement.

Application of these methods to a large number of biological phosphorylated molecules, for which crystal structure data are available, has permitted us to correlate the values of the chemical shielding tensor elements with details of chemical bonding within the phosphate moieties. Upon application to lyophilized tissue, we observe phosphorus signals attributable to phospholipid head groups. The proton spectra of lyophilized tissue, elicited with these techniques, are surprisingly rich and exhibit narrow features reminiscent of solution spectra. These narrow features are assigned to hydrocarbon chains of the membrane phospholipids of the tissue. Further support for this interpretation is provided by the C-13 spectrum of these samples, whose features are completely compatible with those of lipid chains. We interpret these surprising findings to result from the fact that at the temperature of observation, ca. room temperature, the membrane phospholipids of the tissue are in the liquid crystal state characteristic of their molecular composition. Normal functioning of the cellular membrane, as exemplified by the fluid-mosaic model, is assumed to require a high degree of dynamic mobility. That we observe such high resolution proton spectra in lyophilized tissue is indeed dramatic support for such a model.

The Chemical Biodynamics Division of LBL has inaugurated tritium NMR spectroscopy in conjunction with the establishment of the National Tritium Labeling Facility. The potential applications of TMR to problems in structural biology and biophysics are very great. They promise to extend the molecular weight range of molecules that can be profitably studied with NMR by severalfold, and will permit the study of interactions between enzymes and bound substrates, between receptors and effectors, and between proteins and nucleic acids.
This potential derives from the facts that the intrinsic sensitivity of the triton is some 7% greater than that of the proton, that there will be zero interfering background signals, and that the tritium spectrum will be sparse, arising only from those tritons at the sites specifically labeled. Importantly, the abundant protons can be decoupled from the tritons, thus reducing their contribution to resonance broadening. We are able to work at the millicurie level of activity. In a first application of TMR to a biological problem, we have observed the conversion of glucose, tritiated at the C-1 position, to lactate upon incubation with human erythrocytes. During this metabolism, additional resonances appear transiently, indicative of metabolic intermediates.

The most remarkable aspect of these spectra is the ability to observe the tritiated hydroxyl species at micromolar levels in the presence of 55 molar water, without interfering background. During June of 1986 a new 300 MHz NMR spectrometer, specifically configured for optimum utility with TMR, was installed in the laboratory.

Our objective is to develop a molecular model for chemical mutagenesis from in vitro and in vivo studies of replication and transcription of chemically modified DNA templates. Many carcinogenic as well as chemotherapeutic agents cause covalent linkages between complementary strands of DNA. Crosslinked DNA is a block to DNA replication which, if unrepaired, constitutes a lethal lesion to the cell. While DNA crosslink repair has been an area of intensive study, the molecular events of this process have not been well characterized. Genetic studies of E. coli have demonstrated that ABC excision nuclease, coded for by the three unlinked genes uvrA, uvrB, and uvrC, plays a crucial role in DNA crosslink repair. To study the molecular events of ABC excision nuclease mediated DNA crosslink repair, we have engineered a DNA fragment with a psoralen-DNA interstrand crosslink at a defined position, digested this substrate with the pure enzyme, and analyzed the reaction products on DNA sequencing gels. We find that the excision nuclease a) cuts only one of the two strands involved in the crosslink, b) cuts the crosslink by hydrolyzing the ninth phosphodiester bond 5′ and the third phosphodiester bond 3′ to the crosslinked furan-side thymine, and c) does not produce double strand breaks at any significant level. We have constructed a substrate for the ABC excision nuclease from six oligomers, which were ligated together to form a 40 bp DNA fragment containing the central 8-mer, TCGT*AGCT, in which the internal thymine is modified on the 3′ side by a psoralen derivative, 4′-hydroxymethyl-4,5′,8-trimethylpsoralen.
Both pyrone and furan monoadducts have been isolated, the latter of which reacts with the thymine in the opposite DNA strand to form a crosslink. The crosslinked 40 bp DNA fragment was then purified on a denaturing polyacrylamide gel. The crosslinked DNA was terminally labelled at the 5′, 3′, or both termini, digested with ABC excision nuclease, and the reaction products were analyzed on a DNA sequencing gel.

The genome of human cells contains approximately 10⁹ nucleotide pairs organized into a particular sequence. The faithful replication of this amount of information into each daughter cell is obviously a formidable task. One way to learn how the genome sequence is normally maintained in such a highly organized state during DNA replication is to study the factors that destabilize the replication of the genome. Our research program centers its activity on the hypothesis that much of the control of genome replication takes place at the level of initiation of DNA replication within sections of the genome. We are interested in understanding what cellular factors regulate this initiation, how this regulation breaks down in various disease states, and how external environmental stresses can lead to aberrant initiation of DNA replication resulting in increased gene copy number. Because the human genome is so large, and the mechanisms regulating its replication likely to be so complex, we have attempted to develop model systems which can help us understand these processes. One approach we have taken is to study the control of oncogenes thought to be involved in regulating the commitment of cells to DNA synthesis. Second, we have been studying the aberrant initiation of DNA synthesis that occurs during gene amplification, and finally we have been investigating the effect of chemical carcinogens on DNA replication and mutation.
We are investigating the involvement of protooncogene sequences in the regulation of initiation of DNA synthesis in cells growing in culture. Our hypothesis is that these sequences code for components the cells need to traverse the cell cycle and initiate DNA synthesis. We have focussed our attention on a member of the myc family of oncogenes, N-myc. The N-myc oncogene is amplified and/or expressed at a high level in many cell lines derived from neuronal tumors; non-neuronal cells apparently do not express this gene.

We represent this as the Lifecycle Performance Assurance Framework.

The inoculation treatments were control, indigenous mycorrhiza, G. mosseae, G. etunicatum, G. intraradices, G. caledonium, G. fasciculatum, and a mix of these species. The seedlings were grown in a greenhouse for 32 days before being transferred to the main field plots. The experimental plots were randomized with three replicates. Each crop species was tested in a separate experiment. Seedling survival, yield, and nutrient uptake were measured. Fruits were collected several times, and leaf and root samples were analyzed for nutrient content at flowering. Roots were stained and examined for the presence and degree of mycorrhizal infection according to Giovannetti and Mosse.

This document provides best practice guidance and energy efficiency recommendations for the design, construction, and operation of high-performance office buildings in India. Through a discussion of learnings from exemplary projects and inputs from experts, it provides recommendations that can potentially help achieve enhanced working environments, economic construction/faster payback, reduced operating costs, and reduced greenhouse gas emissions. It also provides ambitious energy performance benchmarks, both as adopted targets during building modeling and during measurement and verification. These benchmarks have been derived from a set of representative best-in-class office buildings in India. The best practice strategies presented in this guide would ideally help in delivering high performance in terms of a triad of energy efficiency, cost efficiency, and occupant comfort and well-being. These best practice strategies and metrics should be normalized, that is, corrected to account for building characteristics, diversity of operations, weather, and materials and construction methods. Best practices should start by using early design principles at the whole building level.

Optimal energy efficiency can be achieved through an integrated design process, with stakeholder buy-in from the beginning at the conceptual design phase. Early in the project, the focus of the stakeholder group should be on maximizing energy efficiency of the building as a whole, and not just on the efficiency of an individual building component or system. Through multi-disciplinary interactions, the design team should explore synergies between systems, such as mutually resonating strategies, or sweet spots between inharmonious strategies. Buildings are the most energy efficient when designers and operators ensure that systems throughout the building are both efficient themselves and work efficiently together. Systems integration and operational monitoring at the whole building level can help push the envelope for building energy efficiency and performance to unprecedented levels. Whole-building systems integration throughout the building’s design, construction, and operation can assure high performance, both in terms of energy efficiency and comfort/service levels. A Lifecycle Performance Assurance Framework emphasizes the critical integration between the building’s physical systems and the building information technologies. The building physical systems include envelope, HVAC, plugs, lighting, and comfort technology systems, whereas building information technologies provide information on the design and functioning of the building physical systems.
This can be done in three steps: first, by performing building energy simulation and modeling at the design phase, one can estimate the building’s energy performance and code compliance; second, by integrating controls and sensors for communications, one can track real-time performance at the building phase, relative to the original design intent; and third, by conducting monitoring-based commissioning and benchmarking during operations, one can ascertain building performance compared to peers and provide feedback loops. The next step should be assessing best practices at the systems and components level along four intersecting building physical systems: mechanical systems for heating, ventilation, and air conditioning (HVAC); plug loads; lighting; and envelope/passive systems. The qualitative best practices described in this guide offer opportunities for building designers, owners, and operators to improve energy efficiency in commercial office buildings.

Although the practices are presented individually, they should not be thought of as an “a la carte” menu of options. Rather, building systems must be integrated to realize the maximum energy and cost benefits. Designers, engineers, developers, and tenants also need to work together to capitalize on the synergies between systems. Last but not least, this guide provides tangible quantitative best performance metrics, ready to be adopted by buildings in India. These metrics are concrete targets for stakeholder groups to work toward together, by providing localized and customized solutions for each building, class, and set of occupants. Having targets early in the design process also translates into more efficient design lead times. The potential benefits of adopting these metrics include efficient operations, first-cost and life cycle cost efficiencies, and occupant comfort and well-being. The best practice strategies, if used thoughtfully, provide an approach to enabling office buildings that deliver, throughout their entire life cycle, a flexible optimization of energy consumption, productivity, safety, comfort, and healthfulness. The adoption of the qualitative and quantitative goals would provide an impetus for scale-up and market transformation toward energy-efficient processes, resources, and products, in addition to generating positive outcomes for global warming mitigation and societal benefits.

Buildings in India were traditionally built with high thermal mass and used natural ventilation as their principal ventilation and cooling strategy. However, contemporary office buildings are energy-intensive, increasingly being designed as aluminum and glass mid- to high-rise towers. Their construction uses resource-intensive materials, and their processes and operations require a high level of fossil fuel use. A large share of existing and upcoming Indian office space caters to high occupancy density and multiple-shift operations.
Whereas the average for U.S. government offices is 20 m2/occupant and for U.S. private sector offices is 30 m2/occupant, Indian offices have a typical density of 5–10 m2/occupant. Business processing office spaces have three-shift “hot seats”, a situation that, while conserving space through multiple usage, also leads to considerably higher EPI levels.

Moreover, with the increased demand for commercial office space from multinationals and IT hubs, and the current privileges being accorded to Special Economic Zones, the trend is toward larger buildings with international standards of conditioned spaces, dramatically increasing the energy footprint of Indian offices.

Building energy consumption in India has increased from 14% of total energy consumption in the 1970s to nearly 33% in 2004-2005. The gross built-up area added to commercial and residential spaces was about 40.8 million square meters in 2004-05, which is about 1% of the annual average constructed floor area around the world, and the trends show a sustained growth of 10% over the coming years, highlighting the pace at which energy demand in the building sector is expected to rise in India. In 2004-2005, the total commercial stock floor space was ~516 million m2 and the average EPI across the entire commercial building stock was ~61 kWh/m2/year. Compare this to just five years later, in 2010, when the total commercial stock floor space was ~660 million m2 and the average EPI across the entire commercial building stock had more than tripled to 202 kWh/m2/year. Energy use in the commercial sector is indeed exploding, not just due to the burgeoning of the Indian commercial sector (India is expected to triple its building stock by 2030), but also through the increase in service-level requirements and intensity of energy use. Thus there are two intertwined effects: an increase in total building area and an increase in the EPI. According to India’s Bureau of Energy Efficiency (BEE), electricity consumption in the commercial sector is rising at double the average electricity growth rate of 5%–6% in the economy.
To deliver a sustained growth rate of 8% to 9% through 2031-32 and to meet the lifetime energy needs of all citizens, India would need to increase its primary energy supply by 3 to 4 times and its electricity generation capacity about 6 times.

According to UNEP, approximately 80%–90% of the energy a building uses during its entire life cycle is consumed for heating, cooling, lighting, and other appliances. The remaining 10%–20% is consumed during the construction, material manufacturing, and demolition phases. To manage and conserve the nation’s energy, it is imperative to aggressively manage building energy efficiency in each commercial building being designed and operated in India. By increasing energy efficiency in buildings and other sectors such as agriculture, transportation, and appliances, it is estimated that total Indian power demand can be reduced by as much as 25% by 2030.
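The two intertwined effects noted above (more floor area and a higher EPI) multiply. A back-of-envelope calculation from the figures quoted in the text shows how total commercial-stock electricity use scales as floor space times average EPI:

```python
# Total annual commercial-stock energy use = floor space x average EPI,
# using the 2004-05 and 2010 figures quoted in the text.
stock_2005_m2, epi_2005 = 516e6, 61    # m2, kWh/m2/year
stock_2010_m2, epi_2010 = 660e6, 202

use_2005 = stock_2005_m2 * epi_2005 / 1e9   # TWh/year
use_2010 = stock_2010_m2 * epi_2010 / 1e9
# ~31.5 TWh in 2004-05 vs ~133.3 TWh in 2010: roughly a 4.2x increase
print(round(use_2005, 1), round(use_2010, 1), round(use_2010 / use_2005, 1))
```

A 1.28x growth in floor area combined with a 3.3x growth in EPI thus compounds into more than a fourfold growth in total consumption over five years.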

To this end, the best practices outlined below identify processes and strategies to boost energy efficiency in buildings, while also focusing on cost efficiency and occupant comfort.

Just as no two buildings are identical, no two owners will undertake the same energy management program. It is also impractical to include all the listed best practices in one building, since some of them will conflict with each other. The practices are presented individually; however, they should not be thought of as an “a la carte” menu of options. Rather, designers, engineers, developers, and tenants need to work together to capitalize on the synergies between systems. From the demand side, this means implementing a suite of measures that reduce internal loads as well as external heat gains. Once the demand load is reduced, improve systems efficiency. Finally, improve plant design. This is illustrated through the best practice strategies and data points in this guide. The supply side can then add value through the provision of renewables, waste heat sources, and other measures that are beyond this guide’s scope.

Whole-building systems integration throughout the building’s design, construction, and operation can potentially assure high performance, both in terms of energy efficiency and comfort/service levels. This Lifecycle Performance Assurance Framework was conceptualized by Lawrence Berkeley National Laboratory, USA and the Center for Environmental Planning and Technology, India through U.S. and Indian stakeholder engagements during the U.S.-India Joint Center for Building Energy Research and Development proposal to the U.S. Department of Energy and Government of India, 2011. At each stage of the life cycle, it is critical to ensure integration between the building’s physical systems and the building information technologies. The building physical systems include envelope, HVAC, plugs, lighting, and comfort technology systems.
Building information technologies, in turn, provide information on the design and functioning of the building physical systems. First, by performing building energy simulation and modeling at the design phase, one can estimate the building’s energy performance and code compliance. This is especially relevant for certain energy conservation measures that may not be immediately attractive, but may become so through further analysis. Second, by building in controls and sensors for communications, one can track real-time performance at the construction phase, relative to the original design intent. Third, by conducting monitoring-based commissioning and benchmarking during operations, one can ascertain building performance, compare it to peer buildings, and provide feedback loops. Thus the use of building IT creates metrics at all three stages of the life cycle to help predict, commission, and measure the performance of the building and its systems and components.

To design and operate an energy-efficient building, focus on energy performance based on modeled or monitored data, analyze which end uses are causing the largest consumption and waste, and apply a whole-building process to tackle the waste. For instance, peak demand in high-end commercial buildings is typically dominated by energy for air conditioning. For IT operations, however, the consumption pattern is different: cooling and equipment plug loads are almost equally dominant. The equipment plug load is mostly comprised of uninterrupted power supply load from IT services and computers, with a smaller load from raw power for elevators and miscellaneous equipment. Figure 8 shows typical energy consumption end-use pies; energy conservation measures need to specifically target these end uses. By doing so, one can tap into a huge potential for financial savings through strategic energy management.
However, a utility bill does not provide enough information to mine this potential: metering and monitoring at an end-use level are necessary to understand and interpret the data at the necessary level of granularity. Energy represents 30% of operating expenses in a typical office building; this is the single largest and most manageable operating expense in offices. As a data point, in the United States, a 30% reduction in energy consumption can lower operating costs by $25,000 per year for every 5,000 square meters of office space. Another study of a national sample of US buildings revealed that buildings with a “green rating” command on average a 3% higher rent and 16% higher selling price.
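The per-area figure quoted above is easy to sanity-check. The sketch below is illustrative only: the energy cost intensity of roughly $16.7 per square meter per year is an assumed input chosen to be consistent with the cited data point, not a value from this guide.

```python
def annual_savings(area_m2: float, cost_per_m2: float, reduction: float) -> float:
    """Annual operating-cost savings from a fractional energy-use reduction."""
    return area_m2 * cost_per_m2 * reduction

# A 30% reduction over 5,000 m2 at an assumed ~$16.7/m2-yr energy cost
# gives about $25,000/yr, consistent with the data point cited above.
print(round(annual_savings(5000, 16.7, 0.30)))  # prints 25050
```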

Using the equation-based modeling paradigm leads to multiple advantages.

However, the various uses of water are managed through separate processes, and management objectives for one use can result in sub-optimal practices for another, a problem that will be exacerbated by predicted increases in year-to-year climate variability. Without a coordinated analysis capability, the ability to predict the effectiveness of climate mitigation and adaptation measures, or to set the value of water and energy, is severely limited. In this LDRD, we will develop a computational tool and analysis framework for linked climate-water-energy co-simulation. The LDRD’s resulting research will lay the foundation for an overall regional-scale integrated assessment capability. We will develop analysis tools and software to estimate the cost of consuming water to produce energy, and the cost of consuming energy to produce water, at regional spatial scales and decade to multi-decade temporal scales; develop analytical tools to specify the performance requirements of climate models for the aforementioned water-energy capability; develop uncertainty analysis algorithms to map the trade space between model unknowns; and demonstrate the resulting tools and software by analyzing the effects of climate uncertainty on water-energy management for the American River basin and Sacramento urban region of California.

Direct chemical imaging of elemental content and impurities with extreme spatial and depth resolution and specificity is required to understand, predict, and minimize processes that adversely affect the macro-scale properties of solar and other energy systems. A fundamental lack of key analytical techniques capable of providing this information leaves a pressing need for the development of next-generation nanoscale chemical imaging tools.

The objective of this project is to develop a novel ultrafast laser spectroscopy technique based on a two near-field nanoprobe scheme, which will overcome current limitations and meet the requirements of a versatile chemical imaging system for detecting and chemically mapping defects in solar energy systems and other energy materials. This project aims to develop a sensitive femtosecond laser chemical imaging system in which both material excitation and signal detection occur in the optical near-field vicinity. This chemical imaging system will enable a fundamental understanding of the properties and functionality of new solar material systems at spatio-temporal scales that were previously unattainable. In the second year of the project, both ultraviolet and visible femtosecond laser pulses were coupled to the near-field excitation probe to obtain chemical signatures of different material systems including nanoparticles, crystalline, and amorphous materials. We demonstrated near-field visible-range fluorescence originating from ultraviolet femtosecond laser excitation in the optical near-field. Second-order diffraction was also observed in the same spectral range, enabling simultaneous femtosecond Rayleigh and femtosecond laser-induced fluorescence signal detection in the near-field vicinity with the dual-probe near-field system. We further optimized the near-field excitation and detection processes to improve sensitivity and resolution, and compared the signals from near-field excitation/far-field detection to near-field excitation/near-field detection signals from the same material system. Significant improvements in the signal-to-noise ratio were observed in the near-field/near-field configuration, despite the significantly smaller size of the excited surface area.
Finally, the potential of generating surface plasmon polaritons from a “femtosecond-laser point source” was explored in the near-field/near-field configuration at a Au/glass interface, and the signal intensity was studied as a function of inter-probe distance using visible femtosecond laser irradiation.

These results underline the importance of detecting near-field signals in the near-field vicinity as a way to achieve high-sensitivity, high-resolution chemical imaging at small spatio-temporal scales.

The purpose of this research is to build, and apply to test problems, a computational platform for the design, retrofit, and operation of urban energy grids that include electrical systems, district heating and cooling systems, and centralized and distributed energy storage. The need for this research arises because integration of renewable energy beyond 30% poses dynamic challenges on the generation, storage, and transmission of energy that are not well understood. Such a platform is also needed to assess economic benefits of integrating co-generation plants that provide combined heating, cooling, and power at the district level in order to decrease the carbon footprint of energy generation. To address this need, this project will create a flexible computational R&D platform that allows expanding energy and policy analysis from buildings to district energy systems. Questions that this platform enables one to address include where to place energy generation and storage, how to set the price structure, how to trade off incentives for energy efficiency versus incentives to add generation or storage capacity at buildings, how to integrate waste heat utilization to reduce the carbon footprint of district energy systems, and how to upgrade the electricity grid to integrate an increasing fraction of renewable energy while ensuring grid reliability and power quality. Significant accomplishments have been made in the development of multi-physics models that describe the interaction between buildings and the electrical grid. Regarding multi-physics modeling, we completed the development of more than fifty models for analyzing buildings-to-electrical-grid integration.
The models are now part of the Modelica Buildings library, an equation-based object-oriented library for modeling of dynamic building energy systems.
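As a toy illustration of the kind of building-to-grid storage question these models address, the sketch below schedules a battery to flatten a small district's load, a crude stand-in for the voltage-constrained optimal control studies performed with the library. It is plain Python with made-up numbers, not Modelica, and the greedy dispatch rule is an assumption for illustration only.

```python
def flatten_load(load, capacity, p_max):
    """Greedy peak shaving: discharge above the mean load, charge below it,
    subject to inverter power limit p_max and state of charge in [0, capacity]."""
    target = sum(load) / len(load)
    soc = capacity / 2.0          # assume the battery starts half full
    schedule = []                 # battery power: + charging, - discharging
    for demand in load:
        p = max(-p_max, min(p_max, target - demand))  # push net load to target
        p = max(-soc, min(capacity - soc, p))         # respect SoC bounds
        soc += p
        schedule.append(p)
    return schedule

# Hourly district load (kW) with an evening peak; 10 kWh battery, 4 kW limit.
load = [3, 3, 2, 2, 3, 4, 6, 8, 7, 5, 4, 3]
sched = flatten_load(load, capacity=10.0, p_max=4.0)
net = [d + p for d, p in zip(load, sched)]
print(max(net) < max(load))  # prints True: the peak is reduced
```

A real study would replace the greedy rule with an optimization over the full horizon and model the feeder voltages explicitly, which is what the Modelica-based approach enables.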

The models can represent DC and AC systems under different assumptions such as quasi-stationary or dynamic-phasorial representation. The electrical models can be connected to thermal models of buildings in order to evaluate the impact of electrical and thermal storage, as well as of building controls, on the distribution grid. The models have been validated against standard IEEE procedures defined for testing the correctness of electrical network simulation software. The models, the results of the validation, and a few examples showing the ability to perform building-to-grid simulation studies were presented at the 2014 BauSIM conference in Aachen. The paper, titled “A Modelica package for building-to-electrical grid integration,” won the best paper award. Modelica allows one to graphically connect components of cyber-physical systems that advance in time based on continuous-time dynamics, discrete-time dynamics, or event-driven dynamics, in order to study building-to-grid integration. Equation-based languages such as Modelica also allow accessing the mathematical structure of the entire model. Such information has been used for co-simulation and for solving optimal control problems. For example, we demonstrated how simulation models can be reused to solve optimal control problems by means of computer algebra and numerical methods. The problem investigated was to determine the optimal charge profile of a battery, in a small district with multiple buildings and photovoltaic systems, that minimizes energy subject to voltage constraints.

The increasing availability of complete genomic sequences and whole-genome analysis tools has moved the construction of industrial hosts towards rational design by metabolic engineering and systems biology. The current genetic manipulation tool kits available for industrial hosts, however, are desperately sparse and unpolished in comparison to the array of tools available for E. coli.
The goal of this project is to develop a high-throughput genome editing tool to facilitate the engineering of novel applications not only in E. coli, but in underexploited industrial producers such as Streptomyces coelicolor and Corynebacterium glutamicum. The original goal of this proposal was to create a secure industrial bacterium by converting all 484 TGA termination codons to TAA in the C. glutamicum genome and then reassigning TGA to encode an unnatural amino acid. In our phase I work, we discovered that the recombineering approach alone could not achieve the frequency of allelic replacement needed to complete codon depletion in a reasonable time frame. We concluded that a more efficient genome editing tool would be needed for this project. Recent work on the Clustered Regularly Interspaced Short Palindromic Repeat adaptive immune system of prokaryotes has led to the identification of a DNA endonuclease called Cas9 whose target sequence specificity is programmed by small spacer RNAs in the CRISPR loci. By editing spacer sequences we can direct Cas9 to cut endogenous DNA targets, thereby forcing cells to repair themselves in a predictably mutagenic manner. Such Cas9-mediated cleavage in vivo is more efficient, effective, and potentially multiplexable than any other tools available for genomic engineering. Our most significant accomplishment has been to develop a reproducible and efficient protocol for engineering E. coli DNA in vivo. Our method uses the Streptococcus pyogenes CRISPR-Cas9 system in combination with λRed recombineering proteins in E. coli. We have created a mobile plasmid with both Cas9 and λRed activities and used it successfully to perform genome editing in all E. coli strains at hand. This protocol has been used to modify gene loci in living E. coli cells within a three-week time frame. The developed Cas9 toolkit and protocol have already been used in several bio-energy research projects.
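The core of spacer selection for the S. pyogenes system is mechanical: Cas9 targets a roughly 20-nt protospacer that must sit immediately 5' of an NGG PAM. The sketch below illustrates that logic on the forward strand only; it is a hypothetical minimal example, not the project's toolkit, which must also handle the reverse strand and off-target screening.

```python
def find_spacers(seq: str, spacer_len: int = 20):
    """Return (position, spacer, PAM) for every NGG PAM that has room for a
    full-length protospacer immediately 5' of it (forward strand only)."""
    seq = seq.upper()
    hits = []
    for i in range(spacer_len + 1, len(seq) - 1):
        if seq[i] == "G" and seq[i + 1] == "G":       # N-G-G PAM at i-1..i+1
            pam = seq[i - 1 : i + 2]
            spacer = seq[i - 1 - spacer_len : i - 1]  # 20 nt 5' of the PAM
            hits.append((i - 1 - spacer_len, spacer, pam))
    return hits

# Toy target: a 20-nt stretch followed by a TGG PAM.
target = "ACGTACGTACGTACGTACGTATGGCA"
for pos, spacer, pam in find_spacers(target):
    print(pos, spacer, pam)
```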

We have also received requests for, and started disseminating, the toolkit and protocol to the general scientific community. We have also succeeded in developing informatics tools to aid in the design of CRISPR spacer constructs given a targeted range of genomic sequences. This tool will be useful for designing Cas9 genome edits at scale. As we had predicted, our approach provides a significantly faster turnaround time for modifying genetic codes than any available tools. We are hopeful that this method will be generally applicable to non-E. coli hosts, which will greatly aid our future goal of modifying the genetic codes of industrial microbes.

The purpose of this project is to develop sensitive and selective biosensors for a diverse set of target chemicals as a way to provide a high-throughput functional screening method for molecule production in microbial cells. Advances in DNA synthesis and combinatorial DNA assembly allow for the construction of thousands of pathway variants by varying both the gene content and the expression levels of the pathway components, a technique commonly referred to as pathway refactoring. However, a lack of sufficiently sensitive, selective, and scalable technologies to measure chemical production presents a major bottleneck that limits our ability to fully exploit large-scale synthesis efforts. We will develop and deploy novel biosensor systems based on both protein and RNA molecules that have been previously shown to respond to the presence of small-molecule ligands. In the case of protein-based sensors, we will use synthetic biology approaches to modify the ligand specificity of a known transcription factor. We will screen for ligand-dependent TF function by placing TF binding sites in front of GFP, such that GFP activation should only be observed in the presence of a ligand.
We will test the affinity and response of the TF mutant library to a variety of relevant ligands using several rounds of selection by fluorescence-activated cell sorting. Samples collected after each round of selection will be sequenced using next-generation sequencing methods, and we will seek to understand the relationship between TF ligand affinity and sequence evolution, as this will facilitate more rational engineering approaches. In the case of the nucleotide sensor, we will develop a system in which cell survival is linked to ligand production by coupling the switch to a chemical selection system used during cell growth. We will then deploy this system to screen a library of 20K pathway variants to select and further characterize high-producing E. coli strains. Selected strains will be sequenced, and we will use modeling approaches to identify the key variables and bottlenecks associated with molecule production.

Over the course of this LDRD funding, we have successfully developed proof of principle for an end-to-end system to screen for gene regulatory sequences in an unbiased manner. This work has been published in Nature Methods, and an additional small project resulting from this work has been reported in Biology Open. Briefly, we have shown that we can clone hundreds to thousands of random sequences into a precise location in the mouse genome that is linked to a reporter gene, which is activated when sequences behave as enhancers. The targeted cells can be flow-sorted to isolate those cells that are actively expressing the reporter gene, and the sequences responsible for this reporter expression can be identified through DNA sequencing. To date, we have used this method to test the embryonic stem cell enhancer activity of more than 0.5 megabases of mouse or human genomic sequence in 1-kilobase increments.
To apply this method to a broader range of cell types, a major aim of this proposal, we have coupled the ES cell reporter assays we developed with in vitro differentiation and showed that we can accurately identify enhancers active in cardiac and neuronal cell populations.

Understanding and controlling these processes remains a fundamental science challenge.

Deposition of multilayer coatings on sawtooth substrates will allow a new kind of x-ray grating, Multilayer-coated Blazed gratings, which will be the basis for a new generation of high-resolution and high-throughput x-ray instrumentation.

The flow of energy and electric charge in molecules is central to both natural and synthetic molecular systems that convert sunlight into fuels and evolves over a multitude of timescales. We address this challenge by probing chemically complex systems in the gas phase, combining the precise time information of ultrafast spectroscopy techniques with the chemical sensitivity characteristic of synchrotron radiation. An ultrafast pulse pair together with VUV or soft x-ray photons from the synchrotron is used to make measurements with atomic-site specificity. With access to photons spanning the range of terahertz to hard x-rays provided by a synchrotron, coupled with the rich spectroscopy available in the UV-VIS-IR region provided by table-top ultrafast lasers, a multi-dimensional tool to probe dynamics is enabled. We have developed a portable transient absorption apparatus to perform time-resolved analysis of two-color laser excitation schemes applicable to a variety of gaseous systems. This setup is currently deployed at the soft x-ray Beamline 6.0.2 at the Advanced Light Source, where we are interrogating the excited-state spectroscopy and dynamics of nitrophenols: here one ultrafast pulse excites o-nitrophenol while a second ultrafast infrared pulse promotes the system to a nearby vibronic state after a suitable time delay. The transmitted IR light is detected by a photodiode and a high-sensitivity photon spectrometer to determine the absorption as a function of IR wavelength and time delay.

These experiments will be performed in parallel with laser-synchrotron experiments as a complementary diagnostic tool, allowing for the precise control of the electronic states in model chromophores that is crucial to developing ultrafast laser-synchrotron multicolor spectroscopy. We have measured ion momentum images of o-nitrophenol following photoexcitation and photoionization from its electronic ground state by soft x-rays tuned near the core-level resonances of oxygen and nitrogen, at Beamline 6.0.2 at the Advanced Light Source. Ultraviolet pulses, produced from the 3rd harmonic of a Ti:sapphire laser system that is synchronized to the ALS storage ring and the 4 kHz repetition rate of the soft x-ray Beamline, were also employed in these experiments in an effort to measure the products of laser photodissociation by core-level ion momentum spectroscopy. Our subsequent improvements to the reliability of the laser systems have increased the laser pulse energies from a few hundred nanojoules to above 10 microjoules for each of the UV and IR laser beams that will be used for the three-color experiments. With all the hardware and staff in place, experiments are underway to probe the dynamics of evolving excited states in gas-phase systems.

In parallel, we have developed a dual-catalyst system to homologate alpha-olefins to tertiary amines by sequential hydroformylation and reductive amination. Hydroformylation occurs in the organic phase of the reaction medium and is catalyzed by the combination of Rh2 and BISBI, a ligand developed by Eastman Kodak for hydroformylation with high selectivity for linear aldehydes. The aldehyde intermediate condenses with secondary amine reagents to form an iminium ion, which reacts with a metal hydride to afford the tertiary amine product. Reductive amination occurs in the aqueous phase of the reaction medium and is catalyzed by the combination of Cp*Ir3 and a water-soluble diphosphine ligand.
Finally, we have prepared artificial enzymes by two methods. In the first, we prepared noble metal-porphyrin active sites in myoglobin. Based on prior reconstitution of myoglobin with both abiotic protoporphyrins and [M]-salen complexes, we incorporated new Ir-, Rh-, Co-, and Ru-based cofactors into myoglobin mutants in which the axial ligand and secondary coordination sphere are varied.

In the past year, we developed a new, highly efficient method for the generation of artificially metallated myoglobins based on the direct expression and purification of apo-myoglobins. Using these new myoglobin-based catalysts, we have shown for the first time that an artificially metallated PPIX-binding protein can catalyze organic reactions that cannot be catalyzed by the same protein binding its native Fe-PPIX cofactor. In particular, Ir-PPIX-myo catalyzes cyclopropanation of internal olefins and carbene insertion into C-H bonds, while Co-salen-myo catalyzes intramolecular hydroamination of unbiased substrates. In the second approach, we developed artificial metalloenzymes for transformations for which there are no known metal catalysts. We are doing so by a bottom-up approach in which we identify, by high-throughput screening of unrestricted metal-ligand combinations, a model reaction using reagents and conditions compatible with proteins. We then conjugate this catalytic site into a protein host, using covalent or non-covalent interactions; the catalytic properties of the conjugates are then evaluated, and the activity of the enzyme fine-tuned by modification of the ligand used. Following this methodology, we identified a metalloenzyme for regioselective halogenation of aromatic substrates. A cobalt cofactor covalently bound to nitrobindin catalyzes the halogenation of a simple, water-soluble arene.

There are two synergistic purposes to this project. The first objective is to improve our ability to understand the physical factors that are responsible for intermolecular interactions. Electronic structure calculations are nowadays capable of calculating intermolecular interactions nearly as accurately as they can be measured. However, such calculations by themselves do not provide any understanding of why the interactions have the magnitudes that they do. Methods for this purpose are called energy decomposition analyses.
It is an important open challenge to design improved EDAs, a problem that is best attacked by deepening our understanding of the factors controlling intermolecular interactions. The second objective of the project is to develop new, more efficient numerical methods for solving the equations of electronic structure theory for molecular clusters.
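For orientation, one widely used form of such a decomposition (an illustrative scheme; the specific terms vary between EDAs) partitions the intermolecular interaction energy as:

```latex
% Frozen (permanent electrostatics plus Pauli repulsion), polarization,
% charge-transfer, and dispersion contributions to the interaction energy:
\Delta E_{\mathrm{int}} = \Delta E_{\mathrm{frz}} + \Delta E_{\mathrm{pol}}
                        + \Delta E_{\mathrm{CT}} + \Delta E_{\mathrm{disp}}
```

A key design question is how correlation corrections to the electrostatic term are apportioned among these contributions.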

There should be natural connections between new EDA tools and the problem of computing those interactions more efficiently than has hitherto been possible. We believe the combination of improved EDAs for analysis together with lower-scaling algorithms for calculating the interactions will be a potentially significant step forward in quantum chemistry. The electron-electron correlation energy is negative, and attractive dispersion interactions are entirely a correlation effect, so the contribution of correlation to intermolecular binding is commonly assumed to be negative, or binding in nature. However, we have discovered that there are many cases where the long-range correlation binding energy is positive, and therefore antibinding, with certain geometries of the water dimer as a prominent example. We have also uncovered the origin of this effect, which is the systematic overestimation of dipole moments by mean-field theory, leading to reduced electrostatic attraction upon inclusion of correlation. Thus, EDAs that include correlation but do not correct mean-field electrostatics are sub-optimal, especially those that describe all of the correlation energy as dispersion. This result has major implications for the correct design of new EDAs, which we look forward to taking up in future post-LDRD work. Our second major activity has been exploring new ways of using the natural separation of energy scales between intramolecular and intermolecular interactions to improve the efficiency of electronic structure theory calculations. Specifically, we have explored whether coupled cluster calculations can be accurately approximated by a starting point where the CC calculation is performed on only the intramolecular excitations, or on intramolecular plus dispersive intermolecular excitations. The remaining contributions are then evaluated approximately by perturbation theory.
The question is whether this approach can improve the often-questionable accuracy of PT, without the prohibitive computational cost of a full CC calculation on a molecular cluster. Our results indicate that PT based on the linear model does not significantly improve upon direct use of PT, while the quadratic model does yield significant gains in accuracy. Work is presently underway to explore whether this result can be improved by using orbitals relaxed in the cluster environment, and how to obtain such orbitals more efficiently than brute-force solution as if the cluster were a supermolecule.

The purpose of this project is to develop a powerful theoretical framework capable of discovering general design rules based on nanoscale properties of molecule shape and size, charge distributions, ionic strength, and concentration to influence the mechanism, percolation, morphology, and rates of assembly over mesoscale time and length scales. The ability to control the structure and dynamics of the assembly process is a fundamental problem that, if solved, will broadly impact basic energy science efforts in nanoscale patterning over mesoscale assemblies of block copolymer materials, polyelectrolyte organization at solid or liquid interfaces, forces governing multi-phasic soft colloids, and growth of quantum dots in polydisperse colloidal media. Fundamental design rules applied to complex and heterogeneous materials are important to DOE mission science and will enable next-generation fuel cells, photovoltaics, and light-emitting device technologies. At present, our ability to design and control complex catalytic activity by coupling simpler modular systems into a network that executes novel reactive outcomes is an unsolved problem.
And yet, highly complex catalytic processes in nature are organized as networks of proteins or nucleic acids that optimize spatial proximity, feedback loops, and dynamical congruence of reaction events to optimize and fine tune targeted biochemical functions.

The primary intellectual activity of bio-mimetic scaffolding, the design of spatial organizations of modular bio-catalysts, is to restore their catalytic power in these new chemical organizations after they have lost the catalytic functions optimized in their separate biological contexts. That is our goal. Some inspiration for our approach to catalytic network design is derived from another highly successful bio-mimetic approach, laboratory directed evolution, an experimental strategy based on the principle of natural selection. The goal is to alter the protein through multiple rounds of mutagenesis and selection to isolate the few new sequences that exhibit enhanced catalytic performance, selectivity, or protein stability, or to develop new functional properties not found in nature in the creation of new bio-catalysts. Given the limitations of our understanding of the structure-function relationship, LDE provides an attractive alternative to rational design approaches and is highly flexible in application to different bio-catalysis reactions. However, there are still outstanding problems when transferring LDE into new optimization strategies for new bio-catalysts. First, the finite size and composition of the LDE libraries may be limiting for the optimization of enzymes that act on, for example, solid substrates, and there has been little effort devoted to developing LDE libraries for optimizing bio-catalytic activity in the context of chemical networks. Furthermore, although often highly successful, LDE is an opaque process because it offers no rationale as to why the mutations were successful, and therefore stands outside our ability to systematically reach novel catalysis outcomes. This proposal is a theoretical study to offer new rational design strategies for building an artificial chemical network of bio-catalytic reactions that executes complex but non-biological catalytic functions using computational directed evolution.
Traditionally, enzyme optimization is focused on the energetics of active-site organization, but there is correspondingly little effort directed toward optimizing entropic or dynamical effects that are equally relevant for improvements in catalytic activity. Therefore we propose a new CDE design strategy that considers not only energetics but also novel physical and theoretical concepts.

Recent studies report evidence that some organic aerosols might exist in the atmosphere not as well-mixed liquids – the traditional description, and their general state when they are formed – but rather as highly viscous, glassy materials with extremely slow internal reaction-diffusion times and low evaporation rates. These observations suggest that the characteristics of organic aerosols currently used in regional and global climate models are fundamentally incorrect: viscosity affects reactivity, and indeed, the models consistently under-predict the quantity of aerosol in the atmosphere by factors of 5 to 10. We are addressing this gap by developing a quantitative and predictive description of how initially liquid aerosols are transformed into glassy ones, in particular by gas-phase oxidizers. Reaction-diffusion models that are chemically accurate and fully validated by experimental data have not been previously used in this field, and hold promise for improving parameters for atmospheric models. Model simulations are performed using stochastic methods, which are well suited to large dynamic ranges of conditions and capture the fluctuations and rare events key to liquid-solid transitions.
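The stochastic approach can be illustrated with a minimal Gillespie-style simulation of a single pseudo-first-order oxidation step. This toy model, with made-up rate constants and molecule counts, is not the project's validated reaction-diffusion code; it only shows how event-driven stochastic kinetics are sampled.

```python
import random

def gillespie_decay(n_a: int, k: float, t_end: float, seed: int = 0) -> int:
    """Simulate A -> P with propensity k * n_a (e.g. oxidation of liquid-phase
    molecules A under fixed oxidant exposure); return A remaining at t_end."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        rate = k * n_a                 # total propensity of the single channel
        t += rng.expovariate(rate)     # exponentially distributed waiting time
        if t > t_end:
            break
        n_a -= 1                       # one A molecule is oxidized
    return n_a

# 1000 molecules at k = 1/s: averaged over seeds, the remaining population
# approaches the deterministic value n0 * exp(-k * t).
print(gillespie_decay(1000, k=1.0, t_end=2.0))
```

Fluctuations about the deterministic mean, and rare trajectories, come for free in this formulation, which is why stochastic methods suit the liquid-to-glass transitions described above.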

How can a food system be characterized as agroecological?

State institutions, responsible for managing natural and socioeconomic disasters, can create favorable or adverse conditions for the recovery of the productive capacity of an agroecosystem. In this respect, some institutions favor the resilience of an agroecosystem more than others. In contrast to private or simply state property, communal forms of ownership, characteristic of traditional rural cultures, result in management approaches that adapt more easily to surprises or changes experienced by ecosystems.” This emphasis on institutions and the resilience dimension suggests stronger links between agroecology and fundamental environmental, ethical, political, and governance-related questions about the right and access to land and other natural resources and ecosystem services, such as water, soil, forests, and pollinators. It also underlines the importance of wider disciplinary and practical perspectives, such as landscape agroecology and the process of landscape planning in rural as well as linked rural–urban settings. Wezel and co-authors emphasize the relevance of working with “agroecology territories” in a more holistic framework combining sustainable agriculture and food systems as well as addressing biodiversity conservation, as places actively engaging in the transition to sustainable farming and food systems. The agroecosystem concept and the science of agroecology provide a foundation for examining and understanding the interactions and relationships among the diverse components of the food system. There is a clear and indisputable link between how food is produced and how it goes into the food system. Stassart and co-authors emphasized ways in which agroecological systems could expand to a broader level, suggesting greater valorization of agrobiodiversity and the underlying diversity of knowledge found in both farming and food systems, while providing broader perspectives of agroecology in both farming and food systems.

Logically, food cannot be claimed to be “sustainable,” even when produced in a “sustainable way,” if it feeds into and contributes to food systems which are fundamentally unsustainable, for example, systems that depend on huge amounts of fossil fuels or packaging material, increase social inequity, or are wasteful of other tangible and intangible resources. Sustainability has multiple dimensions, and as emphasized by Gliessman : “A sustainable food system is one that recognizes the whole systems nature of food, feed and fiber production in balancing the multifaceted concerns of environmental soundness, social equity, and economic viability among all sectors of society, across all nations and generations.” Gliessman writes, with a background of 15 years of experience with an agroecology course, about the constraints of earlier framings of agroecology only as a science: “… they are primarily trying to make an argument that agroecology is basically a science for developing new food production technologies that do a lot of positive things for agriculture, the environment, and for people. This is good, but what they don’t seem to acknowledge is that agroecology is also a social movement with a strong grounding in the science of ecology.
And when I say strong grounding in ecology, I mean grounded in our understanding of relationships, interactions, co-evolution, and a capacity to change to meet the complex aspects of the sustainability we are trying to achieve in food systems – from local to global.” Gliessman mentions five important elements of an alternative food system: “In such a system food production and consumption has a bio-regional basis; the food supply chain has a minimum number of links; farmers, consumers, retailers, distributors, and other actors exist in the context of an interdependent community and have the opportunity for establishing real relationships; opportunities exist for the exchange of knowledge and information among all those who participate in the food system; and the benefits and burdens of the alternative food system are shared equally by all participants.

These aspects of an alternative food system are closely interrelated.” The linkages between agroecology and food sovereignty receive wide acknowledgement and detailed explanation from agroecological and food sovereignty movements , which view agroecology as a major catalyst for enabling the realization of the agrarian reform called for by the food sovereignty movements. These movements focus upon principles of low-input use, resilience, and sustainability, as well as their prioritization of smallholders or peasant farmers . Food sovereignty and agroecology are also strongly united through their agency for and common defense of what are claimed as the common inheritances of humanity in terms of natural resources. Altieri and Nicholls demonstrate how different dimensions of sovereignty, including food, energy, and technological sovereignties, are all critical to agroecology and contribute to its resiliency. Table 1 suggests how linkages between key features of agroecology on a wider scale can be brought into important functions and structures of entire food systems. Multi-functionality and resilience are highlighted by numerous agroecological scholars and address agroecological systems’ capacities and aims . These scholars assess system properties such as the ability to absorb shocks, and other inherent capacities to undergo relevant transformations, transitions, and processes of stabilization under changing and new conditions through feedback loops and iterative development processes .
Resilience is a relevant key concept which potentially informs the design and maintenance of an agroecological food system, which can build upon local market structures, linking reciprocal flows, for example, between urban and rural landscapes, preserving food cultures and nourishment, and opening new possibilities for processing, storing, and retailing. This holistic understanding of health and the importance of maintaining a high level of immunity is also relevant for food systems, where feedback loops, like immune system responses, are imagined to help regulate resource flows, stimulate social connectedness in the food system, and emphasize the nourishment aspect of the food which is produced, exchanged, and eaten in the food system. Nourishment is an important characteristic of food produced under circumstances which nourish the soil and environment, but also within a food system which aims at composing our entire diets as a “sustainable diet,” as defined by FAO: “those diets with low environmental impacts which contribute to food and nutrition security and to healthy life for present and future generations.

Sustainable diets are protective and respectful of biodiversity and ecosystems, culturally acceptable, accessible, economically fair and affordable; nutritionally adequate, safe and healthy; while optimizing natural and human resources” . In addition to the established four aspects of food security , and in connection with the institutional framework and governance of food, the Ryerson University Centre for Studies in Food Security adds a fifth dimension of food security, namely “agency,” which multiple examples and cases point to as the most critical factor for all aspects of food security , and which highlights equity as an important pillar of agroecological food systems. This also links to “nourishment” as a concept which goes far beyond “providing passive populations with calories,” focusing instead on peoples’ ability, access, and right to grow, exchange, and eat healthy, nutritious food which is meaningful to them, in a fair and equitable way . Potentials in the agriculture and food systems that link urban and rural areas need to be maximized as a normal part of a balanced development process. City Region Food Systems has been referred to as a cutting-edge concept . In this article, we understand a city-region context for food systems as a landscape which includes rural, urban, and peri-urban areas, the latter two varying from a few thousand persons to many million people , which of course will call for widely different place-based and context-relevant solutions. The increasing and partly unplanned urbanization has led to significant changes in diets, consumption patterns, and food trade , and in many urban areas, food markets are detached from local or domestic food production. In addition, huge amounts of so-called waste are produced, both in terms of food waste from processing and from ensuring availability of a wide range of food at all times for eaters, as well as waste based on non-renewable resources .
The fact that we talk about “waste” underlines the detachment from food production and farming, soil management, animal keeping, and resource cycles, a detachment which was not present just 100 years ago. These issues are addressed by the first two points in Table 1, which are strongly interlinked and call for minimal external inputs and recycling of resources and biomass . In a city-region context, this clearly calls for a reorganization of resource cycles and avoidance of losses of energy, water, and nutrients in a combined rural–urban landscape. While the linkages between rural and urban areas are in some cases facilitated by local governance systems, in terms of markets linking, for example, smallholder farmers with urban markets , the creation of full resource cycles, including, for example, compost material flowing from cities back to the soil of rural areas, seems to be rarely addressed. Such cycles could involve human food waste being converted into animal feed and compost, energy in terms of bio-fuels produced from what normally would be considered organic waste, minimization of plastic and packaging, and systems involving human urine and feces being composted and/or recycled in safe and responsible ways. Indeed, such agro-waste-recycling systems enabled Paris to rely on its local food shed for over 1,000 years . The system boundaries in a city-region food system cannot be clearly defined, and a “completely closed food system” would be unlikely, even for a contextualized food system shaped and iteratively co-created by multiple involved actors and based on recycling and closed-loop principles.
Referring to the four-dimensional sustainability concept including environmental, social, economic, and institutional levels, as described by Valentin and Spangenberg , Spangenberg and FAO , an agroecological food system in a city-region context will consist of a complex web of smaller food systems, for example, involving CSAs, urban, and peri-urban farming and a number of different supply chains and levels of organization, which interact and overlap internally as well as with surrounding landscapes and food systems.

Most likely, products from other geographic and climatic zones, for example, coffee and spices, will be involved, and the inclusion of surrounding marine or other landscape elements further blurs apparently clear system boundaries. Furthermore, vulnerability to local shocks raises the general need for crisis-preparedness and will always call for a certain ability of food systems to step in and assist one another in case of failing harvests or natural disasters, making wider connections between food systems desirable. Trade and transport between different food systems can be organized in ways which are equitable and not environmentally burdensome, and can supplement local food systems rather than displace local produce. These aspects need to be considered if the aims and characteristics of agroecological food systems are to be taken seriously. Méndez and co-authors discussed transformative agroecology and stated that agroecology is explicitly committed to a more just and sustainable future by reshaping power relations from farm to table. In our contextualization of agroecological food systems, we see the need to explore how the food system can be connected in whole cycles, that is, from table to farm as well. As mentioned above, Gliessman discusses what “our food system” would look like if transformed so that it follows the basic thinking of agroecology. This is envisioned as unfolding across five potential levels of transformation, where the first three address agroecosystem changes, and levels four and five target the formation of more local and global food systems, respectively. Level four targets local-level food systems and the creation of the above-mentioned “food citizenship,” where food is grounded in a direct relationship between eaters and growers.
Level 5, however, targets a wider change: “… build a new global food system, based on equity, participation, democracy, and justice, that is not only sustainable, but helps restore and protects earth’s life support systems upon which we all depend” . This vision for integrating webs of different food systems – whilst emphasizing the importance of fairness throughout the systems – becomes of high relevance in complex and multifunctional city-region food systems. There is much evidence of severe negative long-term environmental and social effects of our current globalized food system, feed and livestock production being one example . The ideas of agroecological food systems present alternatives to this, among other things by contributing to local economic and resource circulation and inclusive, equitable food systems. Such systems should perhaps be described as “socio-agroecological food systems,” emphasizing the closely woven social, agroecological, and ecological interactions, for example, in terms of networks involving both farmers and non-farmers and between actors in the regions, no matter whether we speak of ecological or political zones.

Another effective natural enemy group is the below-ground invertebrates and microbes

In Toronto, surveys showed that besides the typical local vegetables , farmers grew an additional 16 vegetable crops to supply the local community with foods unavailable in local grocery stores. These crops included Asian vegetables, such as bok choy, long bean, hairy gourd, and edible chrysanthemums, and substantially increased the vegetative diversity of the urban garden system . Plant diversity is a principal predictor of insect diversity at small spatial scales , and plant diversity and small-scale structural complexity are important for tree-dwelling arthropods , ground-dwelling arthropods , web spiders , grasshoppers, bees, and ground-dwelling beetles . However, arthropod species richness has been shown to decrease with increasing impervious surface and intensive management in urban green areas, and intensive UA would presumably have a negative impact on species richness . In a study of urban backyard gardens in Toronto, invertebrate abundance and diversity were enhanced as the number of woody plant structures and plant species diversity increased, and backyard gardens had higher abundances of winged flying invertebrates when compared with urban grasslands and forests . Likewise, within domestic gardens in the UK, invertebrate species richness was positively affected by vegetation complexity, especially the abundance of trees . In Pennsylvania, butterfly diversity increased with native plantings within suburban gardens , and parasitoid diversity increased with floral diversity within urban sites . Because allotment gardens often exhibit a rich abundance of flowering plants and thus a prolonged season for nectar supply, they can support urban pollinators for long periods of time .
In a survey of 16 allotment gardens in Stockholm, the number of bee species observed per allotment garden ranged between 5 and 11, including a large number of bumble bees, which were observed on a total of 168 plant species, especially those in the Lamiaceae, Asteraceae, Fabaceae, Boraginaceae and Malvaceae . In a survey of gardens in Vancouver, a mean richness of 23 bee species was found across the different garden types sampled .

Similarly, community gardens in NYC were found to provide a range of ornamental plants and food crops that supported 54 bee species, including species that nest in cavities, hives, pith, and wood . In another study in NYC community gardens, the authors found that butterflies and bees responded to sunlight and floral area, but bee species richness also responded positively to garden canopy cover and the presence of wild/unmanaged areas in the garden . In Ohio, bee abundance in private, backyard gardens increased with native plantings, increases in floral abundance, and taller herbaceous vegetation . Overall, these studies support the idea that UA management with high vegetation diversity can have positive effects on invertebrate biodiversity in urban systems. Wildlife-friendly features implemented in gardens can increase vertebrate diversity . Practices such as planting fruit/seed-bearing plants, limiting the use of pesticides and herbicides, and constructing compost heaps and bird tables increase bird and vertebrate abundance and diversity . Numerous avian studies have also shown that gardens with sufficient native vegetation can support large populations of both native and exotic bird species at the local level , and at the landscape level, garden heterogeneity can increase the overall diversity of insectivorous birds . Heterogeneity that includes native plant species may be particularly important, as studies of suburban gardens in Australia show that nectarivorous birds prefer native genera over exotic genera as foraging sites . For non-avian vertebrates, garden size and management style are critical for persistence in urban areas. Baker and Harris reported 22 mammalian species/species groups recorded in garden visitation surveys within the UK; however, mammal garden use declined as housing became more urbanized and garden size and structure decreased.
Key findings from a range of garden studies show that in addition to high cultivated floral diversity, the three-dimensional structure of garden vegetation is an important predictor of vertebrate abundance and diversity .

Increases in the vegetation structure and genetic diversity of domestic garden habitats have been shown to improve the connectivity of native populations currently limited to remnants and aid in the conservation of threatened species . For example, one study in Latin America documented that garden area and tree height were positively related to the presence and abundance of iguanas within urban areas, and increased patio extent allowed for greater iguana movement across the urban landscape . In addition to habitat quality, habitat connectivity may also affect the ability of ground-dwelling animals to persist in the urban landscape ; thus, UA systems will need to be connected to other vegetated areas to allow for landscape movement. These studies show that garden structures or management practices that provide food and nesting resources or movement corridors can be important strategies for maintaining vertebrate diversity in cities. Ecosystem services are often a function of biodiversity levels , thus the composition, diversity, and structure of plant and animal communities within and around UA are important to consider for urban ecosystem service conservation. Specifically, biodiversity provides opportunities for ecosystem services that city planners value, including energy efficiency, storm water runoff reduction, air pollution removal, carbon storage and sequestration, and water quality provision . Within agricultural systems, ecosystem services like water storage, pollination, and pest control increase US crop production resilience and protect production values by over $57 billion per year . However, there remains a large knowledge gap around the provisioning of services in UA systems. This is especially concerning given increasing global food demands, climate-related crop failure, and consistent limitations in fresh food access within urban centers .
We posit that UA systems provide a suite of ecosystem services, and that the extent and quality of the services are largely dependent on the biodiversity and vegetative structure of the UA system. Thus the form and management of urban gardens can radically influence service provision. Small garden patches are able to supply structural habitat diversity and carbon storage , while allotment gardens can potentially support ecosystem services such as pollination, seed dispersal, and pest regulation to the broader urban landscape .

In contrast, reductions in biodiversity can cause a reduction in the resilience of urban ecosystems overall . Specifically, we review some of the most studied and important ecosystem services for the urban agricultural system: pollination, pest control, and climate control. As mentioned previously, urban agriculture can support a diverse assemblage of bees and butterflies , and the number of native flowering species can positively impact bee abundance and diversity . This may have large implications for fruit set and crop production given that crops experience higher or more stabilized fruit set in habitats with greater native bee diversity . Additionally, floral cover can positively impact conspecific pollen deposition by attracting a greater number of pollinators into an urban garden . Some studies suggest that pollinator foraging and dispersal needs are best supported by a network of small natural habitat fragments across urban areas . In general, bee foraging distance correlates with body size , and some larger-bodied bees can regularly fly >1 km from their nest to forage on floral patches . Thus, proximity to natural habitat can increase bee abundance, diversity, and pollination success for a wide range of crop species and may similarly impact bee diversity within urban landscapes. Research in rural and exurban habitats suggests that bumble bee nesting densities are positively impacted by the proportion of suburban gardens and wooded habitat and that bees are willing to forage further for high-diversity flowering patches . Furthermore, both nesting density and dispersal are negatively impacted by the amount of impervious cover in a landscape , revealing the potential for urban landscapes to obstruct pollinator foraging and dispersal. Likewise, heavy development that leads to shaded and closed-off garden areas tends to limit local pollinator diversity .
Overall, urban landscapes that maintain diverse natural habitat fragments and minimize impervious cover can promote bee nesting and dispersal. Insights from these studies and others suggest that pollination services may be higher in urban gardens if natural habitat patches and diverse flowering resources are available. Biological control is a method of controlling pest populations through the utilization of other organisms . Bio-control has been used for centuries within agricultural systems and could potentially enable sustainable crop production in cities without reliance on toxic chemical pesticides. This is especially useful in high-density urban areas where the risk of human exposure to toxins is greater . The natural enemy complex responsible for bio-control includes predators, parasitoids, and pathogens that regulate pest populations . Different natural enemy taxa often have specific habitat preferences; therefore, management for bio-control in urban areas requires knowledge of those factors that influence the abundance and richness of natural enemies.

One of the most effective groups of natural enemies is parasitic Hymenoptera, which reduce herbivorous insect damage to urban trees, ornamental landscape plantings, and residential fruit and vegetable gardens . Bennett and Gratton showed that local and landscape scale variables associated with urbanization influence parasitic Hymenoptera abundance and diversity in residential and commercial properties along a rural to urban landscape gradient in Wisconsin. They found that parasitoid abundance was a positive function of flower diversity, and parasitoid diversity decreased as impervious surface increased in the surrounding landscape. This suggests that parasitoids benefit from increased floral resource availability and decreased impervious cover, similar to patterns described for pollinators. Below-ground natural enemies can prey on soil-dwelling stages of insect pests in urban lawns, often reducing the frequency and intensity of pest outbreaks . Yadav et al. tested whether changes in the urban habitat structure of gardens and vacant lots influenced below-ground bio-control services rendered by invertebrate and microbial communities. They showed that ants and microbial communities contributed a majority of the bio-control service, with ants exhibiting significantly higher bio-control activity than microbes, particularly in vacant lots. The high levels of below-ground bio-control activity in vacant lots and urban gardens could serve as a foundation for building sustainable pest management practices for urban agriculture in cities. A number of other natural enemies provide bio-control services in UA landscapes, such as birds, bats, spiders and beetles , but there is still very little research regarding their role in urban agriculture. The use of organic composts to support pest control by encouraging predatory species has shown some success .
More work will be required to understand how urban systems, and especially urban agriculture, affect foraging behaviour in higher trophic level natural enemies . As climate models continue to indicate an increased likelihood of heat waves in urban areas, there has been great interest in the relationship between green infrastructure and mitigation of the urban heat island effect . Two main approaches have been proposed as solutions to reduce the urban heat island effect: maintaining more urban green space and reducing impervious surfaces. Increasing the proportion of green space within the urban matrix can reduce both surface and air temperatures . However, the variety of vegetative infrastructure, management, and plant species within UA systems will vary in cooling potential. Akbari et al. predicted that up to a quarter of the cooling effect of urban trees in US cities is a result of garden/street trees directly cooling adjacent buildings, and this effect is dependent on tree size, species, maturity, and architecture. At the garden level, vegetation can influence the energy loads on individual buildings, but how this impacts air temperatures across the wider urban environment is still unclear . However, considering the potential impact that increased vegetation has toward regulating temperatures, there could be significant implications for energy use and comfort levels in urban communities. Additionally, gardens located in areas unsuitable for buildings, or established as buffer zones along rail corridors and highways, may be helpful in balancing the urban microclimate. Gardens also provide storm attenuation services to the urban matrix. Vegetation, trees especially, intercepts intense precipitation and holds water temporarily within the canopy, thus reducing peak flow and easing demand on storm drains . In German cities, allotment gardens used on green belts to facilitate drainage have been shown to reduce heat and demand for air conditioning .
In contrast, hard paving increases impervious surface, and in Leeds, UK, increased hard paving in residential front gardens has been linked to more frequent and severe local flooding .

Water was used to determine growth rates under a no-nutrient condition

As previously reported by Hauck et al. , Pst DC3000 did not induce strong callose deposition on its Arabidopsis plant host , although callose deposit frequency was significantly higher than in the water control . However, the Pst DC3000 type III secretion system (TTSS) mutant induced 2.5 times more callose deposits than the wild-type Pst DC3000 in Arabidopsis leaves . Altogether, our findings suggest that STm 14028s can induce a weak defense in lettuce leaves, similar to that of Pst DC3000 in Arabidopsis leaves. A major function of the SPI genomic region is to assemble the TTSS apparatus and encode effector proteins that could potentially suppress plant defenses. However, we observed that, unlike in the Arabidopsis-Pst DC3000 pathosystem where the TTSS is involved in suppressing plant immune responses such as callose deposition , the SPI-1 and SPI-2 regions of STm 14028s are not involved in this process in the lettuce system. Growth rates of STm 14028s, Mut3, and Mut9 in AWF and LSLB were determined during the log-phase of bacterial growth. As expected, there was minimal bacterial growth in water , indicating that residual nutrients in the inoculum were not transferred to LSLB or AWF to enhance growth. In an attempt to correlate the ability of the bacterium to survive within the apoplast with the ability to utilize apoplastic nutrients for growth, we included in this analysis Mut3, which contains a deletion of SPI-1 and adjacent genes and shows apoplastic persistence similar to the wild-type STm 14028s .

When grown in LSLB, both Mut3 and Mut9 had significantly lower growth rates than STm 14028s. Mut3, Mut9, and STm 14028s had growth rates of 2.78, 2.18, and 3.53 generations/hour, respectively . When grown in lettuce AWF, Mut3 and STm 14028s had similar growth rates, while the Mut9 growth rate was significantly lower . This finding suggests that STm’s ability to persist in the apoplast may be linked to nutrient acquisition, or to overall bacterial fitness in this niche, dependent on yet-to-be-determined genes and operons deleted in Mut9. The importance of foodborne illness caused by contamination of produce by Salmonella spp. and the prevalence of contamination associated with leafy greens led us to investigate the molecular mechanisms allowing Salmonella spp. to use this alternate host for survival. As apoplastic populations of human pathogenic bacteria in lettuce are a potential risk for foodborne illnesses due to persistence from production to consumption, we directed our focus on bacterial internalization into leaves through stomata and endophytic survival. S. enterica internalization of leaves can occur through the stomatal pore . We were able to identify ten regions in the STm 14028s genome that may directly or indirectly contribute to the bacterium’s ability to open the stomatal pore, facilitating its entry into the apoplast. Although it is not obvious which genes in those regions are specifically responsible for the observed phenotype on the leaf surface, the major metabolic functions of these regions are associated with sensing the environment, bacterial chemotaxis and movement, membrane transporters, and biosynthesis of surface appendages . Previously, these functions have been found to be associated with epiphytic fitness of bacterial phytopathogens . Furthermore, Kroupitski et al. observed that STm SL1344 aggregates near open stomata and uses chemotaxis and motility for internalization through lettuce stomata.
Additionally, darkness prevents STm SL1344 from re-opening the stomatal pore and internalizing into the leaves, possibly due to the lack of chemoattractants leaching through closed stomata .
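As a quick arithmetic aid for the growth rates reported above, the generations/hour values can be converted into doubling times. This is an illustrative sketch that simply restates the LSLB numbers from the text, not analysis code from the study:

```python
# Convert exponential growth rates (generations/hour) to doubling times.
# Rates are the LSLB values quoted in the text for each strain.
rates_lslb = {
    "STm 14028s": 3.53,  # wild type
    "Mut3": 2.78,
    "Mut9": 2.18,
}

for strain, gen_per_hr in rates_lslb.items():
    doubling_min = 60.0 / gen_per_hr  # minutes per generation
    print(f"{strain}: {doubling_min:.1f} min per doubling")
```

By this conversion, the wild type doubles roughly every 17 minutes in LSLB while Mut9 needs about 27.5 minutes, which makes the relative fitness difference easier to picture.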

These findings suggest that close proximity to stomata may be required for Salmonella to induce opening of the pore. Therefore, STm invasion of the apoplast may be a consequence of a combined behavior of the bacterium on the phylloplane that can be modulated by plant-derived cues and, with this study, we have defined key genomic regions involved in this complex process. Not all the genomic regions required for initiation of leaf colonization are essential for continuing bacterial survival as an endophyte . For instance, genes deleted from Mut3 and Mut6 [encoding unspecified membrane proteins, the PhoP/Q two-component system, SopE2 , phage genes, a transcriptional repressor , and some unspecified transporters] do not contribute to endophytic survival. Thus, these regions missing in Mut3/6 are potential targets for disrupting leaf surface colonization, but not endophytic persistence. This observation is not entirely surprising, as the phylloplane and the apoplast are unique environments that pose different challenges for bacterial survival. STm seems to have metabolic plasticity for adaptation to varying conditions in the leaf. For instance, STm SL1344 can shift its metabolism to utilize nutrients available in decaying lettuce and cilantro leaves, and STm 14028s uses distinct metabolic strategies to colonize tomatoes and to infect animals . We also observed that seven regions of the STm 14028s genome have opposite effects on the different phases of colonization. Mut1/2/4/5/7/8/10 seem to lack the ability to promote penetration into the leaf , but they show better fitness than that of the wild-type strain in the apoplast . One hypothesis is that the increased bacterial population titers are due to a lack of energy expenditure for maintaining large genomic segments that are not essential for survival as an endophyte, so that the excess energy can be spent on survival.
However, this indirect effect of the deletion may not be valid for Mut4/10, where only small genomic regions are missing . Alternatively, these regions might encode for proteins that negatively affect bacterial survival in leaves.

This interesting observation is worth future investigation. Intriguingly, we found that genes deleted in Mut9 are important for re-opening the stomatal pore and successful endophytic survival. This deletion includes SPI-2, which functions in the production of the TTSS-2 apparatus, effectors, and a two-component regulatory system of this island , which are important for the virulence of STm in animal systems . The contribution of the TTSS-2 apparatus and effectors to the bacterium’s ability to colonize the phyllosphere has been studied in several laboratories, and it is largely dependent on the plant species analyzed . Nonetheless, so far there is no evidence for the ability of STm to inject TTSS effectors inside plant cells . Furthermore, the STm 14028s ssaV structural mutant, which cannot form the TTSS-2 apparatus , survives in the lettuce cv. Romit 936 to the same extent as the wild-type bacterium after surface inoculation . Our data also support the notion that the TTSS-2 is not involved in STm’s ability to induce or subvert defenses, such as callose deposition in lettuce cv. Salinas . While studies in other plant systems have suggested that the TTSS and encoded effectors may contribute to bacterial survival in the plant environment or in some cases are detrimental for bacterial colonization of plant tissues , it has become evident that the TTSS-2 within the SPI-2 region is not relevant in the STm 14028s-lettuce leaf interaction. It is important to note that SPI-2 is a genomic segment of roughly 40 kb with 42 open reading frames arranged into 17 operons . It is present in all pathogenic serovars and strains of S. enterica, but only partially present in species of a more distant common ancestor, such as S. bongori .
Besides encoding structural and regulatory components of the TTSS-2, SPI-2 also carries genes coding for a tetrathionate reductase complex, a cysteine desulfurase enzyme complex, membrane transport proteins, and murein transpeptidases, as well as genes with still uncharacterized functions. Thus, it is possible that genes and operons other than those associated with the TTSS-2 may have a function in the bacterium's colonization of the lettuce leaf. To date, it has not been demonstrated whether STm 14028s can access and utilize nutrients from the apoplast of intact lettuce leaves. Although nutrients in the apoplast might be limiting, it has been hypothesized that Salmonella may scavenge nutrients to persist in the plant environment and/or adjust its metabolism to synthesize compounds that are not readily available at the colonization site. For instance, a mutant screen analysis indicated that STm 14028s requires genes for biosynthesis of nucleotides, lipopolysaccharide, and amino acids during colonization of tomato fruits. Moreover, plants might secrete antimicrobial compounds into the apoplast as a defense mechanism, imposing a stressful condition on the microbial invader. Therefore, considering that subversion of plant defenses is not a function of the TTSS-2 in the apoplast of lettuce, it is possible that the Mut9 population declines 20-fold over 21 days due to its inability to obtain nutrients from this niche and/or to cope with plant defenses. Although Mut9 shows reduced growth on lettuce leaf AWF, additional experimentation is required to distinguish between these two possibilities. It is tempting to speculate, however, that the tetrathionate reductase gene cluster within SPI-2 or the sulfur mobilization operon deleted in Mut9 might be involved in this process.
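To put the magnitude of the Mut9 decline in more familiar terms, the 20-fold reduction over 21 days reported above can be converted into an equivalent daily decline rate and a log10 reduction. The sketch below assumes a constant exponential decay, which is an illustrative simplification, not something the data demonstrate:

```python
import math

# Figures from the text: the Mut9 population declines ~20-fold over 21 days.
fold_decline = 20.0
days = 21

# Equivalent constant daily decline factor, assuming exponential decay
# (an assumption for illustration only).
daily_factor = fold_decline ** (1.0 / days)          # ~1.153 per day
daily_loss_pct = (1.0 - 1.0 / daily_factor) * 100.0  # ~13.3% of cells lost per day

# Log-reduction, the unit commonly used for bacterial survival data.
log10_reduction = math.log10(fold_decline)           # ~1.30 log10

print(f"daily decline factor: {daily_factor:.3f}")
print(f"approx. daily loss: {daily_loss_pct:.1f}%")
print(f"log10 reduction over {days} days: {log10_reduction:.2f}")
```

Framed this way, the deficit is a modest but sustained disadvantage (roughly a 1.3-log reduction), consistent with impaired nutrient acquisition or stress tolerance rather than acute killing.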

Particular to the ttr operon, TtrAB forms the enzyme complex, TtrC anchors the enzyme to the membrane, whereas TtrS and TtrR are the sensor kinase and DNA-binding response regulator, respectively. The reduction of tetrathionate by this membrane-localized enzyme is part of Salmonella's anaerobic respiration. Intriguingly, the use of tetrathionate as an electron acceptor during propanediol and ethanolamine utilization by the bacterium has been suggested to occur in macerated leaf tissue. A significant number of genes involved in the PDU, EUT, and cobalamin pathways, as well as the ttrC gene, are upregulated in STm SL1344 when co-inoculated with the soft rot pathogen Dickeya dadantii onto cilantro and lettuce leaf cuts. Altogether, these findings suggest that these biochemical pathways may operate in both soft-rot-contaminated and healthy leaves. Considering that the encounter of the plant with a pathogenic bacterium triggers molecular action and reaction in both organisms over time, it is not surprising that multiple regions of the STm 14028s genome may be required for lettuce leaf colonization. For instance, Goudeau et al. reported that 718 genes of the STm SL1344 genome were transcriptionally regulated upon exposure to degrading lettuce cell walls. In any case, further studies using single-gene mutants are still required to identify the specific genes and functions within each MGD mutant that are involved in the interaction between STm 14028s and lettuce cultivar Salinas.

Butenolides are lactone-containing heterocyclic molecules with important biochemical and physiological roles in plant life. Although previously recognized as secondary metabolites, some types of butenolides were recently classified as plant hormones. Strigolactones (SLs) are carotenoid-derived molecules bearing essential butenolide moieties that were originally described as chemical cues promoting seed germination of parasitic Striga species.
It has since become evident that SLs are involved in controlling a wide range of plant developmental processes, including root architecture, establishment of mycorrhiza, stature and shoot branching, seedling growth, senescence, leaf morphology, and cambial activity. SLs are synthesized via sequential cleavage of all-trans-β-carotene by DWARF27 and of the resulting 9-cis-β-carotene by MORE AXILLARY GROWTH3 and 4. The SL precursor carlactone is then transported through the xylem, and biologically active SLs are formed by MAX1 and its homologs and LATERAL BRANCHING OXIDOREDUCTASE. Cumulative evidence supports the idea that the DWARF14 (D14) α/β-fold hydrolase functions as an SL receptor and is required for perception of the SL signal in Petunia, rice, Arabidopsis, and pea. Upon binding, D14 proteins hydrolyze SL through their conserved Ser-His-Asp catalytic triad, followed by thermal destabilization of the proteins. As a consequence, the structural rearrangement of D14 proteins in the presence of SL enables the protein to physically interact with the F-box protein MAX2 and the SMAX1-LIKE family proteins SMXL6, 7, and 8, forming a Skp-Cullin-F-box ubiquitin ligase complex that polyubiquitinates the SMXLs and targets them for degradation by the 26S proteasome. The subsequent signaling events are largely unknown, but tentatively the mechanism is similar to other systems employing targeted protein degradation. In Arabidopsis, two paralogs of AtD14 have been identified. One paralog, KARRIKIN INSENSITIVE2 (KAI2), was identified in a mutant in the Ler background that showed insensitivity to karrikin, a butenolide-type germination stimulant from smoke water. Although both AtD14 and KAI2 signaling pathways converge upon MAX2 and might employ similar mechanisms to transduce the signal, the two proteins regulate separate physiological events.